Article

Development and Evaluation of an Immersive Metaverse-Based Meditation System for Psychological Well-Being Using LLM-Driven Scenario Generation

1 Department of Autonomous Things Intelligence, Dongguk University-Seoul, 30 Pildongro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
2 Department of Computer Science and Artificial Intelligence, Dongguk University-Seoul, 30 Pildongro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
3 NUI/NUX Platform Research Center, Dongguk University-Seoul, 30 Pildongro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
4 Industrial Artificial Intelligence Research Center, Dongguk University-Seoul, 30 Pildongro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
5 Department of Computer Science and Artificial Intelligence, College of Advanced Convergence Engineering, Dongguk University-Seoul, 30 Pildongro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Systems 2025, 13(9), 798; https://doi.org/10.3390/systems13090798
Submission received: 11 August 2025 / Revised: 8 September 2025 / Accepted: 9 September 2025 / Published: 11 September 2025

Abstract

The increasing prevalence of mental health disorders highlights the need for innovative and accessible interventions. Although existing digital meditation applications offer valuable basic guidance, they often lack interactivity, real-time personalized feedback, and dynamic simulation of real-life scenarios necessary for comprehensive experiential training applicable to daily stressors. To address these limitations, this study developed a novel immersive meditation system specifically designed for deployment within a metaverse environment. The system provides mindfulness practice through two distinct modules within the virtual world. The experience-based module delivers AI-driven social interactions within simulated everyday scenarios, with narrative content dynamically generated by large language models (LLMs), followed by guided inner reflection, thereby forming a scenario–experience–reflection cycle. The breathing-focused module provides real-time feedback through a breath-synchronization interface to enhance respiratory awareness. The feasibility and preliminary effects of this metaverse-based system were explored in a two-week, single-group, pre-test/post-test study involving 31 participants. The participants completed a battery of validated psychological questionnaires assessing psychological distress, mindfulness, acceptance, self-compassion, and self-esteem before and after engaging in the intervention. This study provides exploratory evidence supporting the feasibility and potential of immersive metaverse environments and LLM-based scenario generation for structured mental health interventions, providing initial insights into their psychological impact and user experience.

1. Introduction

In recent years, the prevalence of mental health disorders has continued to rise, becoming a major global public health concern. According to a recent large-scale study involving over 150,000 people across 29 countries, approximately 50% of the global population is expected to experience at least one mental illness by the age of 75 [1]. Although pharmacological and psychotherapeutic treatments remain foundational, they are often limited by side effects, adherence challenges, cost, and accessibility barriers [2,3,4]. This highlights the urgent need for scalable, non-pharmacological interventions that empower individuals to develop emotional regulation and stress management skills for everyday life.
In response to these mental health challenges, non-pharmacological interventions, such as mindfulness and meditation, have gained significant attention as effective tools for self-regulation, stress relief, and emotional stabilization [5]. Mindfulness-based practices, in particular, have accumulated substantial empirical support for stress reduction, emotional stabilization, and psychological flexibility [6,7,8,9,10,11,12,13].
However, traditional meditation applications often feature static and non-personalized designs that cannot simulate real-life scenarios or adapt to individual contexts. This limits their effectiveness in helping users apply meditation skills to stressful everyday situations. To overcome these limitations, attention has shifted toward immersive digital technologies that provide multisensory, personalized, and interactive environments. Digital mindfulness interventions, including popular apps such as Calm, Headspace, and Kokkiri, have expanded access to guided meditation and breathing practices. While some now incorporate breath pacing and haptic feedback, most still provide limited interactivity and lack fully immersive, adaptive environments with real-time feedback synchronized to user actions. Opportunities for practicing skills in realistic, dynamic scenarios remain rare.
To address these gaps, this study developed a novel metaverse-based meditation system to advance the capabilities of digital mental health interventions and explore more immersive approaches to mindfulness training. Designed for compatibility with both computer and mobile platforms, this system moves beyond traditional passive guidance by integrating structured mindfulness and breathing training within dynamic virtual environments. The system facilitates active engagement through key interactive components. These include AI-driven simulations of real-life social interactions designed to elicit emotional and cognitive responses. Scenario content is dynamically generated by Large Language Models (LLMs), providing diverse and contextually appropriate narratives, significantly reducing manual scenario design costs, and enabling endless variations. A sophisticated breathing synchronization system offers personalized visual and auditory feedback to enhance focus on breath awareness. Furthermore, the system incorporates dedicated spaces and tools for self-reflection, including journaling features, allowing users to process their experiences and cultivate psychological insights.
This study examines the preliminary psychological effects of the proposed metaverse-based meditation system. A two-week intervention study was conducted with 31 participants to assess the system’s impact on mental health and mindfulness-related constructs. A comprehensive set of internationally standardized psychological scales was administered pre- and post-intervention. These included measures of psychological distress, mindfulness, psychological flexibility, self-compassion, and self-esteem, along with a custom-designed measure evaluating user suitability and satisfaction with the metaverse mindfulness-meditation system; in total, ten questionnaire-based assessment indicators were employed.
This study significantly contributes to the evolving fields of digital mental health interventions and meditation technology. First, we developed a metaverse-based mindfulness meditation system that allows users to practice meditation in realistic virtual environments without being limited by time and space. Second, our system includes an Experience-Based Mindfulness module that automatically generates diverse, contextually appropriate scenarios using an LLM-driven AI system, greatly reducing scenario design cost and time. Third, a Breathing-Focused Meditation module offers users a personalized breathing guide in tranquil virtual spaces, helping them maintain concentration and achieve a deeper state of relaxation. Finally, the feasibility and preliminary impact of the proposed metaverse system on mindfulness skills were evaluated using standardized psychological measurement tools in a two-week, single-group, pre-test/post-test study with 31 participants.
The remainder of this paper is organized as follows: Section 2 discusses existing research on the definition of mindfulness and immersive digital technologies for mindfulness meditation. Section 3 describes the development, architecture, core features, and experimental framework of the metaverse mindfulness meditation system. Section 4 explains the experimental methods, including participant demographics, measurement tools, and data-collection procedures. Section 5 presents data analysis results, discusses the limitations, and outlines future directions.

2. Literature Review

2.1. Definition and Practice of Mindfulness

Mindfulness—broadly defined as paying attention in a purposeful, present, and nonjudgmental manner—originates from contemplative traditions and has gained significant recognition in modern psychology as a means of enhancing mental well-being [14]. Mindfulness practice encompasses diverse techniques and activities aimed at cultivating awareness. Common approaches include formal seated meditation focusing on breath or bodily sensations; body scans to systematically bring awareness to different parts of the body; walking meditation; mindful eating; and open awareness practices that involve observing thoughts, feelings, and external stimuli without attachment or judgment [15]. Another empirically supported form of mindfulness meditation is Loving-Kindness Meditation (LKM), or metta meditation, which aims to cultivate positive emotions such as kindness and compassion toward oneself and others [16]. Studies show that LKM reliably enhances positive emotions and personal resources, leading to improved well-being and reduced depressive symptoms across diverse populations [17,18].
While foundational practices establish attentional control and awareness, applying these skills in real-world social interactions and stressful situations requires targeted training [19]. Successfully navigating psychological distress and promoting well-being often depend on one’s ability to engage mindfully with thoughts, emotions, and external stimuli as they arise, thereby fostering psychological flexibility [20,21]. Beyond individual practice, mindfulness can be applied to interactions with others. This is referred to as interpersonal mindfulness, which involves present-moment awareness, nonjudgmental acceptance, and emotional attunement within relationships [22,23,24,25].
Reflective practices, such as journaling, also function as applied mindfulness, enabling individuals to observe and process internal experiences and thoughts in a structured manner [26,27]. Identifying and externalizing feelings represents a foundational step in cultivating emotional awareness, a key element of mindfulness and a prerequisite for effective emotional regulation [28,29]. By systematically recording emotional responses, individuals can identify patterns in their reactions to different social triggers, thus fostering greater self-understanding and insight [30]. By externalizing and focusing attention on emotions felt within this guided context, users can develop a more nuanced understanding of their internal state. Such structured reflection in a safe environment supports deeper processing of experiences and enhances emotional awareness and regulation [31,32].
Among mindfulness practices, focusing on breath provides a consistent anchor for training attention and serves as an accessible tool for cultivating present-moment awareness. Research has shown that breath-focused meditation enhances the ability to sustain attention, regulates the autonomic nervous system, and contributes to reduced anxiety and improved physiological and emotional stability [33,34,35,36]. Even brief practice of breath-focused techniques can positively influence attention [37,38].
The body scan is another foundational component of mindfulness, designed to direct attention systematically to different areas of the body. This practice helps foster physical ease and present-moment awareness through sensory observation. Evidence shows it is effective in promoting physical relaxation by encouraging mindful observation and the release of muscular tension in different body regions [39]. Engaging in body scan practices is also associated with broader mindfulness benefits, including stress reduction and improved emotional regulation [40,41].
In designing the present intervention, these evidence-based mindfulness techniques—including breath-focused meditation, body scan, scenario-based and interpersonal mindfulness, and structured emotional reflection—were systematically incorporated as key components.

2.2. Digital Technologies for Mindfulness

The increasing demand for accessible mental health resources has led to the widespread development of mindfulness and meditation digital tools. These include several commercial mobile applications, such as Calm [42], Headspace (Headspace Inc., Santa Monica, CA, USA) [43], Kokkiri (Maeum Sueop, Seoul, Republic of Korea) [44], Tide (Guangzhou Moreless Network Technology Co., Ltd., Guangzhou, China) [45], TaoMix2 (MWM, Neuilly-sur-Seine, France) [46], and Medito (Yedi70 Yazilim ve Bilgi Teknolojileri Anonim Sirketi, Istanbul, Turkey) [47], along with research-oriented systems, such as ACTing Mind [48], The Melody of the Mysterious Stones [49], Stairway to Heaven [50], and MindFlourish [51]. These systems, primarily operating on mobile or web platforms, generally focus on providing guided meditations, sleep aids, and basic stress management techniques, prioritizing ease of daily use.
Although existing digital meditation systems are accessible, they have several key limitations, particularly in supporting a comprehensive, experiential approach to mindfulness training. First, most platforms provide basic guided instructions for breathing exercises but lack real-time, personalized feedback synchronized with the user’s actual respiration. Second, most systems offer calm visual and auditory environments or backgrounds; however, these are typically static or passive. They seldom provide interactive or dynamic simulations of real-life scenarios or social interactions, thereby limiting opportunities for users to practice mindfulness in varied and potentially challenging contexts. While some mindfulness and meditation systems, such as ACTing Mind, incorporate cognitive regulation modules, most platforms do not sufficiently integrate methods for developing cognitive awareness or emotional regulation skills, specifically within interactive or experiential simulations. This creates a gap between cognitive understanding and practical, in-moment emotional processing [52]. Furthermore, existing immersive features are often limited to visual or auditory immersion; they frequently lack the spatial interaction or embodied engagement that metaverse environments can offer, hindering the application of mindfulness in more dynamic or interpersonally relevant settings [53,54]. The predominant focus remains on individual, isolated practices, often failing to incorporate interpersonal mindfulness training, which is beneficial to navigating social complexities.
To address these limitations, the potential of immersive technologies, particularly the metaverse, has gained attention in mental healthcare. The metaverse is broadly defined as a persistent, controllable digital space encompassing 3D virtual worlds, augmented reality, and real-time interaction [55,56]. It offers unique opportunities for therapeutic intervention by providing highly controllable, safe, and personalized virtual spaces [57,58,59]. These environments can transcend traditional formats, allowing privacy and reduced stigma [60], real-time monitoring and feedback [61], and embodied experiences through avatars [62]. The metaverse framework facilitates a shift from simple content delivery to interactive experience and individual regulation, activating user motivation and supporting long-term behavioral changes [63]. By leveraging these capabilities, a metaverse-based system can overcome the limitations of existing digital mindfulness tools by providing dynamic and interactive scenarios for practice, real-time physiological feedback, and integrated tools for self-reflection in an immersive environment, as shown in Table 1.

2.3. LLM-Based Scenario Generation

The integration of LLMs for dynamic scenario generation has recently attracted significant research attention, particularly in the context of interactive games and virtual environments. GENEVA demonstrates the use of Generative Pre-trained Transformer (GPT) to automatically produce branching narrative graphs that match designer-specified constraints; this substantially reduces manual authoring efforts for complex storylines [64]. Similarly, NarrativeGenie employs LLMs to transform high-level narrative arcs into structured, partially ordered event sequences, enabling dynamic storytelling that adapts to player interactions in real time [65]. Word2World further expands this paradigm by procedurally generating both narrative and playable 2D game levels directly from textual prompts, showcasing the versatility of LLMs for multifaceted content creation without task-specific fine-tuning [66]. Additionally, PANGeA leverages LLMs within a modular framework for role-playing games, combining a validation system and personality models to generate contextually consistent non-player character (NPC) dialogues and events while maintaining narrative coherence during free-form player input [67]. Peng et al. [68] further demonstrate how integrating GPT-4 as the core NPC dialogue engine in a text-adventure mystery game enables players to interact freely with characters, resulting in emergent narrative nodes and unforeseen story branches that extend well beyond the designer’s original plot structure. Beyond scenario generation, recent research has applied customized LLM techniques to mental health contexts. Rasool et al. (2025) developed an emotion-aware framework that enables large language models to generate more empathetic and contextually relevant responses in psychotherapy chatbots by integrating semantic and emotional cues from previous sessions [69].
Collectively, these studies illustrate that LLM-driven scenario-generation techniques have already proven effective in dynamically expanding game narratives and enriching story complexity and replayability through adaptive, context-aware interactions. We apply LLM-based scenario generation in our proposed metaverse-based mindfulness meditation system to produce diverse, contextually appropriate scenarios. This methodology may provide valuable insights into the psychological effects and user experiences associated with advanced interactive digital platforms.

2.4. Summary

Mindfulness is a well-established practice involving various techniques that enhance mental well-being. Digital technologies have become a popular medium for delivering mindfulness content, offering accessibility and convenience. However, while existing digital meditation systems provide valuable basic guidance, they often lack the interactivity, real-time personalized feedback, and dynamic scenario simulations necessary for a comprehensive and experiential approach to mindfulness training. They primarily focus on isolated individual practices and fall short in facilitating the application of mindfulness skills to complex real-world situations and interpersonal dynamics within an integrated system. To address this gap, we present a novel metaverse-based system designed to provide a more comprehensive, interactive, and immersive platform for mindfulness training by integrating foundational breath-focused practices with dynamic experiential learning modules in a simulated environment. Our immersive metaverse-based meditation system leverages the unique capacities of the metaverse to offer a potentially richer and more engaging approach for cultivating mindfulness, with the potential to enhance mental health.

3. The Proposed Metaverse Mindfulness Meditation System

Our proposed system provides a virtual environment meticulously designed to facilitate mindfulness meditation practices through structured interactive modules. The system offers two distinct pathways for users: “experience-based mindfulness meditation” and “breathing-focused meditation”. An overview of the system is illustrated in Figure 1.
In “experience-based mindfulness meditation”, the user is prompted to select one of five distinct virtual environments. Each environment is designed to simulate common social situations using an experience-based module. Within the selected environment, users interact with AI-controlled NPCs whose behaviors and dialogues are dynamically generated by the AI control system. An LLM generates scenarios, and a management module loads scenario data to create realistic interpersonal scenarios. Then, the user is transported to the “mirror space” for inner reflection. Thereafter, the user is automatically transported back to the space they chose at the beginning.
In “breathing-focused meditation”, the user chooses from five different sets of environments. These spaces are intentionally designed to be serene and minimize distractions, containing a specialized breathing-focused module. The user is then guided through a structured protocol that includes preparatory relaxation, a guided breathing phase, and the main breathing exercise. This mode uses a specialized breathing synchronization system that requires user input via on-screen buttons synchronized with their respiration. Real-time visual feedback on breath stability is provided through a graph, and quantitative performance metrics are presented upon completion.
Figure 2 illustrates the overall architecture of the developed metaverse-based mindfulness meditation system, which is composed of static and dynamic components, a system manager, and two main operational modules, along with a performance evaluation. Static components include space design, NPC 3D models, and visual assets, providing the foundational environment and characters for user immersion. The dynamic components enable real-time interactivity and adaptability, including scenario generation powered by LLM, AI NPC control for realistic behavior, and a breathing synchronization feedback system. The functional core consists of two distinct modules: Experience-based Mindfulness and Breathing-focused Meditation. The former emphasizes realistic social engagement and emotional self-awareness through integrated reflective interfaces and interactive scenarios powered by LLM-generated narratives. The latter emphasizes physiological relaxation and breathing control and utilizes synchronized visual feedback and structured breathing protocols to foster deep respiratory awareness. Each module integrates specific performance metrics—emotional tracking in the experience-based module, and stability and concentration scores in the breathing-focused module—to objectively measure the progression and effectiveness of mindfulness skills.
This architecture ensures that the system operates seamlessly to support immersive mindfulness practice by integrating static environmental assets with dynamic AI-driven content. The following section provides a detailed description of the two core modules.

3.1. Experience-Based Mindfulness Meditation Module

The experience-based mindfulness meditation module is an essential component of the intervention. To provide users with a safe yet realistic environment for practicing applied mindfulness skills, this module uses immersive virtual settings that simulate common social and environmental scenarios. Figure 3 illustrates the interactive mechanism established between the user and the experience-based mindfulness meditation space within the metaverse meditation system. The system guides users into immersive scenarios designed for experiential mindfulness training, allowing them to dynamically engage in virtual environments and AI-driven interactions. Thereafter, reflective practices are facilitated, prompting the user to engage in structured introspection and emotional processing.
For experience-based mindfulness meditation, five distinct virtual environments serve as the primary locations for immersion induction and LLM-based AI scenario experiences: home, interview room, subway, restaurant, and office, as shown in Figure 4.
  • Home: The home space replicates a typical family scene, with the living room and kitchen as the main activity areas. Scenarios generated for this space are centered around the theme of “family”, involving interactions between seven distinct familial roles, each defined with unique personality traits. To further enhance realism, the “emotion manager” system controls the facial expressions of NPCs to match the emotional context of family conversations.
  • Interview Room: The space was designed to replicate the atmosphere of a real-world interview waiting area, simulating the tension and anxiety that would occur in an actual interview. The generated dialogues often reflect common pre-interview anxieties, such as nervousness, self-reminders about preparation, concerns about the interviewers, and physical stress responses.
  • Subway: This environment recreates a public transit setting, specifically modeled after Korea’s Bulgwang Station, and features elements resembling the Line 3 train. To increase immersion, a realistic background audio simulating the ambient noise of a busy subway environment is included. The space is populated with several NPCs using a multi-stage crowd pathfinding system combined with collision avoidance and environmental perception algorithms. This allows users to experience rich, varied, and unpredictable interactions in public spaces.
  • Restaurant: Drawing inspiration from typical Korean dining establishments, this space simulates a busy restaurant. The space features dining tables, refrigerators, air conditioners, kiosks, and menu boards. The seating area is structured around 10 tables, with background audio of lively conversations enhancing the restaurant’s ambiance. AI-generated scenes provide independent conversations for different table groups, simulating real concurrent conversations between them.
  • Office: This space simulates a high-pressure office environment, featuring cluttered desks, work displays, scattered documents, and standard furnishings, such as bookshelves, a whiteboard, a clock, and a calendar. Unlike explicitly serene environments designed for relaxation, office spaces are intended to evoke feelings or thoughts associated with work, deadlines, or clutter.
The experience-based mindfulness meditation module has three main functions: immersion induction, reflective emotional awareness, and inner reflection.

3.1.1. Immersion Induction

The immersion induction function guides users to select one of five distinct virtual environments, each designed to replicate everyday social situations using an experience-based mindfulness module. Within the selected environment, users interact with AI-controlled NPCs, whose behaviors and dialogues are dynamically generated by the integrated AI control system. Scenario narratives are produced by an LLM, whereas a dedicated management module loads and executes scenarios to ensure realistic and contextually appropriate interpersonal interactions. Through this process, users can immerse themselves in various LLM-based AI scenarios, fostering emotional awareness by engaging with and reflecting on their emotional responses, as shown in Figure 5.
The immersion induction function utilizes AI-controlled NPCs to create dynamic and interactive social environments. To populate the diverse scenarios inherent in interpersonal interactions, a library of 48 distinct NPC models was developed. These models represent a range of demographics, including older adults, children, students, and adult males and females, ensuring relevance across different map contexts. Character animation leverages assets from Mixamo [70], supplemented by custom-created animations, to achieve lifelike movements and expressions. Each NPC model possesses a repertoire of approximately 27 body motions and 4 facial animations.
An integrated space-specific AI control system manages these NPCs’ behavior and interactions to generate immersive and realistic simulations. This system differentiates between scenarios requiring structured social interactions driven by generated dialogues and environments necessitating sophisticated crowd simulations to manage large numbers of moving agents.
For environments designed to simulate structured social situations, such as the home, restaurant, and interview room, the core of the dynamic content lies in the use of an LLM, specifically GPT-4 (OpenAI, San Francisco, CA, USA, 2023). Recent evaluations have shown that GPT-4 delivers stronger performance in executing complex instructions, managing multilingual and cultural nuances, and generating contextually appropriate and emotionally sensitive content across varied scenarios [71]. To produce the narrative content and dialogue that underpin the interactions within these experiential scenarios, the LLM is guided by carefully crafted prompt engineering. These prompts are meticulously designed to elicit responses specific to the chosen virtual environment and its intended social or situational dynamics, incorporating elements such as scenario type, character roles, relationship dynamics, and desired conversational themes to ensure relevant and contextually appropriate output. The raw text output from the LLM is then processed using a dedicated parsing tool. This tool is essential for converting the unstructured LLM response into a standardized, structured data format, typically JSON, which includes key-value pairs detailing the scenario sequence, character identities, positions, actions, emotions, and corresponding dialogue lines for the AI control system.
Figure 6 illustrates an example of how the scenario content was generated using LLM. A structured prompt containing key scenario elements, such as scenario ID, character name, position, action, emotion, and dialogue, is provided as input. Based on this input, the LLM generates a coherent output that specifies the detailed attributes for each element, producing contextually appropriate character behaviors and dialogues.
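To make this pipeline concrete, the sketch below shows how a raw LLM response of this kind might be validated and converted into structured scenario records. The JSON schema, field names, and function are illustrative assumptions based on the elements listed above, not the authors' implementation.

```python
import json

# Required fields, mirroring the scenario elements described above (assumed schema).
REQUIRED_KEYS = {"scenario_id", "character", "position", "action", "emotion", "dialogue"}

def parse_scenario(raw_llm_output: str) -> list[dict]:
    """Convert the LLM's raw text response into validated scenario records."""
    events = json.loads(raw_llm_output)  # expect a JSON array of scenario events
    for event in events:
        missing = REQUIRED_KEYS - event.keys()
        if missing:  # reject malformed entries before they reach the AI control system
            raise ValueError(f"scenario event missing keys: {missing}")
    return events

# Toy response with a single event (hypothetical content).
raw = '''[
  {"scenario_id": "home_01", "character": "grandmother", "position": "kitchen",
   "action": "cooking", "emotion": "joy",
   "dialogue": "Dinner is almost ready. Come sit down."}
]'''
for event in parse_scenario(raw):
    print(event["character"], "->", event["dialogue"])
```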
Unlike scenario-driven interactions, environments designed to depict large numbers of individuals moving simultaneously, such as the subway, employ sophisticated crowd simulation systems. This system is essential for managing the complex and dynamic movements of numerous NPC agents navigating a shared space and extends significantly beyond basic pathfinding algorithms. Instead of simply calculating the shortest route for individual agents, the crowd simulation system is built through a multi-stage path-setting process specific to the environment and focuses on managing the collective flow and interactions of many agents concurrently. It incorporates advanced algorithms for collision avoidance between characters, terrain recognition, and obstacle avoidance, enabling NPCs to move realistically within a crowded and constrained environment, such as a subway station, as shown in Figure 7. This ensures dynamic, realistic, and credible movement patterns for large groups of NPCs, contributing significantly to the immersive experience of a busy public transit environment by simulating the presence and movement of many individuals.
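The paper does not give the crowd algorithms in detail, but their core behaviors, goal seeking combined with collision avoidance, can be pictured with a standard two-force steering model. The following is a simplified, hypothetical sketch rather than the system's actual multi-stage implementation.

```python
import math

def steer(pos, waypoint, neighbors, radius=1.0, w_seek=1.0, w_sep=2.0):
    """Combine a waypoint-seeking force with a separation force for one agent."""
    # Seek: unit vector toward the current waypoint.
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    seek = (dx / dist, dy / dist)
    # Separation: push away from any neighbor closer than `radius`.
    sep_x = sep_y = 0.0
    for nx, ny in neighbors:
        ox, oy = pos[0] - nx, pos[1] - ny
        d = math.hypot(ox, oy)
        if 0 < d < radius:
            sep_x += ox / (d * d)  # inverse-square: stronger push when closer
            sep_y += oy / (d * d)
    return (w_seek * seek[0] + w_sep * sep_x,
            w_seek * seek[1] + w_sep * sep_y)

# One steering step for an agent at the origin heading to (5, 0) with a nearby neighbor.
print(steer((0.0, 0.0), (5.0, 0.0), [(0.3, 0.1)]))
```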

3.1.2. Reflective Emotion Awareness

As users gain experience through LLM-based scenario interactions, they must recognize and become aware of the emotions tied to those experiences. This guided observation of internal states can deepen emotional awareness and cultivate insight into one's reactions, thereby complementing in situ practice and reinforcing the development of emotional regulation skills. A dedicated reflective emotion-awareness Graphical User Interface (GUI) was implemented for this purpose, allowing users to articulate their subjective experiences easily.
This interface presents a set of predefined emotional emoticons representing various feelings such as excitement, sadness, anxiety, anger, and distress, as depicted in Figure 8. Users can select up to two distinct emotional categories that best represent their current internal state. Each selected emotion can also be rated for its perceived intensity on a three-point scale, ranging from 1 to 3, where a rating of 3 indicates the highest intensity for that particular emotion. Complementing the icon selection and intensity rating, a text input area allows users to enter a brief description and further elaborate on their feelings or any insights gained from the preceding interaction. This combination of selecting specific emotional categories, quantifying their intensity, and enabling free-text descriptions was designed to facilitate a more precise and nuanced capture of the user's immediate internal state, serving as a structured prompt for initial self-reflection on the emotional impact of the simulated scenario.
This reflection process occurs twice within each session: once immediately before being teleported to the mirror space and again upon returning to the primary mindfulness map. This dual-recording approach aims to capture a user’s emotional state before and after a dedicated introspective period, potentially highlighting shifts in awareness or emotional regulation.
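A minimal data model for one reflection entry could look as follows. The field names and validation rules simply mirror the GUI constraints described above (at most two emotions, intensity rated 1 to 3, optional free text, recorded pre- and post-mirror) and are assumptions, not the system's source code.

```python
from dataclasses import dataclass

# Emotion categories offered by the GUI, per the description above.
EMOTIONS = {"excitement", "sadness", "anxiety", "anger", "distress"}

@dataclass
class EmotionEntry:
    phase: str                # "pre_mirror" or "post_mirror" (dual recording)
    emotions: dict[str, int]  # one or two emotions, each with intensity 1-3
    note: str = ""            # optional free-text elaboration

    def __post_init__(self):
        if not 1 <= len(self.emotions) <= 2:
            raise ValueError("select one or two emotions")
        for name, level in self.emotions.items():
            if name not in EMOTIONS or not 1 <= level <= 3:
                raise ValueError(f"invalid emotion or intensity: {name}={level}")

entry = EmotionEntry("pre_mirror", {"anxiety": 3, "sadness": 1},
                     "Felt tense during the interview scenario.")
print(entry)
```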

3.1.3. Inner Reflection

After a 3-min immersion induction experience in a dynamic social scene, users are transported to a dedicated virtual environment known as the “mirror space”. The mirror space takes the form of a tranquil wooden room or cabin. The central feature is a large, prominently displayed, ornate mirror. The overall ambiance, often enhanced by soft lighting elements such as virtual candles and hanging lamps, is curated to foster a sense of privacy and calm, which is conducive to reflection. Upon automatic teleportation from the primary interpersonal mindfulness map, the users find themselves in the focused environment. The visual of the mirror space environment is depicted in Figure 9.
Upon entry, the system retrieves the user’s previously recorded emotional state—entered during the preceding step using the reflective emotion awareness GUI—along with standard loving-kindness meditation instructions and provides these as input to GPT-4. GPT-4 then generates personalized guidance scripts, which are delivered through synthesized voice narration and corresponding visual cues displayed in the mirror. In this focused environment, users engage in a guided reflection process, prompted to settle their minds, cultivate emotional calmness, and consciously process their experiences.
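Conceptually, this step is prompt assembly: the recorded emotion selections are merged with a fixed loving-kindness instruction block before the GPT-4 call. The template below is an illustrative assumption of how such a prompt might be composed, not the system's actual wording.

```python
# Generic loving-kindness meditation (LKM) instruction block (hypothetical wording).
LKM_INSTRUCTIONS = (
    "Guide a short loving-kindness meditation: help the user settle the mind, "
    "acknowledge the reported feelings without judgment, and extend kindness "
    "toward themselves and others."
)

def build_guidance_prompt(emotions: dict[str, int]) -> str:
    """Merge the user's recorded emotional state with the standard LKM instructions."""
    reported = ", ".join(f"{name} (intensity {level}/3)"
                         for name, level in emotions.items())
    return (f"The user has just reported feeling: {reported}.\n"
            f"{LKM_INSTRUCTIONS}\n"
            "Return a calm, second-person script of about 150 words "
            "suitable for voice narration and on-mirror display.")

print(build_guidance_prompt({"anxiety": 3, "sadness": 1}))
```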
At present, LLM generation is based solely on the user’s selected emotion, and the generated guidance scripts are evaluated not only for emotional recognition, but also through topic analysis and word cloud visualization to further assess content relevance and thematic alignment (see Appendix A).

3.2. Breathing-Focused Meditation Module

This section describes a specialized system designed to facilitate core breathing exercises within a breathing-focused meditation module. In contrast to the experience-based mode, these spaces intentionally have no NPCs or interactive social elements, with the intention of providing a quiet environment in which users can focus on breathing.
Figure 10 illustrates the structure of the user and the breathing-focused meditation space within the metaverse meditation system. In this module, the user is immersed in a serene virtual environment, specifically designed to facilitate focused respiratory mindfulness. Meditation practice involves real-time synchronization of a user’s breathing pattern with visual and auditory feedback mechanisms, effectively reinforcing breath awareness and stability. The systematic feedback loop provided by the system enables users to continuously monitor and adjust their respiratory rhythms, enhance concentration, and promote deep relaxation. This interactive process underscores the effectiveness of breath-focused mindfulness training in immersive digital environments.
Following the initial setup steps, which include a body scan and seat selection, users engage in the breathing task through a “start” button on the screen, initiating synchronized breath input. The core of this synchronization is a purpose-built interface that delivers real-time feedback aligned with the user’s breathing rhythm. Pressing the button represents inhalation, while releasing it signifies exhalation. This direct interaction enables the system to continuously monitor the breathing process in real time. The breathing exercise begins with a guided phase, during which animated visuals of lung motion assist users in associating their bodily breathing patterns with the interface-based control. This stage also establishes a personalized baseline for the average breath duration. In the subsequent main breathing phase, a dynamic graph visualizes each breath relative to the calibrated baseline, making subtle fluctuations more perceptible. To support sustained attention, the module includes a simple attention-check mechanism that requires periodic user inputs during the task and serves as an indicator of engagement. Upon completion of the main breathing phase, the system presents quantitative metrics that summarize the user’s breathing stability and attentional concentration. By integrating a distraction-minimized immersive setting, interactive synchronization input, real-time visual feedback, and performance evaluation, the module directly enhances users’ focus on their breathing and supports the development of consistent, regulated respiratory patterns. The architecture of this module is shown in Figure 11.
The breathing-focused meditation module offers five unique virtual environments—beach, fire camp, meeting room, forest, and bedroom—each designed to support focused respiratory practice, as illustrated in Figure 12.
  • Beach: The beach space is meticulously designed to evoke tranquility through the calming visuals and sounds of the ocean. Drawing inspiration from Korean coastal landscapes, including beaches and islands, it features realistically rendered elements, such as sand, shorelines, mountains, and seaweed. The environment dynamically shifts between a clear morning setting, offering cool shade under palm trees beneath a bright sky, and a serene afternoon ambiance characterized by the warm glow of sunset.
  • Forest: This space immerses the user in a tranquil natural environment modeled after various forest trails. It features a path flanked by a lush arrangement of trees and flowers, creating a sense of peaceful enclosure. To enhance realism and sensory immersion, forest spaces feature gently swaying plants, falling leaves, and birds, creating a vivid, layered environment that fosters relaxation and focused breathing through a serene connection with nature.
  • Fire Camp: The fire camp space recreates the relaxing experience of watching fire under a starry night sky. It has two areas with different feelings: a natural camp with log chairs to feel close to nature and a modern camp with an RV, camping chairs, and a grill to create a brighter and more comfortable camping experience.
  • Meeting Room: Inspired by real-world counseling rooms, this space provides a quiet, formal, and ordered environment conducive to focused meditation or reflective practice. Designed with large windows offering outdoor views, it incorporates a sense of openness while maintaining a structured setting. The aesthetics are intentionally professional and minimalist to minimize distractions.
  • Bedroom: Modeled closely after the intimate environment of an actual bedroom, it includes common personal items such as computers, beds, and dolls. The deliberate inclusion of familiar objects aims to evoke feelings of comfort, safety, and personal sanctuary. By leveraging the user’s potential association of their bedroom with rest and privacy, this space facilitates a sense of ease, making it easier to relax and concentrate on the meditative process of observing their breath.

3.2.1. Body Scan

Prior to engaging in the core breathing exercises, users are guided through a preparatory body scan practice. This helps users shift their attention from external distractions or racing thoughts toward their internal physical experience, thereby preparing the mind and body for the more focused attention required in the subsequent breathing meditation practice.
In the developed system, this is facilitated by guided audio narration, which directs the user’s focus sequentially to various body parts, encouraging them to notice any sensation present, such as tension, warmth, or tingling, without judgment. Visual guidance, in the form of an NPC model demonstrating the process, can also be incorporated. The primary aim of the body scan is to cultivate somatic awareness and enhance the user’s connection with their physical body and the sensations within it, as depicted in Figure 13.

3.2.2. Breathing Guide

A specialized system provides real-time feedback that is meticulously synchronized with the user’s breathing pattern to enhance focus and facilitate respiratory control. User input is captured via a dedicated on-screen “push” button, where pressing and holding the button corresponds to inhalation, and releasing it signifies exhalation. This direct physical interaction anchors the user’s attention to the breathing cycle.
The breathing exercise commences with a breathing-guided phase comprising the first 15 breath cycles. During this preparatory period, a visual aid in the form of an animated representation of the lungs is displayed at the bottom-left corner of the user interface. As the user interacts with the “push” button for inhalation and exhalation, the virtual lungs visually expand and contract, providing immediate feedback that connects the physical act of breathing to the interface control. Concurrently, the system records the duration of each breath cycle during the initial 15 breaths to calculate the user’s personalized average breath duration ($T_{avg}$). This calculated value establishes a crucial baseline for the subsequent main exercise. Following the calibration phase, the lung visual aid is removed to encourage users to shift their focus from external visuals to the internal sensations of their breath.
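The calibration logic reduces to a few lines of arithmetic. The sketch below, with illustrative names and timestamps in seconds, derives breath-cycle durations from successive inhalation onsets (button presses) and averages the first 15 cycles to obtain the personalized baseline; it is a minimal reconstruction, not the system's actual code.

```python
GUIDE_CYCLES = 15  # calibration breaths used to establish the baseline

def cycle_durations(press_times):
    """One breath cycle spans an inhalation onset to the next (press-to-press)."""
    return [b - a for a, b in zip(press_times, press_times[1:])]

def baseline_duration(durations):
    """T_avg: mean duration of the first GUIDE_CYCLES calibration breaths."""
    guide = durations[:GUIDE_CYCLES]
    return sum(guide) / len(guide)

# Toy data: four inhalation onsets yield three complete cycles.
presses = [0.0, 4.1, 8.0, 12.2]
print(round(baseline_duration(cycle_durations(presses)), 2))  # 4.07
```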

3.2.3. Breathing Synchronization

The breathing synchronization phase proceeds for 50 breath cycles, building upon the established baseline. The average breath duration calculated from the calibration phase is visualized as a horizontal line on a graph located in the upper-right corner of the user interface. For each subsequent breath during the main phase, the system plots the duration as a data point on the graph. To provide intuitive, at-a-glance feedback on respiratory stability relative to the baseline, the plotted data points are color-coded. Breaths significantly slower than the baseline ($T_i > 2.0 \times T_{avg}$) are represented in blue, breaths significantly faster ($T_i < 0.5 \times T_{avg}$) are depicted in red, and breaths falling within the stable range ($0.5 \times T_{avg} \le T_i \le 2.0 \times T_{avg}$) are shown in green, as depicted in Figure 14. This visual mechanism allows users to perceive variations in their breathing patterns instantaneously, enabling conscious adjustments toward greater respiratory consistency throughout the session.
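In code, the color-coding rule is a three-way threshold comparison against the baseline; a minimal sketch using the thresholds stated above:

```python
def classify_breath(t_i: float, t_avg: float) -> str:
    """Map one breath duration to the feedback color defined in the text."""
    if t_i > 2.0 * t_avg:
        return "blue"   # significantly slower than the baseline
    if t_i < 0.5 * t_avg:
        return "red"    # significantly faster than the baseline
    return "green"      # within the stable range

# With a 4 s baseline: 9.5 s is slow, 1.2 s is fast, 4.3 s is stable.
print([classify_breath(t, 4.0) for t in (9.5, 1.2, 4.3)])  # ['blue', 'red', 'green']
```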
To gauge the user’s level of sustained focus during the 50-breath exercise, a simple attention-check mechanism is also implemented. Every 10 breaths, the user is required to press a separate “10 touch” button located on the right side of the interface, serving as a behavioral measure of task engagement. The stepwise progression and visual layout of the breathing-focused meditation module are illustrated in Figure 15.
Following the completion of the main breathing phase (50 breaths), the system provides users with quantitative scores that evaluate two key aspects of their meditative performance: breathing stability and concentration. As shown in Figure 16, the resulting scores are prominently displayed to the user.
The breathing stability score quantifies the consistency of the user’s breath durations throughout the 50 main breath cycles relative to the established baseline average ($T_{avg}$). It is calculated by determining the average absolute deviation of each main breath duration from the baseline average, normalizing this average deviation by the baseline duration itself, and subtracting the normalized value from 1, typically expressed as a percentage.
$$S_{\mathrm{stability}} = \left( 1 - \frac{1}{N_{\mathrm{main}}} \sum_{i=1}^{N_{\mathrm{main}}} \frac{\left| M_i - T_{\mathrm{avg}} \right|}{T_{\mathrm{avg}}} \right) \times 100\%$$
The concentration score assesses the user’s ability to maintain focused attention on the task, operationally defined by accuracy in acknowledging every tenth breath cycle. The users are instructed to press the designated “10 touch” button upon completion of the 10th, 20th, 30th, 40th, and 50th breath cycles. The score is calculated based on the average absolute deviation between the actual number of breaths at which the button is pressed and the target number of breaths (multiples of 10), normalized by the interval length (10) to provide a measure of attentional adherence.
$$S_{\mathrm{concentration}} = \left( 1 - \frac{1}{N_{\mathrm{intervals}} \times \mathrm{Interval_{length}}} \sum_{j=1}^{N_{\mathrm{intervals}}} \left| \mathrm{Press}_j - \mathrm{Target}_j \right| \right) \times 100\%$$
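Both scores are direct arithmetic over the recorded durations and press counts. The sketch below transcribes the two formulas; variable names are illustrative, and because the paper does not specify how scores below zero are handled (possible when deviations are large), no clipping is applied.

```python
def stability_score(durations, t_avg):
    """S_stability: 1 minus the mean absolute deviation from T_avg, normalized by T_avg."""
    n = len(durations)
    mean_dev = sum(abs(m - t_avg) for m in durations) / n
    return (1 - mean_dev / t_avg) * 100

def concentration_score(presses, targets=(10, 20, 30, 40, 50), interval=10):
    """S_concentration: 1 minus the mean press error normalized by the interval length."""
    n = len(targets)
    mean_err = sum(abs(p - t) for p, t in zip(presses, targets)) / (n * interval)
    return (1 - mean_err) * 100

print(round(stability_score([4.2, 3.8, 5.0, 4.0], 4.0), 1))  # 91.2
print(concentration_score([10, 21, 30, 39, 50]))             # 96.0
```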

4. Experiments

4.1. Experimental Design

This study used a quasi-experimental, single-group pre-test/post-test design to explore the preliminary effects and feasibility of a metaverse-based meditation intervention system. The primary objective was to conduct an exploratory evaluation of the system’s potential impact on participants’ psychological well-being and mindfulness skills. Single-group pre–post designs, including pilot feasibility trials with fewer than 50 participants and limited intervention sessions, are well documented in the exploratory literature and may provide preliminary insights into feasibility, acceptability, and efficacy before large-scale trials [72,73,74].
In this study, participants were asked to use the meditation system 8 times (4 times a week) over a 2-week period, with each session lasting approximately 15 min. Baseline psychological measures (pre-test) were collected using the selected questionnaires before participants started the first intervention session. At the end of the two-week intervention period, that is, after the eighth and final session, participants completed the same set of questionnaires again (post-test) to assess changes from their initial baseline levels.
Thirty-one (n = 31) participants were recruited using a random recruitment method. Eligibility was determined based on prespecified inclusion and exclusion criteria. All participants provided informed consent prior to participation. Participants confirmed their consent by checking a required consent box on the online survey form, indicating that they had read and understood the study objectives, procedures, potential risks, and benefits, and were aware of their right to withdraw from the study at any time.

4.2. Evaluation Metrics

This section outlines the measures and instruments used to evaluate the psychological impact and user experience of the metaverse meditation interventions. A comprehensive suite of psychometric scales and custom assessments was administered to capture changes in the participants’ mental states and system suitability.

4.2.1. Questionnaire Survey

To comprehensively evaluate the multifaceted impact of the metaverse-based mindfulness meditation intervention, a suite of validated standardized psychometric instruments, along with one custom-developed measure, was administered both before and after the intervention. These instruments were carefully selected to capture changes across key domains relevant to the effectiveness of the intervention, including psychological distress, mindfulness-related capacities, self-perception, and the overall user experience with the system.
Psychological distress was assessed using several widely recognized scales. Depressive symptoms were measured using the Patient Health Questionnaire-9 (PHQ-9) [75] and the Beck Depression Inventory (BDI) [76]. Anxiety symptoms were evaluated using the Generalized Anxiety Disorder-7 (GAD-7) [77] and the Beck Anxiety Inventory (BAI) [78]. Perceived stress levels over the past month were assessed using the Perceived Stress Scale (PSS) [79]. For all distress measures, lower post-intervention scores were expected to indicate improvements in mental well-being.
Mindfulness was assessed multidimensionally using the Five-Facet Mindfulness Questionnaire (FFMQ-15) [80], which evaluates five core components: observing, describing, acting with awareness, nonjudging, and nonreactivity. Psychological flexibility, conceptualized as the ability to remain present and open to experiences, was measured using the Acceptance and Action Questionnaire II (AAQ-II) [81]. Self-compassion was assessed using the Self-Compassion Scale—Short Form (SCS-SF) [82], and self-esteem was evaluated using the Rosenberg Self-Esteem Scale (RSES) [83]. For the FFMQ-15, SCS-SF, and RSES, higher scores represent improvement, whereas for the AAQ-II, lower scores indicate a favorable outcome.
In addition to standardized clinical and psychological assessments, a project-specific instrument titled the Metaverse Contents Suitability Assessment (MCSA) was developed to evaluate the overall usability and appropriateness of the intervention. Unlike validated psychometric tools, the MCSA was designed to capture direct user feedback on satisfaction with the system design, usability, and technical performance.
All validated instruments were administered in Korean. Scoring followed established standard procedures described in the literature. All questionnaires were scored identically in Korean and English, except for the RSES. The Korean version of the RSES [84] is administered on a 5-point scale. In the current experiment, Cronbach’s alpha for the RSES was 0.798.
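For reference, Cronbach's alpha is computed from a respondents-by-items score matrix using the standard formula; the sketch below applies it to synthetic data with the study's dimensions (31 respondents, 10 RSES items on the 5-point Korean scale) and is purely illustrative.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Standard formula: alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic responses: a shared trait component plus item-level noise.
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(31, 1))
items = np.clip(trait + rng.integers(-1, 2, size=(31, 10)), 1, 5).astype(float)
print(round(cronbach_alpha(items), 3))
```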
Appendix B presents the full set of survey instruments used, including item-level questions and response formats for each questionnaire. The scoring procedures for all administered questionnaires are detailed in Table 2.

4.2.2. Data Analysis Strategy

Quantitative data derived from the pre- and post-intervention questionnaire assessments were statistically analyzed using IBM SPSS Statistics (version 29.0.2.0). The analysis was structured to describe the characteristics of the sample and data distributions, evaluate psychometric changes across standardized psychological measures, and explore preliminary hypotheses regarding the potential effects of the metaverse-based mindfulness meditation intervention.
To evaluate pre- and post-intervention changes, the normality of the difference scores was first assessed using the Shapiro–Wilk test [85] and the Kolmogorov–Smirnov test [86]. In cases where both tests indicated non-significance (p > 0.05), the variable was considered to follow a normal distribution, and a parametric paired-sample t-test was applied [87]. When either test indicated a violation of normality (p < 0.05), the non-parametric Wilcoxon signed-rank test was used [88]. In cases where the two tests yielded conflicting results, the decision was based on the outcome of the Shapiro–Wilk test, given its higher statistical power for small to moderate sample sizes [89].
Correspondingly, two types of effect-size measures were reported. For variables analyzed using t-tests, Cohen’s d was calculated to represent the standardized mean difference between the pre- and post-intervention scores. For variables analyzed with the Wilcoxon test, Cohen’s r was computed as an indicator of non-parametric effect size, derived from the test statistic (Z) divided by the square root of the number of observations ($\sqrt{N}$). While both metrics quantify effect magnitude, d reflects the standardized mean difference under parametric assumptions, whereas r reflects the rank-based correlation strength under non-parametric conditions. This approach enabled an exploratory estimation of potential intervention effects while accounting for the distributional properties of each outcome variable. All inferential analyses adopted a significance threshold of p < 0.05.
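The test-selection rule and both effect sizes can be expressed compactly with SciPy. The sketch below shows only the Shapiro-Wilk branch point (the study additionally ran Kolmogorov-Smirnov and let Shapiro-Wilk break ties) and assumes SciPy 1.10 or later for the Wilcoxon z-statistic; the data are toy values, not the study's.

```python
import numpy as np
from scipy import stats

def pre_post_test(pre, post, alpha=0.05):
    """Choose paired t-test (Cohen's d) or Wilcoxon signed-rank (Cohen's r) by normality."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    diff = post - pre
    if stats.shapiro(diff).pvalue > alpha:
        res = stats.ttest_rel(post, pre)                  # parametric branch
        d = diff.mean() / diff.std(ddof=1)                # Cohen's d
        return "paired t", res.statistic, res.pvalue, d
    res = stats.wilcoxon(post, pre, method="approx")      # non-parametric branch
    r = abs(res.zstatistic) / np.sqrt(len(diff))          # Cohen's r = |Z| / sqrt(N)
    return "wilcoxon", res.zstatistic, res.pvalue, r

# Toy data shaped like the PSS scores (n = 31).
rng = np.random.default_rng(1)
pre = rng.normal(18.8, 3.9, 31)
post = pre - rng.normal(2.4, 3.0, 31)
print(pre_post_test(pre, post))
```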
The results were presented using narrative interpretations supported by structured summary tables. All findings were directly reported in a tabular format to ensure clarity and conciseness in communicating pre–post and intervention changes.

4.3. Experimental Results

Of the 31 distributed questionnaires, 31 were fully completed and valid for analysis. Among the participants, 9 (29.0%) were male, and 22 (71.0%) were female. The participants’ ages ranged from 18 to 49 years, with the majority (61.3%) falling within the 20–29 age group. The remaining participants were distributed among other age groups, as follows: one participant (3.2%) aged between 18 and 19 years, five participants (16.1%) between 30 and 39 years, and six participants (19.4%) between 40 and 49 years. The detailed demographic characteristics of the sample are summarized in Table 3.
To determine the appropriate statistical test for evaluating pre- and post-intervention changes, the normality of the difference scores was assessed using both the Kolmogorov–Smirnov and Shapiro–Wilk tests. Table 4 presents the results of the normality tests for all the outcome variables.
For most variables, both tests yielded nonsignificant results (p > 0.05), indicating that the assumption of normality was met. These variables include the BDI, PHQ-9, BAI, PSS, the FFMQ-15 aggregate score, most of its subscales (observing, acting awareness, nonjudging, and nonreactivity), and the AAQ-II. Accordingly, paired-sample t-tests were deemed appropriate for these variables.
However, violations of the normality assumption were observed for several variables. Specifically, the Shapiro–Wilk test revealed statistically significant deviations from normality in the pre–post change scores for the GAD-7 (p = 0.042), the describing subscale of the FFMQ-15 (p = 0.042), the SCS-SF (p < 0.001), and the RSES (p = 0.004). The Wilcoxon signed-rank test was used in these cases. When the two normality tests provided conflicting outcomes, the decision was based on the Shapiro–Wilk test, consistent with standard recommendations for small sample sizes (n < 50). These results guided the choice of subsequent inferential analyses, ensuring that the statistical methods employed conformed to the distributional characteristics of each variable and thus upheld the validity of the findings.

4.3.1. Measures of Psychological Distress

The mean scores and statistical comparisons for psychological distress, including measures of depression (BDI, PHQ-9), anxiety (BAI, GAD-7), and stress (PSS) before and after the intervention are presented in Table 5.
For depression, participants showed statistically significant reductions after the intervention. Specifically, Beck Depression Inventory (BDI) scores significantly decreased from pre-test (M = 13.10, SD = 7.83) to post-test (M = 7.94, SD = 5.78), t = 4.462, p < 0.001, indicating a substantial reduction with a large effect size (d = 0.801). Similarly, the Patient Health Questionnaire-9 (PHQ-9) demonstrated a significant reduction in depressive symptoms (pre: M = 7.48, SD = 5.21; post: M = 3.42, SD = 3.17), t = 6.804, p < 0.001, with a very large effect size (d = 1.222), supporting the strength of these preliminary findings. Regarding anxiety measures, both the Beck Anxiety Inventory (BAI) and Generalized Anxiety Disorder-7 (GAD-7) revealed significant decreases in anxiety levels post-intervention. The BAI scores decreased notably (pre: M = 9.52, SD = 7.27; post: M = 6.10, SD = 5.86), t = 4.217, p < 0.001, with a moderate-to-large effect size (d = 0.757). The GAD-7 also showed significant improvement in anxiety symptoms through the Wilcoxon signed-rank test (Z = −4.217, p < 0.001), with a medium-to-large effect size (r = 0.757), indicating that the intervention likely contributed to reducing anxiety symptoms among participants. Regarding perceived stress, the Perceived Stress Scale (PSS) scores demonstrated a statistically significant decrease (pre: M = 18.84, SD = 3.91; post: M = 16.39, SD = 3.99), t = 3.132, p = 0.004, suggesting stress reduction following the intervention. The associated moderate effect size (d = 0.563) further substantiates the clinical relevance of the observed changes.
In conclusion, these findings suggest that the metaverse-based meditation intervention was associated with reduced psychological distress across all measured domains (depression, anxiety, and stress), with statistically significant changes and moderate-to-large effect sizes. These results highlight the potential of immersive meditation systems as an effective complementary strategy for mental health management.

4.3.2. Reflecting Mindfulness

Tests of normality using the Shapiro–Wilk and Kolmogorov–Smirnov procedures indicated that all FFMQ-15 subscales met the assumption of normality (p > 0.05), except for the describing subscale. Accordingly, paired-sample t-tests were employed for the aggregate score and the observing, acting with awareness, nonjudging, and nonreactivity subscales. Given the non-normal distribution of the describing subscale, the non-parametric Wilcoxon signed-rank test was used.
Based on our a priori directional hypothesis that participants’ mindfulness levels would increase following the intervention, one-tailed tests were conducted for all subscales and aggregate scores. As shown in Table 6, the FFMQ-15 aggregate score increased significantly from pre-intervention (M = 3.09, SD = 0.59) to post-intervention (M = 3.35, SD = 0.57), t = −3.15, p = 0.002 (one-tailed), with a moderate effect size (d = 0.57). Consistent with the hypothesis, statistically significant gains were also observed in acting with awareness (t = −1.85, one-tailed p = 0.037, d = 0.33) and nonreactivity (t = −1.77, one-tailed p = 0.043, d = 0.32), both showing small-to-moderate effect sizes. The observing subscale did not reach conventional statistical significance but exhibited a borderline effect (t = −1.68, one-tailed p = 0.051), with a small-to-moderate effect size (d = 0.30), indicating a possible trend toward change. The describing subscale, analyzed using the Wilcoxon signed-rank test, showed a non-significant increase (Z = −1.88, two-tailed p = 0.060, r = 0.34), although 58% of the participants exhibited higher post-intervention scores, suggesting a potential upward tendency.
Overall, one-tailed tests were applied in line with the study’s directional hypothesis regarding mindfulness enhancement, while a two-tailed test was used for the describing subscale because its parametric assumptions were not met. In all analyses, statistical significance was determined at a threshold of p < 0.05.
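As a brief illustration of how such a directional comparison can be specified, SciPy’s paired t-test accepts an `alternative` argument; the data below are synthetic placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
pre = rng.normal(loc=3.0, scale=0.6, size=31)           # placeholder pre-test scores
post = pre + rng.normal(loc=0.25, scale=0.4, size=31)   # placeholder post-test scores

# Directional hypothesis (post > pre), so a one-tailed alternative is requested
t, p_one_tailed = stats.ttest_rel(post, pre, alternative="greater")
print(round(float(t), 3), round(float(p_one_tailed), 4))
```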
As shown in Table 7, an exploratory factor analysis was conducted to examine how participants conceptualized the FFMQ-15 subscales. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.598, just below the commonly cited 0.6 benchmark, and Bartlett’s test of sphericity was significant, χ2(105) = 184.80, p < 0.001, indicating that the data were marginally but acceptably suited to factor analysis. Five factors were extracted, accounting for 72.65% of the total variance. These factors were broadly consistent with the intended structure of the instrument.
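A rough Python equivalent of this analysis is sketched below using the factor_analyzer package. This is an assumption about tooling (the study’s analysis appears to have been run in SPSS), and the input file name is hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical 31 x 15 DataFrame of FFMQ-15 item responses, one column per item
df = pd.read_csv("ffmq15_items.csv")

chi2, p = calculate_bartlett_sphericity(df)  # Bartlett's test of sphericity
_, kmo_total = calculate_kmo(df)             # overall KMO sampling adequacy

# Principal-component extraction with varimax rotation and five factors,
# mirroring the rotated component matrix reported in Table 8
fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(df)

print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.2f} (p = {p:.4f})")
print(pd.DataFrame(fa.loadings_, index=df.columns).round(3))
```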
Based on the rotated component matrix, the item assignment for each factor followed the procedure described herein. First, the top three items with loadings of |0.40| or higher were provisionally allocated to each factor. When an item exhibited high cross-loading across multiple factors, the final assignment was determined by considering both the theoretical structure of the FFMQ-15 and its highest numerical loading value. During this process, the item Describing3 did not rank among the top three loadings for any factor. Given the theoretical requirement that a factor should be represented by at least three items, assigning it to the closest factor, Factor 5, was deemed inappropriate. Although there was theoretical consistency with Factor 4, which contained only two items, the loading of Describing3 on Factor 4 was –0.184, falling below the threshold of |0.40|. Therefore, Describing3 was ultimately excluded from all the factor assignments.
The final composition of the items for each factor is presented in Table 8. The rotated factor solution for the FFMQ-15 yielded five factors, though the structure partially diverged from the original five-facet model. Factor 1 was defined by high loadings of Nonreactivity3 (0.838), Nonreactivity1 (0.789), and Observing2 (0.650). Factor 2 comprised Describing1 (0.931), Describing2 (0.889), and Observing3 (0.583). Factor 3 grouped all Acting Awareness items (Acting Awareness2: 0.886; Acting Awareness3: 0.819; Acting Awareness1: 0.529). By contrast, Factor 4 comprised only two items, Nonjudging3 (0.658) and Observing1 (−0.842), the latter loading negatively. Factor 5 consisted of Nonjudging1 (0.814), Nonjudging2 (0.716), and Nonreactivity2 (−0.707).
Although the intervention was associated with a moderate increase in overall mindfulness (FFMQ-15 aggregate), the effects on individual subscales were more selective, with the strongest gains in acting with awareness and nonreactivity. The exploratory factor analysis did not fully reproduce the theoretical five-factor structure of the FFMQ-15. Given the small sample size and short intervention period, the structural stability of the factors may be limited. While some factors showed partial consistency with the theoretical model, these findings should be considered preliminary and exploratory. Future studies with larger samples and confirmatory factor analysis are needed to validate the factor structure of the FFMQ-15.

4.3.3. Psychological Flexibility

Table 9 presents the outcomes related to participants’ psychological flexibility as measured by the AAQ-II. Based on an a priori hypothesis that the intervention would increase psychological flexibility as reflected by a reduction in AAQ-II scores, a one-tailed paired-sample t-test was conducted.
The results revealed a statistically significant improvement from pre- to post-intervention (t = 2.024, p = 0.026, one-tailed). Specifically, AAQ-II scores significantly decreased from a pre-intervention mean of 24.81 (SD = 10.97) to a post-intervention mean of 21.90 (SD = 9.62), suggesting a potential increase in participants’ psychological flexibility.
However, the observed effect size (d = 0.36) was relatively modest, indicating a small-to-moderate practical impact. This modest effect might be attributable to the brief two-week duration of the intervention, suggesting that a longer intervention period may yield more consistent or sustained improvements in psychological flexibility.

4.3.4. Self-Compassion and Self-Esteem

Table 10 presents the results of the self-compassion and self-esteem assessments measured using the SCS-SF and RSES, respectively.
Regarding self-compassion, there was a statistically significant increase from pre-intervention (M = 2.78, SD = 0.76) to post-intervention (M = 3.18, SD = 0.68), as indicated by the Wilcoxon signed-rank test (Z = −3.380, p = 0.001). This improvement was associated with a large effect size (r = 0.607). This suggests that participants showed a notable increase in self-compassion following the mindfulness intervention, although the exploratory nature of the study requires cautious interpretation. Similarly, for self-esteem, participants showed significant improvement, with scores increasing from pre-intervention (M = 33.00, SD = 8.91) to post-intervention (M = 36.06, SD = 7.84) according to the Wilcoxon signed-rank test (Z = −2.975, p = 0.003). This improvement was associated with a large effect size (r = 0.534).
In conclusion, these findings suggest that the developed metaverse-based mindfulness intervention not only positively impacts general psychological flexibility and mindfulness facets but also significantly strengthens participants’ self-compassion and self-esteem, highlighting its potential as a promising exploratory digital approach to supporting psychological well-being.

4.3.5. Participants’ Subjective Ratings

Finally, participants’ subjective evaluations of intervention effectiveness were captured through the Metaverse Content Suitability Assessment (MCSA) developed for this study.
The clarity of instructional explanations provided for content usage received predominantly favorable evaluations, with 77.4% of participants expressing satisfaction or high satisfaction, highlighting the system’s capability to clearly convey operational instructions. Similarly, menu selection and comprehension were evaluated positively, with approximately 71.0% of the participants indicating satisfaction or high satisfaction. This suggests that the user interface is intuitive, thereby enhancing user navigation within meditation applications.
In terms of the spatial design for breathing-focused meditation, satisfaction levels were notably high; 80.6% of the participants indicated positive evaluations (either satisfied or very satisfied). Similarly, breathing meditation control methods were positively received by approximately 64.5% of the participants. These results support the efficacy of the dedicated breathing-focused meditation module, particularly in providing an engaging and easily operable environment to facilitate breathing practice. Participants’ evaluations of the experience-based mindfulness space design showed even higher satisfaction, with approximately 80.6% expressing positive responses. This finding suggests that the immersive and realistic design of virtual environments is highly effective in promoting user engagement, thereby potentially enhancing the quality of mindfulness practice.
Despite these generally positive assessments, participants identified areas for improvement, particularly in interface elements such as font size, emoticon size, and text layout for the reflective writing component: only approximately 32.3% of participants indicated satisfaction or high satisfaction, while approximately 42.0% expressed dissatisfaction or strong dissatisfaction. This highlights the necessity of further optimizing interface readability and user friendliness for reflective emotion awareness. Similarly, participants’ evaluations of narration elements (including font size, speed, and audio tone) revealed mixed satisfaction, with approximately 77.4% indicating satisfaction or neutrality. This underscores the importance of carefully refining these narrative elements to enhance the overall user experience and comfort.

5. Conclusions and Future Work

This study conducted an exploratory evaluation of the feasibility and preliminary impact of the developed immersive metaverse-based mindfulness meditation system using multiple standardized psychological measures and subjective user evaluations. Participants exhibited significant improvements across various psychological domains following the intervention.
Specifically, the participants showed significant reductions in depression scores (BDI and PHQ-9) with large effect sizes, indicating potential clinical significance, although the findings should be interpreted with caution. Similarly, anxiety symptoms measured using the BAI and GAD-7 scales demonstrated statistically significant decreases accompanied by moderate-to-large effect sizes. Perceived stress (PSS) also showed statistically significant reductions, offering preliminary support for the intervention’s potential effectiveness in alleviating psychological distress. Moreover, enhancements in psychological flexibility, as assessed via the AAQ-II, were statistically significant, suggesting that participants became more adaptive in coping with internal psychological experiences after engaging with the meditation system. Self-compassion (SCS-SF) and self-esteem (RSES) scores increased significantly, with relatively large effect sizes. Collectively, these results suggest that mindfulness meditation interventions foster greater emotional self-awareness, self-acceptance, and overall psychological well-being.
Mindfulness-related capacity, measured using the FFMQ-15, showed mixed outcomes. While overall mindfulness significantly improved, the individual facets demonstrated varying effectiveness. Positive trends emerged notably in the facets of observing, acting with awareness, and nonreactivity; however, improvements in the describing and nonjudging facets were statistically marginal. The shorter intervention duration and the complexity of specific mindfulness skills may account for these mixed results, warranting further exploration and refinement in subsequent research.
The subjective usability evaluations revealed the participants’ overall positive acceptance of the system. Nevertheless, user feedback emphasized clear areas for improvement in interface readability, including font size, text layout, and audio narration settings. Addressing these usability factors is recommended to optimize user comfort and engagement in future developments.
In conclusion, this exploratory study suggested that the developed metaverse-based mindfulness meditation system may contribute to improvements in several psychological outcomes, including depression, anxiety, stress, psychological flexibility, self-compassion, and self-esteem. These findings have important implications for digital mental health interventions. By presenting preliminary evidence of the feasibility and potential benefits of a metaverse-based system, this study highlights the promise of leveraging immersive virtual environments to deliver structured psychological support. This approach appears promising for cultivating both foundational mindfulness skills and their practical application in contexts that more closely resemble daily life, thereby potentially enhancing the impact and applicability of digital interventions. A positive user experience further indicates that users are receptive to engaging with complex virtual platforms for mental health purposes, suggesting the potential for sustained use.
Despite these promising findings, some limitations inherent to this study design must be acknowledged. The quasi-experimental, single-group, pre-test/post-test design, while informative, does not include a control group. Consequently, the observed changes cannot be definitively attributed solely to the intervention, and the potential influence of extraneous factors, such as spontaneous improvement or participant expectations, cannot be entirely ruled out. An intervention duration of two weeks, while demonstrating short-term effects, does not provide insights into the maintenance of benefits over longer periods. Future research should move beyond quasi-experimental approaches toward true experimental designs to strengthen causal inferences. Randomized controlled trials (RCTs) that compare metaverse-based interventions with active control groups, waitlist controls, or standard care will be essential for rigorous validation. In addition, longitudinal follow-up assessments conducted several months post-intervention will be necessary to examine the durability and sustainability of outcomes. By adopting more systematic and controlled methodologies, future studies can establish a stronger evidence base and clarify both the psychological efficacy and the practical applicability of immersive metaverse-based mindfulness interventions.
A key direction for future research will be the development of adaptive, closed-loop generation mechanisms. We aim to further refine and personalize the feedback by dynamically incorporating both the user’s emotional state and their free-text feelings. Beyond prompt engineering alone, we also plan to integrate advanced methods to further enhance the emotional intelligence, coherence, and therapeutic relevance of LLM-generated outputs in digital mental health applications. Implementing and empirically testing such adaptive feedback systems will be crucial for realizing the full potential of metaverse-based digital mental health interventions.

Author Contributions

Conceptualization, A.Y. and K.C.; methodology, A.Y. and G.L.; software, A.Y., G.L., Y.L., M.Z. and S.J.; validation, A.Y. and G.L.; formal analysis, A.Y. and G.L.; investigation, A.Y. and G.L.; data curation, A.Y. and G.L.; writing—original draft preparation, A.Y.; writing—review and editing, J.P., J.R. and K.C.; visualization, A.Y. and G.L.; supervision, J.R. and K.C.; project administration, K.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by an R&D project grant funded by the National Center for Mental Health (grant number: MHER23B01).

Institutional Review Board Statement

This study was reviewed and approved by the DGU IRB (Dongguk University Gyeongju Institutional Review Board), under approval number DUIRB2025-03-15, on 26 March 2025. All procedures were conducted in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to express our sincere gratitude to the Industrial Artificial Intelligence Researcher Center (Dongguk University) for their invaluable support in coordinating participant recruitment and conducting the experimental data collection for this study. Their technical and administrative assistance was essential to the successful completion of the empirical evaluation process.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLMs: Large Language Models
GPT: Generative Pre-trained Transformer
PSS: Perceived Stress Scale
PHQ-9: Patient Health Questionnaire-9
GAD-7: Generalized Anxiety Disorder 7-item
BDI: Beck Depression Inventory
BAI: Beck Anxiety Inventory
FFMQ-15: Five Facet Mindfulness Questionnaire-15
AAQ-II: Acceptance and Action Questionnaire-II
SCS-SF: Self-Compassion Scale-Short Form
RSES: Rosenberg Self-Esteem Scale
MCSA: Metaverse Content Suitability Assessment
AI: Artificial Intelligence
TMMS: The Melody of the Mysterious Stones
NPCs: Non-Player Characters
SPSS: Statistical Package for the Social Sciences (IBM SPSS Statistics)
GUI: Graphical User Interface

Appendix A

Appendix A.1. Validation of Emotion Congruence for LLM-Generated Loving-Kindness Guidance

To validate whether the LLM-generated loving-kindness meditation guidance accurately reflects the user’s selected emotional state, we employed a standardized emotion classification framework based on state-of-the-art BERT-based models. For each user session, the user selected one of five target emotions—excitement, sadness, anxiety, anger, or distress—prior to entering the mirror space. GPT-4 then generates a personalized guidance script from the selected emotion. To follow the standard progression of loving-kindness meditation, our system prompts the LLM to produce scripts that guide users from acknowledging and accepting their current emotions to ultimately cultivating kindness and compassion toward themselves and the world, thereby supporting movement from self-recognition to a broader sense of acceptance and goodwill.
The generated personalized guidance scripts were then analyzed using a pre-trained emotion classification model to automatically infer the dominant emotion conveyed by the text. Related emotion analysis frameworks have been developed in other multimedia domains; for example, Akbar et al. (2025) employed a BERT-based model for emotion content analysis in social media [90]. In natural language processing, the GoEmotions dataset has become a standard resource for training and benchmarking English-language emotion classification models [91], supporting the development of numerous state-of-the-art classifiers. For the present validation task, we adopted an open-source model from Hugging Face, roberta-base-go_emotions [92], a RoBERTa model fine-tuned on the GoEmotions dataset for multi-label emotion classification. Table A1 presents the results of this stage-wise validation.
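As a concrete illustration, the snippet below shows how such a top-1 prediction can be obtained with the Hugging Face transformers pipeline. The checkpoint identifier SamLowe/roberta-base-go_emotions is our assumption about the exact Hub model referenced in [92], and the sample script is invented.

```python
from transformers import pipeline

# Multi-label GoEmotions classifier (28 labels, including "caring" and "neutral")
classifier = pipeline(
    "text-classification",
    model="SamLowe/roberta-base-go_emotions",  # assumed Hub id for [92]
    top_k=None,  # return scores for every label, not just the best one
)

script = (
    "Gently notice the sadness resting in your chest, "
    "and let it be there without judgment as you breathe."
)
scores = classifier(script)[0]                # list of {"label", "score"} dicts
top1 = max(scores, key=lambda s: s["score"])  # top-1 label, as reported in Table A1
print(f'{top1["label"]} ({top1["score"]:.2f})')
```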
Table A1. Emotional recognition analysis of LLM-generated guidance scripts.

| No. | Anger | No. | Anxiety | No. | Distress | No. | Excitement | No. | Sadness |
|-----|-------|-----|---------|-----|----------|-----|------------|-----|---------|
| 1 | Neutral (0.72) | 19 | Approval (0.35) | 37 | Caring (0.70) | 55 | Neutral (0.74) | 73 | Neutral (0.29) |
| 2 | Caring (0.48) | 20 | Nervousness (0.35) | 38 | Neutral (0.33) | 56 | Curiosity (0.57) | 74 | Sadness (0.77) |
| 3 | Approval (0.61) | 21 | Caring (0.54) | 39 | Caring (0.61) | 57 | Curiosity (0.52) | 75 | Sadness (0.48) |
| 4 | Curiosity (0.68) | 22 | Caring (0.69) | 40 | Neutral (0.37) | 58 | Curiosity (0.66) | 76 | Sadness (0.65) |
| 5 | Neutral (0.79) | 23 | Caring (0.56) | 41 | Curiosity (0.60) | 59 | Neutral (0.61) | 77 | Sadness (0.89) |
| 6 | Neutral (0.91) | 24 | Neutral (0.63) | 42 | Neutral (0.44) | 60 | Confusion (0.55) | 78 | Sadness (0.86) |
| 7 | Curiosity (0.64) | 25 | Curiosity (0.57) | 43 | Curiosity (0.66) | 61 | Curiosity (0.57) | 79 | Sadness (0.82) |
| 8 | Curiosity (0.69) | 26 | Curiosity (0.63) | 44 | Curiosity (0.61) | 62 | Curiosity (0.73) | 80 | Curiosity (0.55) |
| 9 | Curiosity (0.70) | 27 | Curiosity (0.64) | 45 | Neutral (0.51) | 63 | Joy (0.39) | 81 | Curiosity (0.46) |
| 10 | Neutral (0.74) | 28 | Caring (0.61) | 46 | Love (0.43) | 64 | Caring (0.74) | 82 | Caring (0.71) |
| 11 | Caring (0.83) | 29 | Caring (0.70) | 47 | Caring (0.87) | 65 | Excitement (0.53) | 83 | Sadness (0.81) |
| 12 | Caring (0.65) | 30 | Neutral (0.52) | 48 | Neutral (0.79) | 66 | Neutral (0.65) | 84 | Caring (0.84) |
| 13 | Neutral (0.41) | 31 | Caring (0.51) | 49 | Neutral (0.58) | 67 | Caring (0.35) | 85 | Sadness (0.91) |
| 14 | Remorse (0.38) | 32 | Caring (0.32) | 50 | Caring (0.65) | 68 | Caring (0.76) | 86 | Sadness (0.89) |
| 15 | Caring (0.24) | 33 | Caring (0.32) | 51 | Neutral (0.75) | 69 | Joy (0.27) | 87 | Sadness (0.83) |
| 16 | Caring (0.81) | 34 | Caring (0.64) | 52 | Caring (0.64) | 70 | Excitement (0.39) | 88 | Joy (0.56) |
| 17 | Desire (0.72) | 35 | Neutral (0.65) | 53 | Caring (0.75) | 71 | Caring (0.57) | 89 | Caring (0.64) |
| 18 | Caring (0.81) | 36 | Caring (0.64) | 54 | Caring (0.71) | 72 | Caring (0.65) | 90 | Sadness (0.48) |
The column headers represent the user-selected input emotions. The No. under each column denotes the index of the generated sample corresponding to that emotion. Each cell presents the top-1 predicted emotion label derived from the emotion classifier for the given sample, accompanied by the associated confidence score (e.g., anxiety (0.94)).
Additionally, an analysis of the results in Table A1 by frequency revealed that the most common emotions were caring (31 instances), neutral (19 instances), and curiosity (17 instances). These emotions closely align with the core affective goals of loving-kindness meditation—acceptance, calmness, and reflectiveness. In particular, the high frequency of caring demonstrates that the system empathetically acknowledges users’ emotions, while neutral suggests that guidance is maintained calmly without unnecessary emotional distortion. The prominence of curiosity reflects the system’s function in encouraging users to explore their experiences in a nonjudgmental manner. In contrast, negative emotions such as remorse, nervousness, and sadness appeared at relatively low frequencies, indicating that the generated guidance did not amplify users’ difficult emotions but instead tended to redirect them within a therapeutically safe context.
However, while these results may be interpreted as consistent with the core emotional qualities of loving-kindness meditation, it remains uncertain whether the LLM is faithfully reflecting users’ reported emotional states. Frequency counts alone cannot clearly distinguish whether the outputs reflect the context of user emotions or merely reproduce a generalized emotional tone. More sophisticated analyses are therefore required.
To address this, two additional analyses were conducted. First, BERTopic [93] was employed to examine whether the outputs generated by the LLM remained consistent regardless of emotional states. Second, WordCloud analysis was used to visually verify whether the generated texts accounted for users’ reported emotional states. These methods aimed to determine whether the LLM’s outputs not only encompassed emotions in a broad tonal sense but also provided tailored guidance through concrete linguistic reflection.
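A minimal sketch of the topic-modeling step, assuming the 90 generated scripts are available as a list of strings (the loader below is hypothetical):

```python
from bertopic import BERTopic

docs = load_guidance_scripts()  # hypothetical loader returning the 90 scripts

# Small corpus, so a modest minimum topic size is used; this mirrors the kind of
# two-topic solution reported in Table A2 but is not the authors' exact configuration.
topic_model = BERTopic(min_topic_size=10)
topics, _ = topic_model.fit_transform(docs)

print(topic_model.get_topic_info())  # per-topic counts and keyword-based names
print(topic_model.get_topic(0))      # top keywords for the largest topic
```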
Table A2. Topic analysis of LLM-generated guidance scripts.

Topic 1 (Count: 78; Name: “Feeling heart ask feel”). Representation: feeling, heart, ask, feel, mind, gently, like, just, emotions, pass. Representative docs:
‘Ask yourself: “Who is the one observing this star of anxiety?” Close your eyes and gently watch yourself as you feel it.’
‘Remember, this feeling will pass. Just as there are moments when things go well, there will also be times when they do not. Do not let your emotions rise and fall with circumstances. Instead, watch calmly as they pass.’
‘Gently imagine this excitement dispersing like a soft breeze. Instead of being swept away by fleeting emotions, find the deeper, steadier center within your heart.’
Topic 2 (Count: 12; Name: “Emotion remember suffering rising”). Representation: emotion, remember, suffering, rising, hold, fade, time, ask, need, remain, acknowledge. Representative docs:
‘Observe the anger in your heart as if you were looking at a stranger. Step back from the emotion and watch it pass, like clouds drifting across the sky.’
‘Understand that the person or situation that made you angry is also suffering and capable of making mistakes. Remember that all emotions eventually fade with time, and anger may later leave only regret.’
‘Quietly ask yourself where this anger truly comes from. Is it really the emotion I want to hold on to? Does it deserve to remain within me?’
Based on BERTopic analysis, the 90 pieces of LLM-generated data were classified into only two major topics:
  • Topic 1 (78 items): Characterized by key expressions such as “feeling, heart, ask, feel, mind, gently, like, just, emotions, pass”. The language primarily guided users toward directly experiencing and accepting their emotions through introspective reflection. Representative sentences emphasized impermanence and self-observation, such as “This emotion will eventually pass” and “Quietly close your eyes and watch deeply within your mind”.
  • Topic 2 (12 items): Centered around terms like “emotion, remember, suffering, rising, hold, fade, time, ask, need, remain, acknowledge”. The content often dealt with emotions like anger and suffering, encouraging users to understand the mistakes and struggles of others while reminding them that emotions ultimately fade with time. Representative messages included “All emotions eventually disappear with time” and “Ask yourself where the root of your anger lies”.
These results demonstrate that the 90 data samples converged into two consistent themes, suggesting that the system generates content with stability in both tone and structure. In other words, the LLM outputs are built around shared themes of emotional acceptance and impermanence, which align with the core principles of loving-kindness meditation.
At the same time, this analysis alone is insufficient to evaluate how directly the generated texts reflect the specific emotions chosen by users. To address this, a WordCloud analysis was performed on the 18 datasets generated for each emotion. This allowed for a visual verification of which words referenced and emphasized the users’ reported emotional states within the outputs. Such a complementary analysis goes beyond checking thematic consistency, providing an important measure of whether the LLM outputs linguistically reflect users’ emotions in a meaningful way.
This study used amueller’s word_cloud library for Python (run under Python 3.9) for word cloud visualization. While no official academic paper exists for this specific library, research on word cloud algorithms that preserve semantic structure can be found in Barth et al. (2013) and Schubert et al. (2017) [94,95].
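A small sketch of this visualization step, assuming the generated scripts have been grouped by emotion (the single-entry mapping below is a placeholder for the full 18-script groups):

```python
from wordcloud import WordCloud, STOPWORDS

# Placeholder: emotion label -> the 18 generated scripts for that emotion, joined
texts_by_emotion = {
    "sadness": "Gently notice the sadness... let the sadness pass like clouds...",
}

for emotion, text in texts_by_emotion.items():
    wc = WordCloud(width=800, height=400, background_color="white",
                   stopwords=STOPWORDS).generate(text)
    wc.to_file(f"wordcloud_{emotion}.png")  # one image per emotion, as in Table A3
```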
Table A3. WordCloud analysis of LLM-generated guidance scripts.

| LLM Output | WordCloud Analysis Results |
|---|---|
| Anger | Systems 13 00798 i001 (word cloud image) |
| Anxiety | Systems 13 00798 i002 (word cloud image) |
| Distress | Systems 13 00798 i003 (word cloud image) |
| Excitement | Systems 13 00798 i004 (word cloud image) |
| Sadness | Systems 13 00798 i005 (word cloud image) |
The results of the WordCloud analysis showed that in every visualization by emotion, the user-selected emotions (anger, anxiety, distress, excitement, sadness) consistently appeared as the most prominent keywords. This indicates that the LLM did not merely generate generic or neutral scripts; rather, it repeatedly referenced the specific emotional states reported by users, reflecting them as central themes within the generated text.

Appendix A.2. Adaptive Prompt Design for Emotion- and Reflection-Driven LLM-Based Loving-Kindness Meditation Guidance

To promote a more adaptive and emotionally attuned meditation experience, we designed a structured prompt for LLM-based loving-kindness meditation guidance that incorporates both the user’s selected emotion and their written reflections. The LLM is instructed to act as a compassionate meditation therapist and to generate guidance scripts that follow the established sequence of loving-kindness meditation: leading the user from recognition and acceptance of their current emotions, through gentle exploration and self-compassion, and ultimately toward extending kindness and goodwill to themselves and the broader world. The specific content is tailored to each emotion (e.g., anger, anxiety, sadness, distress, excitement) based on the standard loving-kindness meditation protocol. By integrating both the selected emotion and the written feelings with an evidence-based meditation structure, these prompts are intended to enable the LLM to generate guidance narratives that are personalized, contextually relevant, and deeply supportive of the user’s emotional state; a minimal invocation sketch is provided after Table A4.
Table A4. Adaptive LLM prompt structure for loving-kindness meditation.
Prompt
You are a compassionate therapist specializing in loving-kindness meditation and mindful emotional guidance.
Your client’s current emotional state is: {user_emotion}.
The client shared the following feelings: “{user_feelings}”.
Based on this information, please guide the client through a short, step-by-step loving-kindness meditation session tailored to their emotion and feelings.
Follow these stages for the {user_emotion} emotion:
  • Acknowledge and validate the emotion.
  • Encourage mindful observation: Help the client recognize who is experiencing the feeling, and its qualities.
  • Facilitate gentle inquiry: Explore possible causes or contributing factors, inviting reflection on whether the client’s perceptions or context may influence the emotion.
  • Suggest gentle relief: Recommend light, practical steps for comfort, grounding, or temporary distraction.
  • Reinforce impermanence: Remind the client that all emotions are transient and may shift with time or conditions.
  • Connect to loving-kindness: Invite the client to respond with kindness, acceptance, or a reframing that supports emotional resilience or compassion for self and others.
Integrate the user’s written feelings where relevant to personalize your response.
Write 1–2 concise sentences for each stage, using a warm, gentle, and supportive tone.
Do not copy these instructions into your answer—just output the meditation script.
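To illustrate how the Table A4 template might be wired to the model, the sketch below fills the placeholders and calls the OpenAI chat API. The truncated template, the temperature value, and the helper name are illustrative assumptions, not the system’s actual implementation.

```python
from openai import OpenAI

PROMPT_TEMPLATE = """You are a compassionate therapist specializing in loving-kindness meditation and mindful emotional guidance.
Your client's current emotional state is: {user_emotion}.
The client shared the following feelings: "{user_feelings}".
Based on this information, please guide the client through a short, step-by-step loving-kindness meditation session tailored to their emotion and feelings.
(...remaining stage instructions from Table A4...)"""

def generate_guidance(user_emotion: str, user_feelings: str) -> str:
    """Format the Table A4 prompt and request a guidance script from GPT-4."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = PROMPT_TEMPLATE.format(user_emotion=user_emotion,
                                    user_feelings=user_feelings)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # assumed setting; not reported in the paper
    )
    return response.choices[0].message.content

print(generate_guidance("Anxiety",
                        "I felt nervous and kept worrying about making a mistake."))
```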
Table A5 presents a concrete example of the resulting guidance script.
Table A5. Examples of guidance script outputs generated using the new prompt.

Input: User_Emotion = Anxiety; User_Feelings = “I felt nervous and kept worrying about making a mistake.”
Output:
  • It’s completely understandable that you felt nervous and kept worrying about making a mistake—anxiety often arises when we care deeply about doing well. You are not alone in feeling this way, and your concern reflects your thoughtfulness.
  • Take a quiet breath and gently notice: “This is anxiety, and I am the one noticing it”. Where do you feel it in your body—perhaps a tight chest, fluttering stomach, or racing thoughts?
  • Ask yourself with kindness: “Was I expecting perfection? Was I afraid of being judged?” Reflect if something in your environment or your own inner critic might have heightened this sense of pressure.
  • Place your feet flat on the ground and let your hands rest on your thighs. Inhale deeply through the nose, then exhale slowly through the mouth. If you can, take a moment to look out the window or listen to calming music.
  • Remember that this wave of anxiety will pass, just like others have. Emotions are like clouds in the sky—they come, they stay for a while, and then they drift away.
  • Place your hand gently over your heart and say softly: “May I be calm. May I be kind to myself even when I feel anxious. May I trust that mistakes do not define me”. Let warmth and compassion begin to replace tension, even if only a little.
Input: User_Emotion = Anger; User_Feelings = “My parents always scold me. I hate my family.”
Output:
  • It’s deeply painful to feel constantly scolded by the people who are supposed to care for you. Your anger and feelings reflect a longing to be seen, heard, and treated with respect—you are not wrong for feeling this way.
  • Take a breath and gently bring your attention to your body. Notice where this anger lives—maybe in your chest, shoulders, or jaw—and say to yourself, “Anger is here, and I am noticing it”.
  • Ask yourself softly: “What part of me feels hurt when they scold me?” Perhaps it’s a desire to be accepted as I am, or a feeling of injustice. Notice if the anger also holds sadness or loneliness beneath it.
  • Step away from the source of distress if possible—go to a quiet place, stretch your body, or splash cool water on your face. Try grounding yourself by placing one hand on your chest and the other on your belly, breathing slowly and evenly.
  • This pain and anger, as overwhelming as it feels now, is not permanent. Like a storm, it may pass, and with time, space can grow around it.
  • Place both hands over your heart and say gently: “May I be safe. May I feel peace within, even when I am hurt. May I learn to respond with strength and compassion, even when others cannot give me what I need”. Let this be the first small step toward healing—beginning with care for yourself.
In future development, objective evaluation of the emotional congruence and appropriateness of LLM-generated guidance could draw on recent emotion recognition frameworks such as nBERT [96], which demonstrated precise tracking and analysis of affective states in psychotherapy transcripts. Integrating such emotion-aware analytic tools would allow for rigorous validation and ongoing optimization of personalized, adaptive feedback in digital mental health interventions.

Appendix B

Ten Standardized Self-Report Questionnaires

This study employed ten standardized self-report questionnaires to assess a wide range of psychological constructs relevant to mental health and mindfulness. Table A6, Table A7, Table A8, Table A9, Table A10, Table A11, Table A12, Table A13, Table A14 and Table A15 present the full set of survey instruments used, including item-level questions and response formats for each questionnaire (AAQ-II, BAI, BDI, FFMQ-15, GAD-7, MCSA, PHQ-9, PSS, RSES, and SCS-SF).
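Several of the instruments below include reverse-scored items, marked (R) in the tables. A small scoring sketch with invented example responses may be useful; the PSS example uses its four reverse-scored items on a 0–4 scale, matching the (R) markings in Table A13.

```python
def score_scale(responses: dict, reverse_items: set,
                scale_min: int = 1, scale_max: int = 5) -> int:
    """Sum item responses, flipping reverse-scored items (marked (R) in the tables)."""
    total = 0
    for item, value in responses.items():
        if item in reverse_items:
            value = scale_max + scale_min - value  # e.g., 4 -> 0 on a 0-4 scale
        total += value
    return total

# Invented PSS responses (items 1-10); items 4, 5, 7, and 8 are reverse-scored
pss = {1: 3, 2: 2, 3: 4, 4: 1, 5: 2, 6: 3, 7: 0, 8: 1, 9: 4, 10: 3}
print(score_scale(pss, reverse_items={4, 5, 7, 8}, scale_min=0, scale_max=4))
```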
Table A6. AAQ-II (Acceptance and Action Questionnaire-II) questionnaire items.

Items are rated on a 7-point scale: 1 = Never True, 2 = Very Seldom True, 3 = Seldom True, 4 = Sometimes True, 5 = Frequently True, 6 = Almost Always True, 7 = Always True.

1. My painful experiences and memories make it difficult for me to live a life that I would value.
2. I’m afraid of my feelings.
3. I worry about not being able to control my worries and feelings.
4. My painful memories prevent me from having a fulfilling life.
5. Emotions cause problems in my life.
6. It seems like most people are handling their lives better than I am.
7. Worries get in the way of my success.
Table A7. BAI (Beck Anxiety Inventory) questionnaire items.

Each symptom is rated on a 4-point scale: 0 = Not at All; 1 = Mildly, but it didn’t bother me much; 2 = Moderately, it wasn’t pleasant at times; 3 = Severely, it bothered me a lot.

1. Numbness or tingling
2. Feeling hot
3. Wobbliness in legs
4. Unable to relax
5. Fear of the worst happening
6. Dizzy or lightheaded
7. Heart pounding/racing
8. Unsteady
9. Terrified or afraid
10. Nervous
11. Feeling of choking
12. Hands trembling
13. Shaky/unsteady
14. Fear of losing control
15. Difficulty in breathing
16. Fear of dying
17. Scared
18. Indigestion
19. Faint/lightheaded
20. Face flushed
21. Hot/cold sweats
Table A8. BDI (Beck Depression Inventory) questionnaire items.

For each of the 21 item groups below, respondents select the single statement that best describes how they have been feeling; the number preceding each statement is its score.
0 I do not feel sad.
1 I feel sad.
2 I am sad all the time and I can’t snap out of it.
3 I am so sad and unhappy that I can’t stand it.
0 I am not particularly discouraged about the future.
1 I feel discouraged about the future.
2 I feel I have nothing to look forward to.
3 I feel the future is hopeless and that things cannot improve.
0 I do not feel like a failure.
1 I feel I have failed more than the average person.
2 As I look back on my life, all I can see is a lot of failures.
3 I feel I am a complete failure as a person.
0 I get as much satisfaction out of things as I used to.
1 I don’t enjoy things the way I used to.
2 I don’t get real satisfaction out of anything anymore.
3 I am dissatisfied or bored with everything.
0 I don’t feel particularly guilty.
1 I feel guilty a good part of the time.
2 I feel quite guilty most of the time.
3 I feel guilty all of the time.
0 I don’t feel I am being punished.
1 I feel I may be punished.
2 I expect to be punished.
3 I feel I am being punished.
0 I don’t feel disappointed in myself.
1 I am disappointed in myself.
2 I am disgusted with myself.
3 I hate myself.
0 I don’t feel I am any worse than anybody else.
1 I am critical of myself for my weaknesses or mistakes.
2 I blame myself all the time for my faults.
3 I blame myself for everything bad that happens.
0 I don’t have any thoughts of killing myself.
1 I have thoughts of killing myself, but I would not carry them out.
2 I would like to kill myself.
3 I would kill myself if I had the chance.
0 I don’t cry any more than usual.
1 I cry more now than I used to.
2 I cry all the time now.
3 I used to be able to cry, but now I can’t cry even though I want to.
0 I am no more irritated by things than I ever was.
1 I am slightly more irritated now than usual.
2 I am quite annoyed or irritated a good deal of the time.
3 I feel irritated all the time.
0 I have not lost interest in other people.
1 I am less interested in other people than I used to be.
2 I have lost most of my interest in other people.
3 I have lost all of my interest in other people.
0 I make decisions about as well as I ever could.
1 I put off making decisions more than I used to.
2 I have greater difficulty in making decisions more than I used to.
3 I can’t make decisions at all anymore.
0 I don’t feel that I look any worse than I used to.
1 I am worried that I am looking old or unattractive.
2 I feel there are permanent changes in my appearance that make me look unattractive.
3 I believe that I look ugly.
0 I can work about as well as before.
1 It takes an extra effort to get started doing something.
2 I have to push myself very hard to do anything.
3 I can’t do any work at all.
0 I can sleep as well as usual.
1 I don’t sleep as well as I used to.
2 I wake up 1–2 h earlier than usual and find it hard to get back to sleep.
3 I wake up several hours earlier than I used to and cannot get back to sleep.
0 I don’t get more tired than usual.
1 I get tired more easily than I used to.
2 I get tired from doing almost anything.
3 I am too tired to do anything.
0 My appetite is no worse than usual.
1 My appetite is not as good as it used to be.
2 My appetite is much worse now.
3 I have no appetite at all anymore.
0 I haven’t lost much weight, if any, lately.
1 I have lost more than five pounds.
2 I have lost more than ten pounds.
3 I have lost more than fifteen pounds.
0 I am no more worried about my health than usual.
1 I am worried about physical problems like aches, pains, upset stomach, or constipation.
2 I am very worried about physical problems and it’s hard to think of much else.
3 I am so worried about my physical problems that I cannot think of anything else.
0 I have not noticed any recent change in my interest in sex.
1 I am less interested in sex than I used to be.
2 I have almost no interest in sex.
3 I have lost interest in sex completely.
Table A9. FFMQ-15 (Five Facet Mindfulness Questionnaire-15) questionnaire items.

Items are rated from 1 (Never or Very Rarely True) to 5 (Very Often or Always True); items marked (R) are reverse-scored (5–1).

1. When I take a shower or a bath, I stay alert to the sensations of water on my body.
2. I’m good at finding words to describe my feelings.
3. I don’t pay attention to what I’m doing because I’m daydreaming, worrying, or otherwise distracted. (R)
4. I believe some of my thoughts are abnormal or bad, and I shouldn’t think that way. (R)
5. When I have distressing thoughts or images, I “step back” and am aware of the thought or image without getting taken over by it.
6. I notice how foods and drinks affect my thoughts, bodily sensations, and emotions.
7. I have trouble thinking of the right words to express how I feel about things. (R)
8. I do jobs or tasks automatically without being aware of what I’m doing. (R)
9. I think some of my emotions are bad or inappropriate, and I shouldn’t feel them. (R)
10. When I have distressing thoughts or images, I am able just to notice them without reacting.
11. I pay attention to sensations, such as the wind in my hair or sun on my face.
12. Even when I’m feeling terribly upset, I can find a way to put it into words.
13. I find myself doing things without paying attention. (R)
14. I tell myself I shouldn’t be feeling the way I’m feeling. (R)
15. When I have distressing thoughts or images, I just notice them and let them go.
Table A10. GAD-7 (Generalized Anxiety Disorder-7) questionnaire items.

Items are rated on a 4-point scale: 0 = Not at All, 1 = Several Days, 2 = More Than Half the Days, 3 = Nearly Every Day.

1. Feeling nervous, anxious, or on edge.
2. Not being able to stop or control worrying.
3. Worrying too much about different things.
4. Trouble relaxing.
5. Being so restless that it is hard to sit still.
6. Becoming easily annoyed or irritable.
7. Feeling afraid, as if something awful might happen.
Table A11. MCSA (Metaverse Content Suitability Assessment) questionnaire items.
Metaverse Content Suitability Assessment
To respond to the growing demand for mental health services, the Industrial Artificial Intelligence Researcher Center at Dongguk University is developing metaverse-based content specifically designed for the mental well-being of the MZ generation. This survey is being conducted to assess participants’ satisfaction with and the effectiveness of the metaverse content developed for mental health management among the MZ generation.
  • Participant Eligibility: Individuals born between 1981 and 2010 (MZ generation).
Completion of the first survey and consent form to confirm eligibility.
  • Study Procedure
(1) Completion of psychological checklists (Phase 1 and Phase 2).
(2) Participation in metaverse content via mobile or PC over a 2-week period.
(3) Installation of a lifelogging application and completion of follow-up surveys during the 2 weeks.
  • Participation Period: Total duration: 2 weeks
  • Participation Compensation: A coffee gift voucher worth KRW 50,000 will be provided upon completion of the study.
  • Voluntary Participation
Participation in this study is entirely voluntary. While there is no physical risk involved, some individuals may experience mild discomfort or stress during the survey process. You may withdraw from the study at any time if you feel uncomfortable. Declining to participate will not result in any disadvantages other than not receiving the participation reward mentioned above.
  • Privacy and Data Protection
All responses will be used solely for research purposes. In accordance with Article 15 of the Enforcement Rules of the Bioethics and Safety Act, research-related data (IRB review results, written consent forms, personal information usage logs, and final reports) will be securely stored for three years after the study’s conclusion. After this retention period, all data will be permanently deleted. We sincerely ask for your honest and continued participation over the next two weeks to help us gather meaningful insights. If you have any questions about the study or encounter any issues during participation, please feel free to contact the research team.
Gender: Male / Female
Age Group: 10s / 20s / 30s / 40s / Other

Each item below is rated on a 5-point scale: Very Dissatisfied, Dissatisfied, Neutral, Satisfied, Very Satisfied.
Was the explanation of how to use the Breathing-focused Meditation Module content provided clearly and appropriately?
Was it easy to understand and navigate the menu options in the Breathing-focused Meditation Module content?
Are you satisfied with the design of the Breathing-focused Meditation Module space upon entry?
Are you satisfied with the breathing guide and controls in the Breathing-focused Meditation Module?
After completing the Experience-based Mindfulness Meditation Module, are you satisfied with the space design?
Are you satisfied with the font size, emoji size, and overall text/screen layout while writing in the reflective emotion awareness GUI?
Are you satisfied with the font size, playback speed, and voice tone of the guided narration provided throughout the Metaverse-Based Meditation System content?
Table A12. PHQ-9 (Patient Health Questionnaire-9) questionnaire items.

Items are rated on a 4-point scale: 0 = Not at All, 1 = Several Days, 2 = More Than Half the Days, 3 = Nearly Every Day.

1. Little interest or pleasure in doing things.
2. Feeling down, depressed, or hopeless.
3. Trouble falling or staying asleep or sleeping too much.
4. Feeling tired or having little energy.
5. Poor appetite or overeating.
6. Feeling bad about yourself—or that you are a failure or have let yourself or your family down.
7. Trouble concentrating on things, such as reading the newspaper or watching television.
8. Moving or speaking so slowly that other people could have noticed, or the opposite—being so fidgety or restless that you have been moving around a lot more than usual.
9. Thoughts that you would be better off dead or of hurting yourself in some way.
Table A13. PSS (Perceived Stress Scale) questionnaire items.

Items are rated from 0 (Never) to 4 (Very Often); items marked (R) are reverse-scored (4–0).

1. In the last month, how often have you been upset because of something that happened unexpectedly?
2. In the last month, how often have you felt that you were unable to control the important things in your life?
3. In the last month, how often have you felt nervous and stressed?
4. In the last month, how often have you felt confident about your ability to handle your personal problems? (R)
5. In the last month, how often have you felt that things were going your way? (R)
6. In the last month, how often have you found that you could not cope with all the things that you had to do?
7. In the last month, how often have you been able to control irritations in your life? (R)
8. In the last month, how often have you felt that you were on top of things? (R)
9. In the last month, how often have you been angered because of things that happened that were outside of your control?
10. In the last month, how often have you felt difficulties were piling up so high that you could not overcome them?
Table A14. RSES (Rosenberg Self-Esteem Scale) questionnaire items.

Response anchors: Strongly Disagree, Disagree, Sometimes, Agree, Strongly Agree. Unless noted, items are scored 1–5 across these anchors; items marked “scored 5–1” are keyed in the reverse direction.

1. On the whole, I am satisfied with myself.
2. At times, I think I am no good at all.
3. I feel that I have a number of good qualities. (scored 5–1)
4. I am able to do things as well as most other people.
5. I feel I do not have much to be proud of. (scored 5–1)
6. I certainly feel useless at times.
7. I feel that I’m a person of worth, at least on an equal plane with others.
8. I wish I could have more respect for myself. (scored 5–1)
9. All in all, I am inclined to feel that I am a failure. (scored 5–1)
10. I take a positive attitude toward myself. (scored 5–1)
Table A15. SCS-SF (Self-Compassion Scale-Short Form) questionnaire items.

Items are rated from 1 (Never) to 5 (Always); items marked (R) are reverse-scored (5–1).

1. When I fail at something important to me, I become consumed by feelings of inadequacy. (R)
2. I try to be understanding and patient towards those aspects of my personality I don’t like.
3. When something painful happens, I try to take a balanced view of the situation.
4. When I’m feeling down, I tend to feel like most other people are probably happier than I am. (R)
5. I try to see my failings as part of the human condition.
6. When I’m going through a very hard time, I give myself the caring and tenderness I need.
7. When something upsets me, I try to keep my emotions in balance.
8. When I fail at something that’s important to me, I tend to feel alone in my failure. (R)
9. When I’m feeling down, I tend to obsess and fixate on everything that’s wrong. (R)
10. When I feel inadequate in some way, I try to remind myself that feelings of inadequacy are shared by most people.
11. I’m disapproving and judgmental about my own flaws and inadequacies. (R)
12. I’m intolerant and impatient towards those aspects of my personality I don’t like. (R)

References

  1. McGrath, J.J.; Al-Hamzawi, A.; Alonso, J.; Altwaijri, Y.; Andrade, L.H.; Bromet, E.J.; Bruffaerts, R.; Caldas de Almeida, J.M.; Chardoul, S.; Chiu, W.T.; et al. Age of onset and cumulative risk of mental disorders: A cross-national analysis of population surveys from 29 countries. Lancet Psychiatry 2023, 10, 668–681. [Google Scholar] [CrossRef] [PubMed]
  2. Jayasree, A.; Shanmuganathan, P.; Ramamurthy, P.; Alwar, M.C. Types of medication non-adherence & approaches to enhance medication adherence in mental health disorders: A narrative review. Indian J. Psychol. Med. 2024, 46, 503–510. [Google Scholar]
  3. Bailey, R.K.; Clemens, K.M.; Portela, B.; Bowrey, H.; Pfeiffer, S.N.; Geonnoti, G.; Riley, A.; Sminchak, J.; Lakey Kevo, S.; Naranjo, R.R. Motivators and barriers to help-seeking and treatment adherence in major depressive disorder: A patient perspective. Psychiatry Res. Commun. 2024, 4, 100200. [Google Scholar] [CrossRef]
  4. Deng, M.; Zhai, S.; Ouyang, X.; Liu, Z.; Ross, B. Factors influencing medication adherence among patients with severe mental disorders from the perspective of mental health professionals. BMC Psychiatry 2022, 22, 22. [Google Scholar] [CrossRef] [PubMed]
  5. Creswell, J.D. Mindfulness interventions. Annu. Rev. Psychol. 2017, 68, 491–516. [Google Scholar] [CrossRef]
  6. Hoge, E.A.; Bui, E.; Marques, L.; Metcalf, C.A.; Morris, L.K.; Robinaugh, D.J.; Worthington, J.J.; Pollack, M.H.; Simon, N.M. Randomized controlled trial of mindfulness meditation for generalized anxiety disorder: Effects on anxiety and stress reactivity. J. Clin. Psychiatry 2013, 74, 16662. [Google Scholar] [CrossRef]
  7. Sharma, M.; Rush, S.E. Mindfulness-based stress reduction as a stress management intervention for healthy individuals: A systematic review. J. Evid.-Based Complement. Altern. Med. 2014, 19, 271–286. [Google Scholar] [CrossRef]
  8. Komariah, M.; Ibrahim, K.; Pahria, T.; Rahayuwati, L.; Somantri, I. Effect of mindfulness breathing meditation on depression, anxiety, and stress: A randomized controlled trial among university students. Healthcare 2022, 11, 26. [Google Scholar] [CrossRef]
  9. Bringmann, H.C.; Michalsen, A.; Jeitler, M.; Kessler, C.S.; Brinkhaus, B.; Brunnhuber, S.; Sedlmeier, P. Meditation—Based lifestyle modification in mild to moderate depression—A randomized controlled trial. Depress. Anxiety 2022, 39, 363–375. [Google Scholar] [CrossRef]
  10. Goldin, P.R.; Gross, J.J. Effects of mindfulness-based stress reduction (MBSR) on emotion regulation in social anxiety disorder. Emotion 2010, 10, 83. [Google Scholar] [CrossRef]
  11. Zhang, Q.; Wang, Z.; Wang, X.; Liu, L.; Zhang, J.; Zhou, R. The effects of different stages of mindfulness meditation training on emotion regulation. Front. Human. Neurosci. 2019, 13, 208. [Google Scholar] [CrossRef] [PubMed]
  12. Creswell, J.D.; Myers, H.F.; Cole, S.W.; Irwin, M.R. Mindfulness meditation training effects on CD4+ T lymphocytes in HIV-1 infected adults: A small randomized controlled trial. Brain Behav. Immun. 2009, 23, 184–188. [Google Scholar] [CrossRef] [PubMed]
  13. Nyklíček, I. Aspects of self-awareness in meditators and meditation-naïve participants: Self-report versus task performance. Mindfulness 2020, 11, 1028–1037. [Google Scholar] [CrossRef]
  14. Chems-Maarif, R.; Cavanagh, K.; Baer, R.; Gu, J.; Strauss, C. Defining Mindfulness: A Review of Existing Definitions and Suggested Refinements. Mindfulness 2025, 2025, 1–20. [Google Scholar] [CrossRef]
  15. Kabat-Zinn, J.; Hanh, T.N. Full Catastrophe Living: Using the Wisdom of Your Body and Mind to Face Stress, Pain, and Illness; Random House Publishing Group: New York, NY, USA, 2009. [Google Scholar]
  16. Salzberg, S.; Kabat-Zinn, J. Lovingkindness: The Revolutionary Art of Happiness; Shambhala Publications: Boulder, CO, USA, 2004. [Google Scholar]
  17. Fredrickson, B.L.; Cohn, M.A.; Coffey, K.A.; Pek, J.; Finkel, S.M. Open hearts build lives: Positive emotions, induced through loving-kindness meditation, build consequential personal resources. J. Personal. Soc. Psychol. 2008, 95, 1045. [Google Scholar] [CrossRef]
  18. Zeng, X.; Chiu, C.P.; Wang, R.; Oei, T.P.; Leung, F.Y. The effect of loving-kindness meditation on positive emotions: A meta-analytic review. Front. Psychol. 2015, 6, 1693. [Google Scholar] [CrossRef]
  19. Kabat-Zinn, J. Falling Awake: How to Practice Mindfulness in Everyday Life; Hachette: London, UK, 2018. [Google Scholar]
  20. Keng, S.L.; Smoski, M.J.; Robins, C.J. Effects of mindfulness on psychological health: A review of empirical studies. Clin. Psychol. Rev. 2011, 31, 1041–1056. [Google Scholar] [CrossRef]
  21. Marais, G.A.B.; Lantheaume, S.; Fiault, R.; Shankland, R. Mindfulness-based programs improve psychological flexibility, mental health, well-being, and time management in academics. Eur. J. Investig. Health Psychol. Educ. 2020, 10, 1035–1050. [Google Scholar] [CrossRef]
  22. Frank, J.L.; Jennings, P.A.; Greenberg, M.T. Validation of the mindfulness in teaching scale. Mindfulness 2016, 7, 155–163. [Google Scholar] [CrossRef]
  23. Molloy Elreda, L.; Jennings, P.A.; DeMauro, A.A.; Mischenko, P.P.; Brown, J.L. Protective effects of interpersonal mindfulness for teachers’ emotional supportiveness in the classroom. Mindfulness 2019, 10, 537–546. [Google Scholar] [CrossRef]
  24. Messina, I.; Calvo, V.; Masaro, C.; Ghedin, S.; Marogna, C. Interpersonal emotion regulation: From research to group therapy. Front. Psychol. 2021, 12, 636919. [Google Scholar] [CrossRef]
  25. Roy, A. Interpersonal emotion regulation and emotional intelligence: A review. Int. J. Res. Publ. Rev. 2023, 4, 623–627. [Google Scholar] [CrossRef]
  26. Ghosh, S.; Mitra, B.; De, P. Towards improving emotion self-report collection using self-reflection. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar]
  27. Kvamme, T.L.; Sandberg, K.; Silvanto, J. Mental Imagery as part of an ‘Inwardly Focused’ Cognitive Style. Neuropsychologia 2024, 5, 108988. [Google Scholar] [CrossRef]
  28. Guendelman, S.; Medeiros, S.; Rampes, H. Mindfulness and emotion regulation: Insights from neurobiological, psychological, and clinical studies. Front. Psychol. 2017, 8, 208068. [Google Scholar] [CrossRef] [PubMed]
  29. Salem, G.M.M.; Hashimi, W.; El-Ashry, A.M. Reflective mindfulness and emotional regulation training to enhance nursing students’ self-awareness, understanding, and regulation: A mixed method randomized controlled trial. BMC Nurs. 2025, 24, 478. [Google Scholar] [CrossRef] [PubMed]
  30. Bakker, D.; Rickard, N. Engagement in mobile phone app for self-monitoring of emotional wellbeing predicts changes in mental health: MoodPrism. J. Affect. Disord. 2018, 227, 432–442. [Google Scholar] [CrossRef] [PubMed]
  31. He, X.; Shi, W.; Han, X.; Wang, N.; Zhang, N.; Wang, X. The interventional effects of loving-kindness meditation on positive emotions and interpersonal interactions. Neuropsychiatr. Dis. Treat. 2015, 11, 1273–1277. [Google Scholar] [CrossRef]
  32. Hadi, S.A.A.; Gharaibeh, M. The role of self-awareness in predicting the level of emotional regulation difficulties among faculty members. Emerg. Sci. J. 2023, 7, 1274–1293. [Google Scholar] [CrossRef]
  33. Jerath, R.; Crawford, M.W.; Barnes, V.A.; Harden, K. Self-regulation of breathing as a primary treatment for anxiety. Appl. Psychophysiol. Biofeedback 2015, 40, 107–115. [Google Scholar] [CrossRef]
  34. Zaccaro, A.; Piarulli, A.; Laurino, M.; Garbella, E.; Menicucci, D.; Neri, B.; Gemignani, A. How breath-control can change your life: A systematic review on psycho-physiological correlates of slow breathing. Front. Hum. Neurosci. 2018, 12, 353. [Google Scholar] [CrossRef]
  35. Bentley, T.G.K.; D’Andrea-Penna, G.; Rakic, M.; Arce, N.; LaFaille, M.; Berman, R.; Cooley, K.; Sprimont, P. Breathing practices for stress and anxiety reduction: Conceptual framework of implementation guidelines based on a systematic review of the published literature. Brain Sci. 2023, 13, 1612. [Google Scholar] [CrossRef]
  36. Balban, M.Y.; Neri, E.; Kogon, M.M.; Weed, L.; Nouriani, B.; Jo, B.; Holl, G.; Zeitzer, J.M.; Spiegel, D.; Huberman, A.D. Brief structured respiration practices enhance mood and reduce physiological arousal. Cell Rep. Med. 2023, 4, 100895. [Google Scholar] [CrossRef] [PubMed]
  37. Jain, M.; Markan, C.M. Effect of Brief Meditation Intervention on Attention: An ERP Investigation. arXiv 2022, arXiv:2209.12625. [Google Scholar] [CrossRef]
38. Pilcher, J.J.; Byrne, K.A.; Weiskittel, S.E.; Clark, E.C.; Brancato, M.G.; Rosinski, M.L.; Spinelli, M.R. Brief slow-paced breathing improves working memory, mood, and stress in college students. Anxiety Stress Coping 2025, 528–543. [Google Scholar] [CrossRef] [PubMed]
  39. Ditto, B.; Eclache, M.; Goldman, N. Short-term autonomic and cardiovascular effects of mindfulness body scan meditation. Ann. Behav. Med. 2006, 32, 227–234. [Google Scholar] [CrossRef]
  40. Mirams, L.; Poliakoff, E.; Brown, R.J.; Lloyd, D.M. Brief body-scan meditation practice improves somatosensory perceptual decision making. Conscious. Cogn. 2013, 22, 348–359. [Google Scholar] [CrossRef]
  41. Dambrun, M.; Berniard, A.; Didelot, T.; Chaulet, M.; Droit-Volet, S.; Corman, M.; Juneau, C.; Martinon, L.M. Unified consciousness and the effect of body scan meditation on happiness: Alteration of inner-body experience and feeling of harmony as central processes. Mindfulness 2019, 10, 1530–1544. [Google Scholar] [CrossRef]
  42. Glissmann, C. Calm College: Testing a Brief Mobile App Meditation Intervention Among Stressed College Students. Master’s Thesis, Arizona State University, Glendale, CA, USA, 2018. [Google Scholar]
  43. Headspace: Meditation & Sleep (Version 3.371.0). Headspace Inc. (Santa Monica, CA, USA). Available online: https://apps.apple.com/us/app/headspace-meditation-health/id493145008 (accessed on 5 May 2025).
  44. Kokkiri (Version 3.4.6). Maeum Sueop. (Seoul, Republic of Korea). Available online: https://apps.apple.com/kr/app/%EC%BD%94%EB%81%BC%EB%A6%AC-%EC%88%98%EB%A9%B4-%EB%AA%85%EC%83%81/id1439995060 (accessed on 5 May 2025).
  45. TIDE: Sleep, Focus, Meditation (Version 4.4.7). Guangzhou Moreless Network Technology Co., Ltd. (Guangzhou, China). Available online: https://apps.apple.com/kr/app/tide-sleep-focus-meditation/id1077776989 (accessed on 5 May 2025).
  46. TaoMix 2-Relax, Sleep, Focus (Version 2.10.04). MWM. (Neuilly-sur-Seine, France). Available online: https://apps.apple.com/kr/app/taomix-2-relax-sleep-focus/id1032493819 (accessed on 5 May 2025).
  47. Meditopia: Sleep, Meditation (Version 4.17.1). Yedi70 Yazilim ve Bilgi Teknolojileri Anonim Sirketi. (Istanbul, Turkey). Available online: https://apps.apple.com/us/app/meditopia-sleep-meditation/id1190294015 (accessed on 10 May 2025).
  48. Edwards, D.J.; Kemp, A.H. A novel ACT-based video game to support mental health through embedded learning: A mixed-methods feasibility study protocol. BMJ Open 2020, 10, e041667. [Google Scholar] [CrossRef]
  49. Kim, H.; Choi, J.; Doh, Y.Y.; Nam, J. The melody of the mysterious stones: A VR mindfulness game using sound spatialization. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 30 April–5 May 2022. [Google Scholar]
  50. Miner, N. Stairway to Heaven: Breathing Mindfulness into Virtual Reality. Master’s Thesis, Northeastern University, Boston, MA, USA, 2022. [Google Scholar]
  51. Thomma, N. Gamified Mindfulness: A Novel Approach to Nontraditional Meditation. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2025. [Google Scholar]
  52. Barrett, L.F.; Gross, J.; Christensen, T.C.; Benvenuto, M. Knowing what you’re feeling and knowing what to do about it: Mapping the relation between emotion differentiation and emotion regulation. Cogn. Emot. 2001, 15, 713–724. [Google Scholar] [CrossRef]
  53. Yildirim, C.; O’Grady, T. The efficacy of a virtual reality-based mindfulness intervention. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Utrecht, The Netherlands, 14–18 December 2020. [Google Scholar]
  54. Kalantari, S.; Xu, T.B.; Mostafavi, A.; Lee, A.; Barankevich, R.; Boot, W.; Czaja, S. Using a nature-based virtual reality environment for improving mood states and cognitive engagement in older adults: A mixed-method feasibility study. Innov. Aging 2022, 6, igac015. [Google Scholar] [CrossRef]
  55. Cerasa, A.; Gaggioli, A.; Pioggia, G.; Riva, G. Metaverse in mental health: The beginning of a long history. Curr. Psychiatry Rep. 2024, 26, 294–303. [Google Scholar] [CrossRef]
  56. Shamim, N.; Wei, M.; Gupta, S.; Verma, D.S.; Abdollahi, S.; Shin, M.M. Metaverse for digital health solutions. Int. J. Inf. Manag. 2025, 83, 102869. [Google Scholar] [CrossRef]
  57. Sindiramutty, S.R.; Jhanjhi, N.Z.; Ray, S.K.; Jazri, H.; Khan, N.A.; Gaur, L. Metaverse: Virtual Meditation. In Metaverse Applications for Intelligent Healthcare; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 93–158. [Google Scholar]
  58. Ahuja, A.S.; Polascik, B.W.; Doddapaneni, D.; Byrnes, E.S.; Sridhar, J. The digital metaverse: Applications in artificial intelligence, medical education, and integrative health. Integr. Med. Res. 2023, 12, 100917. [Google Scholar] [CrossRef]
  59. Song, Y.T.; Qin, J. Metaverse and personal healthcare. Procedia Comput. Sci. 2022, 210, 189–197. [Google Scholar] [CrossRef]
  60. Del Hoyo, Y.L.; Elices, M.; Garcia-Campayo, J. Mental health in the virtual world: Challenges and opportunities in the metaverse era. World J. Clin. Cases 2024, 12, 2939. [Google Scholar] [CrossRef]
  61. Ali, S.; Abdullah; Armand, T.P.T.; Athar, A.; Hussain, A.; Ali, M.; Yaseen, M.; Joo, M.; Kim, H. Metaverse in healthcare integrated with explainable AI and blockchain: Enabling immersiveness, ensuring trust, and providing patient data security. Sensors 2023, 23, 565. [Google Scholar] [CrossRef] [PubMed]
  62. Matamala-Gomez, M.; Maselli, A.; Malighetti, C.; Realdon, O.; Mantovani, F.; Riva, G. Virtual body ownership illusions for mental health: A narrative review. J. Clin. Med. 2021, 10, 139. [Google Scholar] [CrossRef] [PubMed]
  63. Chengoden, R.; Victor, N.; Huynh-The, T.; Yenduri, G.; Jhaveri, R.H.; Alazab, M.; Bhattacharya, S.; Hegde, P.; Maddikunta, P.K.R.; Gadekallu, T.R. Metaverse for healthcare: A survey on potential applications, challenges and future directions. IEEE Access 2023, 11, 12765–12795. [Google Scholar] [CrossRef]
  64. Leandro, J.; Rao, S.; Xu, M.; Xu, W.; Jojic, N.; Brockett, C.; Dolan, B. GENEVA: GENErating and Visualizing branching narratives using LLMs. In Proceedings of the 2024 IEEE Conference on Games (CoG), Milan, Italy, 5–8 August 2024. [Google Scholar]
  65. Kumaran, V.; Rowe, J.; Lester, J. NARRATIVEGENIE: Generating narrative beats and dynamic storytelling with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Lexington, KY, USA, 18–22 November 2024. [Google Scholar]
  66. Nasir, M.U.; James, S.; Togelius, J. Word2world: Generating stories and worlds through large language models. arXiv 2024, arXiv:2405.06686. [Google Scholar]
  67. Buongiorno, S.; Klinkert, L.; Zhuang, Z.; Chawla, T.; Clark, C. PANGeA: Procedural Artificial Narrative Using Generative AI for Turn-Based, Role-Playing Video Games. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Lexington, KY, USA, 18–22 November 2024. [Google Scholar]
  68. Peng, X.; Quaye, J.; Rao, S.; Xu, W.; Botchway, P.; Brockett, C.; Jojic, N.; DesGarennes, G.; Lobb, K.; Xu, M.; et al. Player-driven emergence in llm-driven game narrative. In Proceedings of the 2024 IEEE Conference on Games (CoG), Milan, Italy, 5–8 August 2024. [Google Scholar]
  69. Rasool, A.; Shahzad, M.I.; Aslam, H.; Chan, V.; Arshad, M.A. Emotion-aware embedding fusion in large language models (Flan-T5, Llama 2, DeepSeek-R1, and ChatGPT 4) for intelligent response generation. AI 2025, 6, 56. [Google Scholar] [CrossRef]
  70. Mixamo. 2025. Available online: https://www.mixamo.com/ (accessed on 2 May 2025).
  71. OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
  72. Wilhelm, S.; Bernstein, E.E.; Bentley, K.H.; Snorrason, I.; Hoeppner, S.S.; Klare, D.; Harrison, O. Feasibility, acceptability, and preliminary efficacy of a smartphone app–led cognitive behavioral therapy for depression under therapist supervision: An open trial. JMIR Ment. Health 2024, 11, e53998. [Google Scholar] [CrossRef] [PubMed]
  73. Braun, S.S.; Roeser, R.W.; Mashburn, A.J. Results from a pre-post, uncontrolled pilot study of a mindfulness-based program for early elementary school teachers. Pilot. Feasibility Stud. 2020, 6, 178. [Google Scholar] [CrossRef] [PubMed]
  74. Cotter, E.W.; Hornack, S.E.; Fotang, J.P.; Pettit, E.; Mirza, N.M. A pilot open-label feasibility trial examining an adjunctive mindfulness intervention for adolescents with obesity. Pilot. Feasibility Stud. 2020, 6, 79. [Google Scholar] [CrossRef] [PubMed]
75. Kroenke, K.; Spitzer, R.L.; Williams, J.B.W. The PHQ-9: Validity of a brief depression severity measure. J. Gen. Intern. Med. 2001, 16, 606–613. [Google Scholar] [CrossRef]
  76. Beck, A.T.; Ward, C.H.; Mendelson, M.; Mock, J.; Erbaugh, J. An inventory for measuring depression. Arch. Gen. Psychiatry 1961, 4, 561–571. [Google Scholar] [CrossRef]
  77. Spitzer, R.L.; Kroenke, K.; Williams, J.B.; Löwe, B. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch. Intern. Med. 2006, 166, 1092–1097. [Google Scholar] [CrossRef]
  78. Beck, A.T.; Epstein, N.; Brown, G.; Steer, R.A. An inventory for measuring clinical anxiety: Psychometric properties. J. Consult. Clin. Psychol. 1988, 56, 893. [Google Scholar] [CrossRef]
79. Cohen, S.; Kamarck, T.; Mermelstein, R. A global measure of perceived stress. J. Health Soc. Behav. 1983, 24, 385–396. [Google Scholar] [CrossRef]
  80. Bohlmeijer, E.; ten Klooster, P.M.; Fledderus, M.; Veehof, M.; Baer, R. Psychometric properties of the Five Facet Mindfulness Questionnaire (FFMQ) in a Dutch sample. Mindfulness 2011, 2, 90–95. [Google Scholar]
  81. Bond, F.W.; Hayes, S.C.; Baer, R.A.; Carpenter, K.M.; Orcutt, H.K.; Waltz, T.; Zettle, R.D. Preliminary psychometric properties of the Acceptance and Action Questionnaire–II: A revised measure of psychological inflexibility and experiential avoidance. Behav. Ther. 2011, 42, 676–688. [Google Scholar] [CrossRef] [PubMed]
  82. Raes, F.; Pommier, E.; Neff, K.D.; Van Gucht, D. Construction and factorial validation of a short form of the Self-Compassion Scale. Clin. Psychol. Psychother. 2011, 18, 250–255. [Google Scholar] [CrossRef] [PubMed]
  83. Rosenberg, M. Society and the Adolescent Self-Image; Princeton University Press: Princeton, NJ, USA, 1965. [Google Scholar]
  84. Kang, M. Effects of Enneagram Program for Self-Esteem, Interpersonal Relationships, and GAF in Psychiatric Patients. Ph.D. Thesis, Seoul National University, Seoul, Republic of Korea. [CrossRef]
  85. Shapiro, S.S.; Wilk, M.B. An analysis of variance test for normality (complete samples). Biometrika 1965, 52, 591–611. [Google Scholar] [CrossRef]
  86. Smirnov, N. Table for estimating the goodness of fit of empirical distributions. Ann. Math. Stat. 1948, 19, 279–281. [Google Scholar] [CrossRef]
  87. Tsagris, M.; Alenazi, A.; Verrou, K.M.; Pandis, N. Hypothesis testing for two population means: Parametric or non-parametric test. J. Stat. Comput. Simul. 2020, 90, 252–270. [Google Scholar] [CrossRef]
  88. Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1945, 1, 80–83. [Google Scholar] [CrossRef]
  89. Ghasemi, A.; Zahediasl, S. Normality tests for statistical analysis: A guide for non-statisticians. Int. J. Endocrinol. Metab. 2012, 10, 486. [Google Scholar] [CrossRef]
  90. Akbar, Z.; Ghani, M.U.; Aziz, U. Boosting Viewer Experience with Emotion-Driven Video Analysis: A BERT-based Framework for Social Media Content. J. Artif. Intell. 2025, 1, 3–11. [Google Scholar]
  91. Demszky, D.; Movshovitz-Attias, D.; Ko, J.; Cowen, A.; Nemade, G.; Ravi, S. GoEmotions: A dataset of fine-grained emotions. arXiv 2020, arXiv:2005.00547. [Google Scholar]
92. SamLowe. roberta-base-go_emotions. Hugging Face. Available online: https://huggingface.co/SamLowe/roberta-base-go_emotions/ (accessed on 6 September 2025).
  93. Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv 2022, arXiv:2203.05794. [Google Scholar]
  94. Barth, L.; Kobourov, S.; Pupyrev, S.; Ueckerdt, T. On Semantic Word Cloud Representation. arXiv 2013, arXiv:1304.8016. [Google Scholar]
  95. Schubert, E.; Spitz, A.; Weiler, M.; Geiß, J.; Gertz, M. Semantic word clouds with background corpus normalization and t-distributed stochastic neighbor embedding. arXiv 2017, arXiv:1708.03569. [Google Scholar] [CrossRef]
  96. Rasool, A.; Aslam, S.; Hussain, N.; Imtiaz, S.; Riaz, W. nbert: Harnessing nlp for emotion recognition in psychotherapy to transform mental health care. Information 2025, 16, 301. [Google Scholar] [CrossRef]
Figure 1. Metaverse mindfulness meditation system.
Figure 2. Overall system architecture.
Figure 3. User interaction in the experience-based mindfulness meditation space.
Figure 4. Spaces of the user immersion induction module: (a) home; (b) interview room; (c) subway; (d) restaurant; (e) office.
Figure 5. Architecture for immersion induction in the metaverse.
Figure 6. Example of a scenario generated by the LLM.
Figure 7. Architecture for crowd simulation.
Figure 8. Reflective emotion awareness GUI.
Figure 9. Space of inner reflection (mirror space).
Figure 10. User interaction and feedback in the breathing-focused meditation space.
Figure 11. Architecture for breathing-focused meditation in the metaverse.
Figure 12. Spaces of the breathing-focused meditation module: (a) beach; (b) forest; (c) campfire; (d) meeting room; (e) bedroom.
Figure 13. Detailed view of the body scan.
Figure 14. Real-time breathing synchronization interface.
Figure 15. User interface of the breathing-focused meditation module: (a) start of breathing; (b) first 15 guided breaths; (c) completion of 50 breaths.
Figure 16. Breathing performance user interface. The asterisk (*) in each label denotes an explanatory description of the corresponding metric.
Table 1. Functional analysis and comparison of meditation systems.

Columns (systems): Calm [42]; Headspace [43]; Kokkiri [44]; Tide [45]; TaoMix2 [46]; Meditopia [47]; ACTing Mind [48]; TMMS [49]; Stairway to Heaven [50]; MindFlourish [51]; and our system.
Rows (system functions): foundational mindfulness training; personalized and guided meditation; cognitive regulation and emotional management; tracking of breathing data and feedback; gamification; multisensory (audiovisual) experience; and immersive virtual environment.
"✓" indicates presence. TMMS: The Melody of the Mysterious Stones.
Table 2. Calculation methods of the questionnaires used.

No. | Questionnaire | Calculation Method | Formula
1 | Acceptance and Action Questionnaire-II (AAQ-II) | Total score | $\text{AAQ Score} = \sum_{i=1}^{7} \text{Item Score}_i$
2 | Beck Anxiety Inventory (BAI) | Total score | $\text{BAI Score} = \sum_{i=1}^{21} \text{Item Score}_i$
3 | Beck Depression Inventory (BDI) | Total score | $\text{BDI Score} = \sum_{i=1}^{21} \text{Item Score}_i$
4 | Five Facet Mindfulness Questionnaire (FFMQ-15) | Subscale and aggregate scores | $\text{FFMQ Score}_k = \tfrac{1}{3} \sum_{i \in Q_k} \text{Item Score}_i$; $\text{Aggregate FFMQ Score} = \tfrac{1}{15} \sum_{i=1}^{15} \text{Item Score}_i$, where $Q_k$ is the set of the three items in facet $k$
5 | Generalized Anxiety Disorder-7 (GAD-7) | Total score | $\text{GAD-7 Score} = \sum_{i=1}^{7} \text{Item Score}_i$
6 | Metaverse Content Suitability Assessment (MCSA) | Subjective evaluation | N/A
7 | Patient Health Questionnaire-9 (PHQ-9) | Total score | $\text{PHQ-9 Score} = \sum_{i=1}^{9} \text{Item Score}_i$
8 | Perceived Stress Scale (PSS) | Total score | $\text{PSS Score} = \sum_{i=1}^{10} \text{Item Score}_i$
9 | Rosenberg Self-Esteem Scale (RSES) | Total score | $\text{RSES Score} = \sum_{i=1}^{10} \text{Item Score}_i$
10 | Self-Compassion Scale—Short Form (SCS-SF) | Average score | $\text{SCS Score} = \tfrac{1}{12} \sum_{i=1}^{12} \text{Item Score}_i$
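As a worked illustration of the formulas above, the following minimal Python sketch applies the Table 2 scoring rules; the item arrays are hypothetical examples, not study data.

```python
import numpy as np

# Scoring rules from Table 2, applied to hypothetical item responses.

def total_score(items: np.ndarray) -> float:
    """Sum-scored scales: AAQ-II, BAI, BDI, GAD-7, PHQ-9, PSS, RSES."""
    return float(np.sum(items))

def mean_score(items: np.ndarray) -> float:
    """Average-scored scale: SCS-SF (12 items); also the FFMQ-15 aggregate."""
    return float(np.mean(items))

def ffmq15_facet(items: np.ndarray) -> float:
    """FFMQ-15 facet score: mean of the three items in facet k."""
    assert len(items) == 3, "each FFMQ-15 facet has exactly 3 items"
    return float(np.sum(items) / 3)

# Hypothetical responses, for illustration only.
phq9 = np.array([1, 0, 2, 1, 0, 1, 0, 1, 0])   # 9 items scored 0-3
observing = np.array([4, 3, 3])                # one FFMQ-15 facet (3 items)
scs_sf = np.full(12, 3)                        # 12 items scored 1-5

print(total_score(phq9))        # 6.0
print(ffmq15_facet(observing))  # 3.33...
print(mean_score(scs_sf))       # 3.0
```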
Table 3. Participants' demographic characteristics.

Characteristic | Category | Frequency | Percentage (%)
Gender | Male | 9 | 29.0
Gender | Female | 22 | 71.0
Age | 18–19 | 1 | 3.2
Age | 20–29 | 19 | 61.3
Age | 30–39 | 5 | 16.1
Age | 40–49 | 6 | 19.4
Table 4. Normality test results for pre–post difference scores.

Measure | KS Statistic | KS Sig. (p) | SW Statistic | SW Sig. (p)
Depression
BDI | 0.123 | 0.200 * | 0.964 | 0.364
PHQ-9 | 0.142 | 0.115 | 0.951 | 0.162
Anxiety
BAI | 0.150 | 0.074 | 0.966 | 0.409
GAD-7 | 0.177 | 0.015 | 0.929 | 0.042
Stress
PSS | 0.131 | 0.190 | 0.951 | 0.164
Mindfulness (FFMQ-15)
Aggregate | 0.098 | 0.200 * | 0.979 | 0.798
Observing | 0.155 | 0.054 | 0.968 | 0.476
Describing | 0.127 | 0.200 * | 0.929 | 0.042
Acting awareness | 0.114 | 0.200 * | 0.959 | 0.283
Nonjudging | 0.169 | 0.025 | 0.950 | 0.154
Nonreactivity | 0.143 | 0.108 | 0.965 | 0.390
Psychological flexibility
AAQ-II | 0.106 | 0.200 * | 0.965 | 0.391
Self-compassion
SCS-SF | 0.170 | 0.023 | 0.807 | 0.000
Self-esteem
RSES | 0.164 | 0.033 | 0.889 | 0.004
KS, Kolmogorov–Smirnov (a); SW, Shapiro–Wilk; BDI, Beck Depression Inventory; PHQ-9, Patient Health Questionnaire-9; BAI, Beck Anxiety Inventory; GAD-7, Generalized Anxiety Disorder-7; PSS, Perceived Stress Scale; FFMQ-15, Five Facet Mindfulness Questionnaire-15; AAQ-II, Acceptance and Action Questionnaire-II; SCS-SF, Self-Compassion Scale—Short Form; RSES, Rosenberg Self-Esteem Scale; Statistic, test statistic value from the corresponding statistical test; Sig., significance level (p-value); * indicates that the value is the upper bound of true significance.
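Table 4 encodes the routing decision used in Tables 5, 6, and 10: each measure's pre–post difference scores are tested for normality, and measures that pass go to a paired t-test while the rest go to a Wilcoxon signed-rank test. The sketch below, using randomly generated stand-in data, shows one way to implement that rule in Python; note that SciPy's one-sample KS test on standardized scores is only an approximation of the Lilliefors-corrected KS significance typically reported by statistics packages.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for one measure (n = 31); stand-in data only.
rng = np.random.default_rng(0)
pre = rng.normal(13.1, 7.8, 31)
post = rng.normal(7.9, 5.8, 31)
diff = pre - post

# Normality of the difference scores.
sw_stat, sw_p = stats.shapiro(diff)                        # Shapiro-Wilk
ks_stat, ks_p = stats.kstest(stats.zscore(diff), "norm")   # approximate KS check

# Route to a parametric or non-parametric paired comparison.
if sw_p >= 0.05:  # normality not rejected
    t_stat, p = stats.ttest_rel(pre, post)
    print(f"paired t = {t_stat:.3f}, p = {p:.4f}")
else:
    w_stat, p = stats.wilcoxon(pre, post)
    print(f"Wilcoxon W = {w_stat:.1f}, p = {p:.4f}")
```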
Table 5. Pre–post intervention comparison of psychological distress. Values are mean (SD); intervention group (n = 31).

Psychological Measure | Pre | Post | t/Z | p | Effect Size
Depression
BDI | 13.10 (7.83) | 7.94 (5.78) | 4.462 | <0.001 | 0.801 (d)
PHQ-9 | 7.48 (5.21) | 3.42 (3.17) | 6.804 | <0.001 | 1.222 (d)
Anxiety
BAI | 9.52 (7.27) | 6.10 (5.86) | 4.217 | <0.001 | 0.757 (d)
GAD-7 | 5.52 (4.58) | 1.97 (2.21) | −4.217 (Z) | <0.001 | 0.757 (r)
Stress
PSS | 18.84 (3.91) | 16.39 (3.99) | 3.132 | 0.004 | 0.563 (d)
BDI, Beck Depression Inventory; PHQ-9, Patient Health Questionnaire-9; BAI, Beck Anxiety Inventory; GAD-7, Generalized Anxiety Disorder-7; PSS, Perceived Stress Scale; SD, standard deviation. t, paired-sample t-test; Z, Wilcoxon signed-rank test; d, Cohen's d (effect size for t-test); r, Cohen's r (effect size for Wilcoxon test); p, p-values indicate statistical significance, where p < 0.05 is considered statistically significant.
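The reported effect sizes appear to follow the standard paired-design conversions, Cohen's $d = t/\sqrt{n}$ for t-tests and $r = |Z|/\sqrt{n}$ for Wilcoxon tests; plugging the Table 5 statistics into these formulas reproduces the tabled values, as the short check below shows.

```python
import math

n = 31  # participants

# Cohen's d from the paired t statistic: d = t / sqrt(n).
d_bdi = 4.462 / math.sqrt(n)          # BDI row: t = 4.462
# Effect size r from the Wilcoxon Z statistic: r = |Z| / sqrt(n).
r_gad7 = abs(-4.217) / math.sqrt(n)   # GAD-7 row: Z = -4.217

print(f"BDI d   = {d_bdi:.3f}")   # 0.801, as reported
print(f"GAD-7 r = {r_gad7:.3f}")  # 0.757, as reported
```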
Table 6. Pre–post intervention comparisons for mindfulness facets. Values are mean (SD); intervention group (n = 31).

Mindfulness Facet | Pre | Post | t/Z | p | Effect Size
Aggregate | 3.09 (0.59) | 3.35 (0.57) | −3.153 | 0.002 (one-tailed) | 0.566 (d)
Observing | 3.50 (1.06) | 3.31 (1.23) | −1.683 | 0.051 (one-tailed) | 0.302 (d)
Describing | 3.16 (1.21) | 3.51 (1.16) | −1.883 (Z) | 0.060 | n/a
Acting awareness | 3.46 (1.15) | 3.87 (0.88) | −1.852 | 0.037 (one-tailed) | 0.333 (d)
Nonjudging | 3.32 (1.11) | 3.13 (1.25) | 0.899 | 0.188 (one-tailed) | 0.157 (d)
Nonreactivity | 2.59 (1.00) | 2.94 (1.15) | −1.774 | 0.043 (one-tailed) | 0.319 (d)
The describing facet was analyzed using the Wilcoxon signed-rank test (Z-value); all other facets were analyzed using paired-sample t-tests. t, paired-sample t-test; Z, Wilcoxon signed-rank test; d, Cohen's d (effect size for t-test); r, Cohen's r (effect size for Wilcoxon test); p, p-values indicate statistical significance, where p < 0.05 is considered statistically significant.
Table 7. Kaiser–Meyer–Olkin measure and Bartlett's test of sphericity for mindfulness facets.

Kaiser–Meyer–Olkin measure of sampling adequacy | 0.598
Bartlett's test of sphericity | Approximate chi-square (χ²) | 184.799
Bartlett's test of sphericity | Degrees of freedom | 105
Bartlett's test of sphericity | Significance (p) | <0.001
Table 8. Rotated factor loadings for mindfulness facet items.

Item | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5
Nonreactivity3 | 0.838 | 0.004 | 0.027 | 0.028 | −0.183
Nonreactivity1 | 0.789 | 0.031 | 0.126 | 0.252 | −0.036
Observing2 | 0.650 | 0.216 | 0.080 | −0.300 | 0.060
Describing1 | 0.078 | 0.931 | 0.165 | 0.049 | −0.029
Describing2 | 0.059 | 0.889 | 0.328 | 0.045 | −0.006
Observing3 | 0.103 | 0.583 | −0.049 | −0.540 | 0.090
Acting Awareness2 | 0.174 | 0.211 | 0.886 | −0.050 | 0.095
Acting Awareness3 | −0.038 | 0.150 | 0.819 | −0.081 | 0.207
Acting Awareness1 | 0.222 | 0.142 | 0.529 | 0.642 | 0.014
Nonjudging3 | −0.290 | 0.082 | −0.026 | 0.658 | 0.372
Observing1 | −0.157 | 0.084 | 0.213 | −0.842 | 0.102
Nonjudging1 | −0.205 | −0.111 | 0.023 | 0.151 | 0.814
Nonjudging2 | 0.198 | 0.054 | 0.341 | 0.143 | 0.716
Nonreactivity2 | 0.328 | −0.004 | −0.247 | 0.240 | −0.707
Describing3 | 0.315 | 0.285 | −0.171 | −0.184 | 0.439
A factor is a latent dimension from EFA; entries are varimax-rotated loadings (item–factor correlations, sign = direction), with |loading| ≥ 0.40 treated as salient; the salient loadings identify the items retained to define each factor. When an item cross-loaded, we assigned it to the factor with the larger loading that best fit the FFMQ-15 theory.
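Tables 7 and 8 summarize a standard exploratory factor analysis pipeline: sampling-adequacy checks, then a five-factor solution with varimax rotation. A minimal sketch under stated assumptions follows; `X` is a hypothetical (participants × 15 items) response matrix, Bartlett's test is computed from its textbook formula, and the rotation uses scikit-learn's FactorAnalysis (the KMO measure is available separately, e.g., via the factor_analyzer package's calculate_kmo). With the study's actual item-level responses in `X`, the printed loading matrix would be the analogue of Table 8.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical 31 x 15 matrix of FFMQ-15 item responses (stand-in data).
rng = np.random.default_rng(1)
X = rng.normal(3.0, 1.0, size=(31, 15))

# Bartlett's test of sphericity:
#   chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|,  df = p(p - 1) / 2
n, p = X.shape
R = np.corrcoef(X, rowvar=False)
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
print(f"Bartlett chi2 = {chi2:.3f}, df = {df:.0f}")

# Five-factor EFA with varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(X)
loadings = pd.DataFrame(
    fa.components_.T,  # items x factors, analogous to Table 8
    columns=[f"Factor {k + 1}" for k in range(5)],
)
print(loadings.round(3))
```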
Table 9. Pre–post intervention comparison of psychological flexibility. Values are mean (SD); intervention group (n = 31).

Psychological Measure | Pre | Post | t | p | Effect Size
AAQ-II | 24.81 (10.97) | 21.90 (9.62) | 2.024 | 0.026 (one-tailed) | 0.364 (d)
AAQ-II, Acceptance and Action Questionnaire-II; t, paired-sample t-test; d, Cohen's d (effect size for t-test); p, p-values indicate statistical significance, where p < 0.05 is considered statistically significant.
Table 10. Pre–post intervention comparisons for self-compassion and self-esteem. Values are mean (SD); intervention group (n = 31).

Psychological Measure | Pre | Post | Z | p | Effect Size
Self-Compassion
SCS-SF | 2.78 (0.76) | 3.18 (0.68) | −3.380 | 0.001 | 0.607 (r)
Self-Esteem
RSES | 33.00 (8.91) | 36.06 (7.84) | −2.975 | 0.003 | 0.534 (r)
SCS-SF, Self-Compassion Scale—Short Form; RSES, Rosenberg Self-Esteem Scale. Z, Wilcoxon signed-rank test; r, Cohen's r (effect size for Wilcoxon test); p, p-values indicate statistical significance, where p < 0.05 is considered statistically significant.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
