Systematic Review

AI-Powered Procedural Haptics for Narrative VR: A Systematic Literature Review

by
Vimala Perumal
1 and
Zeeshan Jawed Shah
1,2,*
1
Faculty of Creative Multimedia, Multimedia University, Cyberjaya 63100, Malaysia
2
College of Architecture & Design, Prince Mohammad Bin Fahd University, Al Khobar 31952, Saudi Arabia
*
Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2026, 10(1), 9; https://doi.org/10.3390/mti10010009
Submission received: 13 August 2025 / Revised: 2 November 2025 / Accepted: 12 November 2025 / Published: 9 January 2026

Abstract

Haptic feedback is important for narrative virtual reality (VR), yet authoring remains costly and difficult to scale due to device-specific tuning, placement constraints, and the need for semantically congruent timing. We systematically reviewed user studies on haptics in narrative VR to establish an empirical baseline and identify gaps for AI-powered procedural haptics. Following PRISMA 2020, we searched IEEE Xplore, ACM Digital Library, Scopus, Web of Science, PubMed, and PsycINFO (English; human participants; haptics synchronized to narrative events) and performed backward/forward citation chasing (final search: 31 July 2025). We also conducted a parallel scoping scan of grey literature (arXiv and CHI/SIGGRAPH workshops/demos), finalized on 7 September 2025; these records are summarized separately and were not included in the evidence synthesis. Of 493 records screened, 26 full texts were assessed, and 10 studies were included. Quantitatively, presence improved in 6/8 studies that measured it and immersion improved in 3/3; sample sizes ranged 8–108. Across varied modalities and placements, haptics improved presence and immersion and often enhanced affect; validated measures of narrative comprehension were rare. None of the included studies evaluated AI-generated procedural haptics in user studies. We conclude by proposing a structured, three-phase research roadmap designed to bridge this critical gap, moving the field from theoretical promise to the empirical validation of intelligent systems capable of making rich, adaptive, and scalable haptic narratives a reality.

1. Introduction

1.1. The Sensory Frontier of Digital Narratives: Haptics in VR Storytelling

As VR matures as a storytelling medium, design has shifted from a narrow focus on graphics to multisensory experience. The medium is defined less by hardware and more by its capacity to create telepresence—the feeling of “being there” despite physical location [1]. The ability to see, hear, and feel in a virtual environment is one of the principal frontiers of creating truly compelling interactive experiences [2]. Encompassing the modalities of touch, force, and motion [3], haptic feedback is a key connection between the user’s physical body and the digital story world, transforming the user from passive observer into active participant [4]. The inclusion of haptics is a crucial step toward a realistic and immersive VR experience, advancing the medium beyond the visual and auditory paradigms that have defined digital media for the past several decades.
Touch is an active sense that allows exploratory, bidirectional interaction with the environment, unlike the largely passive intake of vision or audition [5]. Haptic perception spans cutaneous (tactile) and kinesthetic/force components and related motion cues [3]. In VR/HCI studies, adding haptics reliably enriches interaction and often increases perceived realism and presence [6,7].

1.2. The Psychological Impact of Haptics: Presence, Embodiment, and Emotion

The effects of haptics are measured in terms of several major user experience (UX) outcomes key to the effectiveness of VR storytelling. An understanding of these concepts according to core literature is necessary to evaluate the state of the field.
  • Immersion: Technically defined as an objective property of a technology, immersion describes the degree to which a VR system is capable of providing the user with an illusion of coherent, enveloping sensory input and removing the user from his or her sensation of the physical world [8]. Factors affecting immersion include the quality of displays, the number of sensory modalities that are rendered, and the system’s latency in tracking user movement [9].
  • Presence: Generally defined as a subjective psychological result of immersion, presence refers to the feeling of “being there” in the virtual world [10,11]. It refers to a state of consciousness in which the manipulated sensory models of the virtual world predominate, causing the user to behave and respond as if he or she were part of the virtual world and the events occurring within it were real [12]. A high sense of presence is a chief goal of most VR applications, especially those focusing on narrative and training.
  • Narrative Comprehension: The user’s cognitive ability to follow, understand, and interpret a story’s plot, character motives, and themes. According to cognitive models of discourse comprehension, narrative comprehension is the process of constructing a coherent mental “situation model” of the story world [13]. Ideally, the role of haptics would be to clarify or bolster narrative information over the course of such construction and not merely constitute a distracting sensory embellishment.
  • Emotional Engagement: The valence and intensity of the user’s affective response to a narrative. Examples include empathy with characters, anxiety during suspenseful plot points, and arousal in response to story action. Touch is an inherently affective modality, which makes haptics a powerful candidate for emotional modulation [6]. Affective haptics research has demonstrated that localized patterns of tactile stimuli can be used to classify and communicate specific emotions, indicating that haptic feedback can potentially serve as a direct channel of emotional contagion between a user and a virtual character or narrative [14].

1.3. The Authoring Dilemma: From Manual Craftsmanship to Automated Generation

Despite the well-established benefits, a production problem significantly hinders the widespread adoption of high-quality haptic feedback for VR narratives. Haptic authoring is a specific example of the “authorial burden” or “authoring bottleneck” problem that plagues all interactive media: since content demands can increase exponentially with the number of choices for interaction, this makes manual creation of interactivity prohibitively expensive and time-consuming [15]. Two very different paradigms of haptic creation exist. The traditional and contemporary “gold standard” is manual authoring, in which designers painstakingly handcraft and script cues for specific moments of a narrative. Manual authoring affords a high degree of artistic control and nuance and can lead to evocative, contextually situated sensory experiences. The set of studies surveyed in this review showcases some of the best results of manual authoring: Hand-curated vibrotactile signals meant to evoke distinct affective sensations in a haunted house narrative [16], carefully timed electrical muscle stimulation (EMS) coincident with non-interactive cutscenes [17], and intimate, performer-authored touch in immersive theatre [18] are all various forms of manual authoring that have been shown to impact UX outcomes positively. However, this artisanal approach to haptic creation has serious limitations. Most importantly, it is incredibly resource intensive to produce and is difficult to scale for long-form or intricate, branching narratives. Furthermore, manually authored cues are by their very nature static, unable to dynamically respond in real-time to emergent narrative events or dynamic user behaviors: a critical failure mode of interactive storytelling.
In response to these constraints, AI-powered procedural haptic generation has arisen as a promising alternative to the dominant paradigm. This approach is one application of procedural content generation (PCG), a collection of techniques widely applied (and, in many subdomains, heavily researched) across the video game industry to algorithmically generate game content such as levels, textures, and quests, increasing replayability and reducing authoring burden [19]. Applied to haptics, PCG uses algorithms, machine learning models, or other AI techniques to generate haptic feedback autonomously in real time. Such systems can draw on many kinds of inputs—narrative state, in-world physics, audio–visual cues, and even user biometrics—to synthesize appropriate haptic responses on the fly. The potential advantages are substantial: increased authoring efficiency, accommodation of large and dynamic worlds, and the ability to deliver unique, personalized, adaptive haptic experiences tailored to each user’s journey through the story.

1.4. Rationale and Objectives

While the theoretical promise of AI-procedural haptics is compelling, a consolidated understanding of its empirical impact on user experience is conspicuously absent from the scientific literature. A systematic review is therefore necessary to first map the existing evidence from established non-AI methods to identify what is known about effective haptic design in storytelling and, critically, to determine what remains unknown. This synthesis can provide an empirical baseline against which future automated systems can be compared and guide the development of more efficient and effective VR storytelling tools.
Primary research question (reframed): In immersive VR storytelling, what user-experience outcomes are associated with synchronized manually authored haptics (vs. audio–visual only), and what baseline patterns emerge that can inform future AI-driven procedural approaches?
Objectives:
  • Synthesize the empirical effects of synchronized (non-AI) haptics on presence, immersion, emotion, and narrative comprehension across study designs.
  • Characterize design factors (e.g., modality, narrative congruence, timing/placement) associated with stronger outcomes.
  • Establish a baseline for comparative evaluation by future AI-procedural haptics research and clearly identify gaps (e.g., missing effect sizes, scarce comprehension measures).
  • Map threats to validity and reporting issues to support stronger study design and reporting standards in this emerging area [20].
As will be demonstrated, the results of the systematic search fundamentally influenced the contribution of this review, reformulating it from a direct comparison to a critical analysis of a major research shortfall.

2. Methods

2.1. Protocol, Registration, and Reporting Guideline

This review followed the PRISMA 2020 reporting guideline [20]. No prior registration was filed because the most widely used registry for health and social-care reviews [21] does not accept non-health HCI topics; its scope explicitly limits registrations to health-related questions and excludes methodological/technology-only reviews [21]. In HCI and adjacent computing fields, OSF Registries are commonly used for timestamped preregistration outside PROSPERO’s scope; here we followed PRISMA 2020 closely and provide full, reproducible methods (search strings, flow, extraction forms) to ensure transparency [20,22]. In future updates, we plan to preregister the protocol on OSF to further strengthen transparency and credibility.

2.2. Eligibility Criteria

  • Population (P): Human participants engaging with narrative virtual reality (VR) experiences. This is defined as experiences with a predefined story structure, whether linear or branching, thereby excluding purely sandbox environments or goal-oriented gameplay without a clear narrative arc.
  • Intervention (I): Haptic feedback synchronized to narrative events. This includes manually authored cues (hand-scripted to specific timestamps or events) and rule-based procedural cues (e.g., direct, deterministic mapping of a character’s biosignals to haptic output). The search explicitly targeted, but did not find, studies employing adaptive, AI-driven generative models for haptic synthesis.
  • Comparison (C): A no-haptics control (audio–visual only) and an alternative haptic strategy (e.g., vibrotactile vs. EMS; different timing/placement/intensity).
  • Outcomes (O): At least one user-experience outcome relevant to narrative VR: presence, immersion, emotional/affective engagement, and narrative comprehension. Validated instruments or defined custom measures were accepted.
  • Study design (S): Experimental, quasi-experimental, or mixed-methods user studies conducted with human participants and published in peer-reviewed journals or conference proceedings in English.
Exclusions: Reviews, commentaries, theoretical papers, qualitative-only work without a user study, purely technical/system papers without human evaluation, non-immersive or desktop/AR-only studies, and studies lacking synchronized haptics.
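To make the intervention criterion concrete, a “rule-based procedural cue” in the sense used above is a direct, deterministic mapping from a narrative or physiological signal to haptic output, with no learned or generative component. The following is a minimal hypothetical sketch (the function name and parameter ranges are our illustration, not taken from any included study):

```python
def heartbeat_to_haptic(bpm, t):
    """Deterministic rule-based cue: map a character's heart rate (bpm)
    to a vibrotactile pulse amplitude in [0, 1] at time t (seconds).
    Hypothetical illustration; real actuator APIs and ranges will differ."""
    period = 60.0 / bpm            # seconds per beat
    phase = (t % period) / period  # position within the current beat, [0, 1)
    # Emit a short pulse at the start of each beat, silence otherwise.
    return 1.0 if phase < 0.1 else 0.0
```

Because the mapping is fixed in advance, such cues fall within this review’s scope; adaptive, AI-driven generative synthesis (which the search targeted but did not find) would replace this fixed rule with a learned model.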

2.3. Information Sources and Search Strategy

We searched IEEE Xplore, ACM Digital Library, Scopus, Web of Science Core Collection, PubMed, and PsycINFO from inception through 31 July 2025 (Asia/Al Khobar). Search strings followed four concept blocks (VR/storytelling, haptics, AI/procedural, and user-experience outcomes); full PRISMA-S strings and filters appear in Appendix A. We supplemented the database searches with backward and forward citation chasing on the included studies, as well as hand-searching reference lists and relevant venues; Google Scholar was used solely for citation chasing and hand-searching, not as a primary database. Records were managed in Zotero 6 using the multi-pass de-duplication protocol detailed below.
In parallel, we scanned arXiv and program pages/proceedings for ACM CHI and SIGGRAPH workshops/demos to map preliminary AI-driven haptic generation efforts relevant to narrative VR, using the same concept blocks; the grey-literature scoping had a final search date of 7 September 2025. Because these items typically lacked human-participant user studies, they were not included in the PRISMA flow or evidence synthesis; we summarize them descriptively in Results (Section 3.2) and list them in Appendix B (Table A1).
Record management & de-duplication: Database exports (RIS/CSV with abstracts) were imported into Zotero 6 and normalized (title lower-casing, punctuation stripping; DOI normalization). We applied a multi-pass de-duplication protocol:
  • Exact DOI match (auto-merge): remove exact duplicates.
  • Title + first-author + year (normalized): merge where all three match.
  • Fuzzy title match (≥0.85 Jaro–Winkler): manual inspection of near-duplicates and conference/journal pairs.
  • Version policy: where a peer-reviewed version existed, corresponding preprints were retained only in the grey-literature list and excluded from PRISMA counts; where conference and journal versions existed, we retained the most complete archival version and merged metadata.
This procedure removed 73 duplicates (57 DOI matches; 16 near-duplicates/manual merges).
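The three automated passes above can be sketched in code. This is a hypothetical re-implementation for illustration only, not the script used in this review; the record fields (`doi`, `title`, `first_author`, `year`) and the hand-rolled Jaro–Winkler function are our assumptions:

```python
import re

def jaro(s1, s2):
    """Jaro similarity between two strings, in [0, 1]."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(len1, len2) // 2 - 1
    matched1, matched2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):  # count characters matching within the window
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions, k = 0, 0
    for i in range(len1):  # count transposed pairs among matched characters
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Jaro-Winkler: boosts Jaro similarity for a shared prefix (max 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

def normalize_title(title):
    """Lower-case and strip punctuation, as in the normalization step."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def dedupe(records, fuzzy_threshold=0.85):
    """Three-pass de-duplication: exact DOI, normalized title+author+year,
    then fuzzy title (near-duplicates are flagged for manual inspection,
    never auto-merged)."""
    kept, flagged = [], []
    seen_dois, seen_keys = set(), set()
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        if doi and doi in seen_dois:          # pass 1: exact DOI
            continue
        key = (normalize_title(rec["title"]),
               rec["first_author"].lower(), rec["year"])
        if key in seen_keys:                  # pass 2: title+author+year
            continue
        near = next((k for k in kept          # pass 3: fuzzy title
                     if jaro_winkler(normalize_title(rec["title"]),
                                     normalize_title(k["title"]))
                     >= fuzzy_threshold), None)
        if near is not None:
            flagged.append((rec, near))       # queue for manual inspection
            continue
        if doi:
            seen_dois.add(doi)
        seen_keys.add(key)
        kept.append(rec)
    return kept, flagged
```

The design choice of flagging (rather than auto-merging) fuzzy matches mirrors the protocol’s requirement of manual inspection for near-duplicates and conference/journal pairs.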

2.4. Selection and Data Extraction Process

All retrieved records were imported into a reference manager and de-duplicated. Two reviewers independently screened titles and abstracts against the eligibility criteria; any record judged potentially eligible by either reviewer proceeded to full-text review. The same two reviewers independently assessed full texts; disagreements at either stage were resolved by discussion and consensus.
Data were extracted using a standardized form by one reviewer and independently verified by a second. Extracted items included study identifiers (authors, year), study design, narrative context, haptic modality/placement and synchronization method, sample characteristics, outcome measures and statistics, and main findings relevant to the research questions. A consolidated summary of study characteristics and quantitative outcomes is provided in Table 1. The information flow through the review is shown in the PRISMA 2020 flow diagram (Figure 1), generated using the PRISMA 2020 Shiny App Version 2020 [23].

Effect Sizes (Calculation and Reporting)

Where sufficient statistics were reported, we computed or extracted standardized effect sizes (Hedges’ g for mean differences; partial eta-squared, ηp², for ANOVA; r for correlations) following established guidance [30]. When authors provided only p-values without descriptive statistics, we reported the direction of effect and noted “p NR” (not reported). We encourage future primary studies to report means/SDs or confidence intervals to facilitate meta-analysis [30].
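When primary studies report only summary statistics, the effect sizes named above can be recovered with standard conversion formulas. A minimal sketch of those conversions (illustrative only; not the review’s actual extraction script):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: small-sample-corrected standardized mean difference
    between two independent groups (means m, SDs s, sizes n)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * df - 1)    # Hedges' correction factor J
    return d * correction

def partial_eta_squared(F, df_effect, df_error):
    """Partial eta-squared recovered from a reported ANOVA F statistic."""
    return (F * df_effect) / (F * df_effect + df_error)

def r_from_t(t, df):
    """Effect size r recovered from a reported t statistic."""
    return math.sqrt(t**2 / (t**2 + df))
```

For example, two groups of 20 with means 10 vs. 8 and common SD 2 yield d = 1.0, corrected to g ≈ 0.98; such conversions are only possible when means/SDs (or F/t with degrees of freedom) are reported, which motivates the reporting recommendation above.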

2.5. Grey-Literature Scoping (Descriptive, Non-Synthesized)

  • Sources: arXiv (preprints); ACM CHI and SIGGRAPH workshop/demo/program pages.
  • Purpose: Map preliminary AI-based haptic-generation approaches relevant to narrative VR.
  • Screening & extraction: One reviewer screened records and extracted the title, venue/source, year, AI approach, haptic method, VR-narrative relevance, presence of user study (Y/N), and basic device/placement/synchronization details. A second reviewer verified entries.
  • Integration: Grey-literature items were not included in the PRISMA flow or the evidence synthesis and were not quality-appraised. A brief descriptive summary appears in Results (Section 3.2), and full entries are listed in Appendix B (Table A1). The grey-literature scoping was conducted on 7 September 2025 (Asia/Al Khobar).

2.6. Study-Level Quality Appraisal

We appraised each included study using the Mixed Methods Appraisal Tool [31], which supports qualitative, quantitative, and mixed-methods designs [31]. We scored the five MMAT criteria applicable to each design and reported item-level counts (Appendix C) rather than a single composite score, as recommended by the MMAT authors [31].

3. Results

3.1. Overview of Included Studies

We restricted inclusion to peer-reviewed human-participant studies with synchronized haptics in narrative VR (PICOS); the remaining records failed one or more criteria at title/abstract or full-text screening (reasons in Figure 1). We identified 493 records, removed 73 duplicates, screened 420 titles/abstracts, assessed 26 full-text articles, excluded 16 at full text for stated reasons, and included 10 studies in the review. Presence improved in 6/8 studies that assessed it, while immersion increased in 3/3 (Table 1); two studies reported no presence difference. Across the corpus, sample sizes ranged from 8 to 108. A critical and distinctive outcome of the screening process is that none of the identified studies explored AI-powered procedural haptic feedback. All ten included studies used haptic systems that were either explicitly described as manually authored, physiologically based (manually mapping biosignals to haptic output), or unspecified but consistent with manual, event-based scripting. This review therefore synthesizes the evidence from this non-AI haptics research base.
Table 1 summarizes the characteristics and quantitative outcomes of the ten included studies (effect direction and statistics, where reported, appear in the rightmost column), which span a broad range of experimental designs, haptic modalities, and narrative contexts. All included studies were published between 2017 and 2025 (final search: 31 July 2025). All employed experimental or mixed-methods designs; sample sizes ranged from 8 to 108 participants. Narrative contexts included 360° cinematic experiences [29], interactive games [17], social-VR scenarios [14], and immersive performances [18]. The distribution of haptic modalities across the 10 studies is summarized in Figure 2.
Standardized instruments: To enable comparability across studies, standardized questionnaires should be preferred: for presence/immersion, the PQ [10], IPQ [33], and ITC-SOPI [34]; for immersion, the IEQ [35]; for narrative absorption, the Transportation Scale [36], its short form TS-SF [37], and the Story World Absorption Scale (SWAS) [38]; and for valence/arousal, the Self-Assessment Manikin (SAM) [39].
A wide variety of haptic technologies was used, with vibrotactile and thermal feedback the most common, alongside more novel techniques such as electrical muscle stimulation (EMS) and direct human touch, a form of social touch increasingly studied in HCI [32]. Hardware ranged from custom-built research prototypes (e.g., wearable vests, EMS systems) to performers directly facilitating the interaction.

3.2. Grey-Literature Scoping: AI-Powered Procedural Haptics (Descriptive Results)

Our scan (final date: 7 September 2025) identified several preprint and workshop/demo efforts proposing AI-based haptic-generation pipelines for VR (e.g., rule-based or ML mappings from audiovisual/narrative events to haptic patterns and LLM-assisted authoring of haptic scripts). None reported peer-reviewed human-participant user studies in narrative VR with synchronized haptics, so they were not included in PRISMA counts or the evidence synthesis. Table 2 summarizes these items at a glance (full details remain in Appendix B, Table A1) to aid discovery and future controlled evaluation.

3.3. The Impact of Haptic Feedback on Immersion and Presence

This review primarily focused on affective and experiential outcomes in the included studies. Figure 3 illustrates that while researchers frequently studied presence and immersion outcomes, they largely overlooked cognitive outcomes like narrative comprehension. In this section, we synthesize findings for the most commonly studied outcomes: immersion and presence.
The clearest and most reliable result is that adding haptic feedback—across modalities—produces a higher sense of presence and immersion than audio–visual-only VR, typically measured via self-report questionnaires (e.g., the IPQ) [33]. This effect appears across varied narrative formats and hardware. For example, Sasikumar et al. [29] observed significantly greater spatial presence, realism, and overall experience (p < 0.01) when wearable (vest) or non-wearable (wind, floor vibration) haptics were added to a 360° cinematic battle scene. Krogmeier et al. [27] found higher presence (p = 0.004) and embodiment (p = 0.005) when participants experienced a vibrotactile “bump” from a virtual human versus a no-haptics control. Similarly, García-Valle et al. [25] concluded that a tactile-thermal vest supported users’ sense of presence and realism in a virtual train station. As shown in Figure 4, presence showed a significant positive effect in 4/8 studies (2/8 non-significant; 2/8 reported improvements without p-values); immersion showed significant improvement in 3/3 studies that measured it; emotion findings were mixed (1/5 significant positive; 1/5 significant discomfort; 1/5 non-significant; 2/5 no p-values); and narrative comprehension was not measured in this corpus (see Table 3).
Beyond a generic benefit of “any haptics,” the evidence suggests a hierarchy of effect tied to fidelity and narrative congruence: cues that are context-specific and physically congruent with on-screen action yield larger gains. Khamis et al. [17] directly compared kinesthetic EMS to standard vibrotactile feedback in VR game cutscenes and found EMS—moving the user’s limb in time with avatar/scene events—more realistic and higher in presence (p < 0.05) than controller vibration. In a haunted-house narrative, Clepper et al. [16] showed that complex, hand-crafted vibrotactile signals tied to specific narrative events (e.g., a heartbeat, thunder) were more immersive than a single, generic multiplexed signal. Table 3 provides a qualitative synthesis of user-experience outcomes (presence, immersion, affect, and narrative comprehension) across the ten included studies, while Table 2 presents a non-synthesized snapshot of the procedural-haptics grey literature that motivated this review.

3.4. Narrative Comprehension: The Overlooked Dimension

In stark contrast to the extensive study of presence and emotion, narrative comprehension is the most understudied outcome of haptics. As shown in Table 1 (Outcomes & Measures column), none of the ten studies used a validated quantitative narrative-comprehension instrument. Desnoyers-Stewart et al. [18] reported qualitatively that full performer touch appeared to improve “narrative engagement”, but the improvement was not measured with any validated instrument. Hecquard et al. [14] tracked “time spent looking at the presenter” as a behavioral proxy for engagement but did not assess whether participants understood or retained the content of the virtual presentation better.

3.5. Quality Appraisal (MMAT)

We appraised study quality using the Mixed Methods Appraisal Tool (MMAT), version 2018 (screening items S1–S2 and design-specific items) [31]. MMAT item-level tallies are provided in Appendix C. In brief: of the nine quantitative non-randomized experimental papers, most used validated presence/immersion questionnaires (e.g., IPQ, PQ, ITC-SOPI) and reported complete outcome data; however, samples were generally convenience-based (labs, festivals, or university cohorts), and several studies did not explicitly address potential confounders beyond counterbalancing (e.g., order effects). The one mixed-methods paper integrated interviews and questionnaires appropriately but only partially discussed divergences between the qualitative and quantitative strands [10,16,17,18,25,26,31,33,34].
Item-level counts (this review, n = 10):
  • Screening (applies to all designs): S1 clear RQs = 10/10 “Yes”; S2 data address RQs = 10/10 “Yes” [31].
  • Quantitative non-randomized (n = 9; MMAT 3.1–3.5):
    • 3.1 representative participants = 0 “Yes”, 9 “No”, 0 “Can’t tell” (convenience samples typical of HCI/VR labs or festivals [18,26]).
    • 3.2 appropriate/validated measurements = 8 “Yes”, 1 “Can’t tell”, 0 “No” (use of IPQ/PQ/ITC-SOPI prevalent [10,26,33,34]).
    • 3.3 complete outcome data = 9 “Yes”, 0 “Can’t tell”, 0 “No”. (Representative examples report complete questionnaires and ANOVA/t-tests [17,25]).
    • 3.4 confounders accounted for = 5 “Yes”, 3 “Can’t tell”, 1 “No” (counterbalancing/within-subjects common, but limited covariate control) [17,26].
    • 3.5 intervention fidelity (administered as intended) = 9 “Yes”, 0 “Can’t tell”, 0 “No” (clear manipulation descriptions; e.g., EMS timing in cutscenes; thermal/vibrotactile setpoints; handcrafted vibrotactile cues) [16,17,26].
  • Mixed-methods (n = 1; MMAT 5.1–5.5, [18]): 5.1 design relevance = 1 “Yes”; 5.2 integration relevance = 1 “Yes”; 5.3 interpretation of integration = 1 “Yes”; 5.4 treatment of divergences = 1 “Can’t tell”; 5.5 component quality = 1 “Yes”.
Common limitations and strengths: Limitations included small, convenience samples and limited confounder handling; strengths included within-subject controls and use of validated instruments (e.g., IPQ, PQ, ITC-SOPI) [10,18,26,33,34].

4. Discussion

  • What this review adds: (I) First PRISMA-style synthesis of user-evaluated haptics in narrative VR, spanning 10 studies and consolidating what reliably improves presence/immersion/affect. (II) An empirical baseline table (Table 1) that unifies design, hardware, synchronization, measures, and quantitative effects usable as a benchmark for future AI systems. (III) Gap formalization: 0/10 studies evaluate AI-procedural haptics with human participants in narrative VR, and 0/10 use validated narrative-comprehension instruments. (IV) An actionable roadmap with experimental design details (non-inferiority margins, authoring-time accounting, latency budgets) to move the field from theory to evidence.
  • Positioning vs. prior “gap” writings: Recent position/overview pieces (e.g., domain-general AI + haptics, medical/rehab applications, and non-archival preprints) acknowledge a gap and propose directions, but they do not: (I) run a PRISMA-style search focused on narrative VR with synchronized haptics, (II) synthesize user-study outcomes, or (III) provide a head-to-head experimental plan with authoring-time endpoints. Our contribution is to establish the evidence baseline for narrative VR, quantify the absence of AI-procedural user studies, and supply a three-phase, testable program.

4.1. Contextual Factors and Haptic Design

Haptic feedback is, by itself, neither inherently beneficial nor detrimental to immersion: the value of a particular signal is contingent on contextual factors, chief among them the degree of congruence between the signal and the narrative, and the characteristics of the user.
  • Narrative Congruence: Foremost among these moderators is the extent to which a haptic signal semantically meshes with the audio–visual events it accompanies. The strongest evidence for this narrative-congruence account comes from Clepper et al. [16]: handcrafted, pre-programmed haptic signals designed to align directly with particular narrative events (e.g., a croaking frog, a heartbeat) were deemed more immersive than a generic, incongruent haptic signal. The implication is straightforward: to be effective at all, haptics cannot be imposed blindly without semantic regard for what the story or environment demands.
  • User Experience and Expectations: The characteristics of the user, more specifically their prior experience or literacy with VR and haptic technologies, ultimately modulate their appraisal of the feedback. In perhaps the only experiment within the literature to select participants entirely based on experience with haptic technology, García-Valle et al. [25] found stark differences in feedback appraisal between seasoned “Haptic Experts” and casual “Non-Experts.” Where non-experts were generally satisfied with a given haptic signal, experts were substantially more discerning and prioritized attributes of realism and synchronization with audio–visual events. Therefore, we can extrapolate that as the general user base of VR becomes more haptically literate, there will be a greater demand for higher-fidelity and more sophisticated haptic feedback.
  • Hardware and Modality: Properties of the hardware and modality themselves also modulate the user experience. The range of modalities represented in this review—from wearable vests to EMS electrodes to non-wearable fans and vibration platforms—reflects the diversity of existing wearable haptic systems [7]. Based on our analysis, no single modality holds an advantage over the others; rather, different modalities succeed insofar as they match the narrative being communicated. For example, EMS proved highly effective at conveying kinesthetic force feedback in fast-paced, action-oriented cutscenes [17], whereas thermal and wind-based modalities were well suited to setting environmental mood [28]. Notably, Sasikumar et al. [29] found no significant difference in presence ratings between wearable and non-wearable feedback systems, suggesting that how feedback is embedded in the narrative context may ultimately matter more than its form factor.

4.2. Haptics as a Channel for Physiological and Emotional Contagion

Beyond a general enhancement of experience, the evidence suggests a specific and powerful mechanism through which haptics can modulate emotion: interoceptive mimicry. Hecquard et al. [14] provide the clearest example, employing ‘sympathetic haptic feedback’ that directly mapped a virtual presenter’s simulated heart rate and breathing patterns onto the participant’s body via a wristband and compression belt. This approach was found to significantly enhance empathy. Similarly, Ooms et al. [26] demonstrated that vibrotactile heartbeats could directly manipulate a participant’s perceived arousal and discomfort. These findings suggest that haptics can function as a direct channel for physiological and emotional contagion. By bypassing cognitive interpretation and directly stimulating the user’s body to mimic the interoceptive state of a virtual character, these systems can foster a more immediate and profound emotional connection, consistent with theories of embodied cognition that link physiological states to emotional experience.

4.3. The State of the Craft: Empirically Validated Principles of Manual Haptic Authoring

This systematic review summarizes findings from 10 experimental and mixed-method studies on the role of haptic feedback in VR storytelling, focusing on presence, immersion, and overall experience quality. The results reveal several major findings that characterize the research landscape in this field.
First, there is consistent evidence that integrating manually authored haptic feedback improves user experience, largely by increasing the sense of presence and immersion. This result appears stable across a wide range of haptic modalities, hardware setups, and narrative themes, supporting a basic assumption of immersive design [44].
Second, haptic feedback’s quality and design, not just its presence, affect its efficacy. In particular, we observe a clear trend: higher-fidelity haptics with greater narrative congruence and more direct physical correspondence to the virtual action produce stronger positive effects on the user experience. For instance, direct kinesthetic feedback in the form of EMS, which directly actuates the user’s limbs, led to a much higher sense of presence than simple vibrotactile feedback [17]. This is consistent with the theory of embodiment, which holds that a congruent mapping between the user’s body and their virtual avatar is essential to foster a strong sense of presence [45]. Likewise, the observation that specific, semantically coherent vibrotactile cues are more immersive than generic ones [16] suggests that, to be truly effective, haptics must be well integrated into the narrative experience rather than added as a general-purpose effect.
Third, haptics may be a powerful channel for modulating emotional engagement. Systems that match haptic output to the physiological or affective state of a character can increase empathy [14] or arousal [26]. These studies demonstrate the capacity of affective haptics to foster strong connections between the user and the story [6].
Finally, the complete absence of validated narrative comprehension measures across all 10 included studies reveals a fundamental blind spot in the field’s research agenda. The prevailing focus has been on the phenomenological question of ‘Does it feel more real?’ at the expense of the semantic question, ‘Does it enhance the story’s meaning?’ This represents a failure to engage with haptics as a true narrative device, relegating it instead to the status of a sophisticated special effect. The challenge for future systems, particularly those powered by AI, is therefore not merely to generate physically congruent sensations, but to generate haptics that clarify plot, reveal character motivations, or underscore thematic elements—functions that require a deep, semantic understanding of the narrative itself. The proposed roadmap is designed specifically to shift the field’s focus towards this more meaningful integration of haptics into the storytelling process.
These findings collectively provide strong empirical support for the foundational theoretical distinction between immersion as an objective property of the technology and presence as the subjective psychological response. Taken together, efficacy depends not just on including haptics but on the quality and narrative specificity of the sensation. The addition of a haptic modality—whether vibrotactile, thermal, or kinesthetic—objectively increases the immersive capacity of the system by engaging an additional sensory channel and increasing the extent of sensory information presented to the user. The consistent finding of increased self-reported presence across these 10 studies demonstrates that this technological enhancement reliably translates to the desired psychological outcome: the subjective feeling of ‘being there’ in the virtual world.

4.4. The Unvalidated Science: The Empirical Void in AI-Powered Procedural Haptics

The primary discovery of this review is an absence: despite a search strategy designed to systematically identify studies on AI-powered procedural haptics, no experimental or mixed-method user study fulfilling the inclusion criteria was identified. The entire corpus of empirical evidence on haptics in VR storytelling has been built on non-AI, manually dependent authoring methods.
This gap underscores a significant methodological turning point for the field. The 10 studies included in this review showcase a mature understanding of how to design efficient manual haptics and what their benefits are. They provide a compendium of successful proof-of-concepts that establish a clear empirical benchmark. At the same time, the innate weaknesses of manual methods—their poor scalability and lack of adaptivity, as noted in the introduction—provide a clear and immediate problem that AI and procedural generation are, in theory, well-equipped to solve. The complete absence of empirical validation of these automated approaches against the established manual benchmarks embodies a fundamental disconnect between the identified needs of the field and its current research priorities. Manual haptics offer high artistic control but are time-intensive, difficult to scale to branching narratives, and largely non-adaptive. AI/procedural methods promise substantial reductions in authoring time, scalability to large and dynamic story worlds, and real-time adaptivity to narrative state and user signals. However, while the manual paradigm has a demonstrated empirical base (the 10 studies in this review), AI-procedural approaches currently lack user-evaluated evidence in narrative VR. This validation gap motivates the three-phase roadmap that follows.
Across the corpus, manual haptics robustly improve presence/immersion/affect, whereas no eligible user studies evaluate AI-procedural haptics in narrative VR. Our roadmap therefore sequences a pragmatic path from data/tools/metrics (Phase I) to head-to-head efficacy (Phase II) and closed-loop adaptivity (Phase III).

4.5. A Research Roadmap for Intelligent Haptic Storytelling

Roadmap anchored to evidence gaps: Our synthesis reveals four recurring deficiencies: (I) scarce, shareable event ↔ haptic datasets with device/placement/latency metadata; (II) limited use of validated narrative-comprehension/engagement measures; (III) no head-to-head tests of AI/procedural vs. high-quality manual authoring (and almost no authoring-time accounting); and (IV) little adaptive/closed-loop control with user-state and safety reporting. The three phases outlined below are intended to close these gaps and are summarized in Figure 5.

4.5.1. Phase I—Data, Tools, and Metrics

  • Open toolkits with data logging: An accessible, open-source authoring and runtime library, provisionally named ‘HapTale,’ will be developed for the Unity engine (a widely used real-time 3D development platform for interactive media and XR). HapTale will provide a unified API abstracting low-level control for common actuators (ERM and LRA vibrotactors, Peltier-based thermal modules, and non-invasive EMS units). A critical feature will be its ‘logging-first’ architecture, which automatically timestamps and logs every narrative event (e.g., dialogue_start, event_explosion) and every delivered haptic primitive (e.g., vibration_start, actuator_id = torso_front_left, intensity = 0.8, frequency = 150 Hz, duration = 250 ms) to a standardized JSON format. This will facilitate the creation of shareable datasets.
  • Standardized datasets & reporting: The project will release an initial paired event-to-haptic corpus based on several short narrative scenes. Each entry will include the event timestamp, a semantic event tag, the full haptic primitive description, and detailed metadata including HMD model, actuator model, body placement, and measured end-to-end latency. A minimal methods checklist will be published for reporting haptic studies, ensuring others can reproduce delivery timing and hardware context.
  • Validated outcome battery: To address the critical gap in cognitive assessment, all subsequent studies in this roadmap must include the following core battery of validated instruments: Presence: the Igroup Presence Questionnaire (IPQ); Narrative Engagement: the Narrative Engagement Scale; Story World Absorption: the Story World Absorption Scale (SWAS); and Narrative Comprehension: a 10-item, multiple-choice quiz keyed to specific plot points, character motivations, and thematic details of the stimulus narrative, administered immediately post-experience.
  • Hybrid authoring pilots with cost accounting: Compare AI-seeded + human-edited workflows to fully manual pipelines; log authoring time (person-minutes) and perceived workload (e.g., NASA-TLX) to quantify efficiency.
  • Latency measurement protocol: For each actuator class, report end-to-end event→tactile onset (ms) using a repeatable bench test (audio marker on the narrative timeline; microcontroller logs actuator trigger; contact mic/accelerometer on the device detects onset). Publish raw logs + script.
  • Minimal methods checklist: Include HMD model, actuator model/placement map, calibration routine, per-scene latency budget, and exact synchronization mechanism (timestamp, trigger bus, or physics event).
  • Dataset schema stub (JSON): {event_time_ms, event_tag, haptic:{actuator_id, placement, type, intensity, freq_hz, duration_ms, waveform}, device:{HMD, actuator_model}, latency_ms}.
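To make the schema stub above concrete, the following minimal sketch builds one log record in that shape and serializes it for a line-oriented dataset. The helper name `make_log_record` and all concrete values (device names, timings) are hypothetical illustrations, not part of the proposed toolkit.

```python
import json

def make_log_record(event_time_ms, event_tag, actuator_id, placement,
                    haptic_type, intensity, freq_hz, duration_ms,
                    waveform, hmd, actuator_model, latency_ms):
    """Build one event->haptic log entry as a plain dict matching the schema stub."""
    return {
        "event_time_ms": event_time_ms,
        "event_tag": event_tag,
        "haptic": {
            "actuator_id": actuator_id,
            "placement": placement,
            "type": haptic_type,
            "intensity": intensity,   # normalized 0.0-1.0
            "freq_hz": freq_hz,
            "duration_ms": duration_ms,
            "waveform": waveform,
        },
        "device": {"HMD": hmd, "actuator_model": actuator_model},
        "latency_ms": latency_ms,
    }

# Hypothetical example entry (device names and timings are made up).
record = make_log_record(
    event_time_ms=12500, event_tag="event_explosion",
    actuator_id="torso_front_left", placement="torso",
    haptic_type="vibration", intensity=0.8, freq_hz=150,
    duration_ms=250, waveform="sine", hmd="GenericHMD",
    actuator_model="GenericLRA", latency_ms=42,
)
line = json.dumps(record)  # one JSON object per line in a shareable log
```

One-object-per-line JSON keeps the log appendable in real time and trivially parseable for later dataset assembly.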

4.5.2. Phase II—Head-to-Head Efficacy Studies

This phase moves from infrastructure to direct empirical validation, comparing the ‘unvalidated science’ of AI generation against the ‘mature craft’ of manual authoring:
  • Experimental Design: A preregistered non-inferiority trial with a within-subjects crossover design will be conducted. Participants (target N = 48; see Power below) will experience two versions of the same 5-min narrative VR scene. Condition A (Manual Gold Standard): Haptics will be manually authored by an expert designer with significant experience in haptic feedback design (target authoring time: 180 person-minutes). Condition B (AI Procedural): Haptics will be generated by the target AI system from the same audio–visual and narrative script inputs. The order of conditions will be counterbalanced.
  • Power: A within-subjects non-inferiority margin of d = −0.30 on IPQ Spatial Presence with α = 0.05 and 1 − β = 0.80 (r ≈ 0.5 within-subject correlation) typically requires N ≈ 40–48. We will target N = 48 to allow for attrition/assumption violations.
  • Endpoints and Statistical Analysis:
    • Primary Efficacy Endpoint: The mean score on the IPQ Spatial Presence subscale. The non-inferiority margin will be set at a standardized mean difference of d = −0.3. The AI system will be considered non-inferior if the lower bound of the 95% confidence interval for the mean difference (AI minus Manual) is above −0.3 standard deviations.
    • Primary Efficiency Endpoint: Total authoring time in person-minutes. A superiority hypothesis will be tested, requiring a statistically significant reduction of at least 50% in authoring time for the AI system compared to the manual condition.
    • Secondary Endpoints: Scores on the Narrative Engagement Scale, Story World Absorption Scale, and the narrative comprehension quiz; perceived realism and congruence ratings; and author workload measured via the NASA-TLX.
  • Analysis: Linear mixed-effects models with participant random intercepts; fixed effects for Condition and Order; Satterthwaite df; report Hedges g and 95% CIs. Primary NI test on IPQ-SP; Holm correction for secondary outcomes. Report both ITT and per-protocol sets. Share code and preregistration (OSF).
  • Fidelity and Sharing: Manipulation checks will be included to confirm that participants perceived the haptic feedback. All stimuli, haptic patterns, source code, and anonymized data will be shared publicly alongside the publication.
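The non-inferiority decision rule described above can be sketched numerically. The following is a simplified illustration on synthetic paired scores using a normal approximation; the function name and data are hypothetical, and the actual analysis would use the mixed-effects model and exact t-based intervals described above.

```python
from statistics import NormalDist, mean, stdev

def noninferiority_check(ai_scores, manual_scores, margin_d=-0.30, alpha=0.05):
    """Paired non-inferiority check on (AI - Manual) score differences.

    Returns True if the lower bound of the two-sided 95% CI for the
    standardized paired difference exceeds the margin. Normal approximation
    only; a full analysis would use the mixed-effects model described above.
    """
    diffs = [a - m for a, m in zip(ai_scores, manual_scores)]
    n = len(diffs)
    dz = mean(diffs) / stdev(diffs)          # standardized paired difference
    se = 1 / n ** 0.5                        # approx. SE of dz for small dz
    z = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    lower = dz - z * se
    return lower > margin_d

# Hypothetical IPQ Spatial Presence scores for 48 participants:
# AI and Manual differ only by symmetric noise, so dz is near zero.
manual = [4.0 + 0.01 * i for i in range(48)]
ai = [m + (0.1 if i % 2 else -0.1) for i, m in enumerate(manual)]
noninferiority_check(ai, manual)  # True: lower bound ~ -0.28 > -0.30
```

The sketch also shows why the margin and sample size interact: at N = 48, the CI half-width is already close to 0.28 SD, leaving little room for any true deficit.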

4.5.3. Phase III—Closed-Loop Adaptivity and Contextual Moderators

The final phase explores the unique affordances of AI-driven systems: real-time adaptation and the systematic exploration of complex interactions.
  • Adaptive Control with User-State Feedback: A closed-loop controller will be implemented where the generative model’s output parameters (e.g., intensity, complexity, frequency of haptic events) are modulated in real-time by user biosignals. An Empatica E4 wristband will provide electrodermal activity (EDA) and heart rate (HR) data. For example, in a suspenseful scene, rising EDA could trigger the AI to increase the intensity or frequency of subtle, unsettling haptic cues. The system will be engineered to target an event-to-haptic-onset latency of ≤100 ms and a biosignal-to-adaptation latency of ≤500 ms.
  • Mapping the ‘Haptic Uncanny Valley’: A psychophysical study will be conducted to investigate the perceptual boundary where increased haptic complexity becomes distracting or unnatural. An AI model will generate a haptic effect (e.g., the sensation of footsteps on gravel) and systematically vary its complexity (e.g., number of unique vibration primitives, temporal randomness) along a continuum. Participants will rate each stimulus for perceived realism and distraction. The goal is to model the psychometric function and identify the ‘sweet spot’ where realism is maximized before distraction begins to increase, analogous to the uncanny valley in visual character rendering.
  • Ethical Reporting and Safety: All studies in this phase will include rigorous reporting on calibration procedures, measured latency, sickness screening protocols, and explicit stop rules. Participants will have full control over haptic intensity limits and a clear opt-out mechanism, and all adverse events will be systematically recorded and reported. Per-actuator maximum intensity and duty-cycle ceilings will be enforced; EMS will use medically safe pulse widths and frequencies; adaptive controllers will not be permitted to increase intensity faster than X units/s; and an on-controller “panic stop” will be included.
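As an illustration of the safety constraints above (intensity ceiling, rise-rate limit, panic stop), the following is a minimal sketch of a rate-limited closed-loop controller. The class name, parameter values, and the linear arousal-to-intensity mapping are illustrative assumptions, not a specification of the planned system.

```python
class AdaptiveHapticController:
    """Sketch of a safety-bounded closed-loop haptic controller.

    The rate limit mirrors the 'cannot increase intensity faster than
    X units/s' rule, and panic_stop() models the on-controller kill switch.
    All parameter values are illustrative.
    """

    def __init__(self, max_intensity=1.0, max_rise_per_s=0.2):
        self.max_intensity = max_intensity    # per-actuator ceiling
        self.max_rise_per_s = max_rise_per_s  # the 'X units/s' rate limit
        self.intensity = 0.0
        self.stopped = False

    def update(self, eda_arousal, dt_s):
        """Map a normalized arousal estimate (0-1) to output intensity,
        clamped by the ceiling and the maximum rise rate per tick."""
        if self.stopped:
            return 0.0
        target = max(0.0, min(eda_arousal, 1.0)) * self.max_intensity
        max_step = self.max_rise_per_s * dt_s
        # Rises are rate-limited; decreases are applied immediately.
        if target > self.intensity:
            self.intensity = min(target, self.intensity + max_step)
        else:
            self.intensity = target
        return self.intensity

    def panic_stop(self):
        """Immediately zero the output and latch the stopped state."""
        self.stopped = True
        self.intensity = 0.0

ctrl = AdaptiveHapticController()
ctrl.update(eda_arousal=0.9, dt_s=0.5)  # rises by at most 0.1 this tick
ctrl.panic_stop()
ctrl.update(eda_arousal=0.9, dt_s=0.5)  # stays at 0.0 after panic stop
```

Rate-limiting only the rising edge lets the controller back off instantly when arousal drops, which is the conservative choice for comfort and safety.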

4.6. Broader Implications and Limitations

Successfully carrying out this research agenda has important implications beyond academic investigation. Validated AI-powered haptic authoring tools could level the haptic design playing field, allowing independent creators, small studios, and educators to add rich haptic feedback to their VR experiences without the prohibitive cost of manual authoring. This could in turn accelerate the creation of more compelling and emotionally engaging narratives, training simulations, and educational content. For commercial-scale endeavors such as MMORPGs or dynamic, branching cinematic VR, procedural generation is not simply an efficiency tool but likely a necessary enabling technology.
This review synthesized only peer-reviewed user studies in English to ensure archival quality and comparability; to surface emerging work, we additionally ran a separate grey-literature scan (arXiv; CHI/SIGGRAPH workshops/demos) on 7 September 2025, which we summarize descriptively but do not include in the evidence synthesis. To mitigate database and language bias we searched six major databases, performed backward/forward citation chasing, and used a multi-pass de-duplication procedure; full PRISMA-S search strings and filters are provided in Appendix A. A quantitative meta-analysis was not feasible due to heterogeneity of designs and outcomes and incomplete reporting of summary statistics in several primary studies; instead, we report per-study quantitative results where available and a direction-of-effect tally (e.g., presence improved in 6/8 studies; immersion in 3/3). With only ten included studies, formal small-study/publication-bias tests (e.g., funnel plots, Egger’s test) were underpowered; nevertheless, publication bias remains possible. This review was not preregistered; registries commonly used in health sciences (e.g., PROSPERO) do not cover HCI/VR reviews. To preserve transparency and replicability, we finalized the protocol a priori, adhered to it, and provide complete eligibility criteria, screening/extraction procedures (with independent verification), and all search strategies.
Ethical and inclusive design implications: Adaptive haptics should account for large individual differences in tactile sensitivity and possible over-stimulation, particularly for users with sensory disabilities or heightened sensitivity. Standards for vibrotactile perception thresholds (e.g., ISO 13091-1 [46]) and calibration routines can reduce risk by tailoring intensity envelopes to the individual. Adopting ability-based design principles ensures that capabilities (not limitations) drive the interaction design and that users can opt-out or attenuate haptics dynamically [46,47]. Researchers should report safety limits, screening criteria, and accessibility accommodations alongside outcomes to support reproducibility and inclusion.
Validated narrative engagement/comprehension instruments were rarely used across the corpus; future studies should adopt established scales such as the Narrative Engagement Scale [48], the Transportation Scale (or TS-SF short form), and the Story World Absorption Scale (SWAS), alongside brief, content-keyed comprehension quizzes.

5. Conclusions

Our systematic review makes two main contributions to the field of immersive VR storytelling. First, it integrates the existing empirical evidence from 10 user studies, finding consistent and robust support that manually authored haptic feedback significantly improves presence, immersion, and emotional engagement. The evidence also indicates that the quality of this improvement correlates with the fidelity, congruence, and narrative specificity of the haptic design, setting an unambiguous “gold standard” for effective haptic storytelling.
Second, and most crucially, this review formally surfaces a major and unaddressed gap in the literature: there is not a single experimental user study evaluating the efficacy of AI-powered procedural haptic generation. This underscores a crucial discrepancy between the well-established authoring bottleneck that restricts the scalability of manual methods and the absence of empirical verification for suggested automated solutions to this bottleneck. To fill this gap, a structured, three-phase research roadmap is proposed, commencing with the development of foundational tools and metrics, progressing to direct comparative efficacy studies against manual benchmarks, and culminating in nuanced investigations of contextual and perceptual effects. In pursuing this roadmap, the research community can go beyond theoretical promise to begin to empirically validate intelligent systems that can make rich, adaptive, and scalable haptic narratives a reality.
We propose releasing an open, logging-first haptics toolkit with a minimal event ↔ haptic dataset schema, then running preregistered, non-inferiority studies that compare AI/procedural vs. expert manual authoring on (a) presence/immersion/affect and narrative comprehension and (b) authoring time. The third phase should add closed-loop controllers driven by user state (HR/EDA/gaze), with safety, latency, and calibration reporting. This sequence directly operationalizes our roadmap and enables cumulative, comparable progress. To enable future meta-analysis, we encourage authors to report means/SDs (or medians/IQRs), exact p-values, and validated narrative engagement/comprehension scores alongside presence/immersion measures.

Author Contributions

Conceptualization, Z.J.S. and V.P.; methodology, Z.J.S. and V.P.; literature search and screening, Z.J.S. and V.P.; data extraction, Z.J.S.; verification, V.P.; formal analysis/synthesis, Z.J.S.; visualization, Z.J.S.; writing—original draft preparation, Z.J.S.; writing—review and editing, Z.J.S. and V.P.; supervision, V.P.; project administration, Z.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. PRISMA-S Search Strategies

Information sources: IEEE Xplore; ACM Digital Library; Scopus; Web of Science Core Collection; PubMed; PsycINFO.
Date last searched: 31 July 2025 (Asia/Al Khobar).
Grey-literature scoping final date: 7 September 2025 (Asia/Al Khobar).
Concept blocks (used consistently across sources):
  • VR/storytelling → (“virtual reality” OR vr OR immersive) AND (storytelling OR narrative OR “interactive drama”)
  • Haptics → (haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind OR “affective haptics” OR “haptic storytelling”)
  • AI/procedural → (procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic)
  • Outcomes → (immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”)
Limits (applied where supported): Language = English; Document type = peer-reviewed journal articles and conference proceedings; Humans (biomedical databases); No start date (from inception) to 31 July 2025.
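For illustration, the concept blocks above can be recomposed programmatically into a generic boolean string before database-specific field tags and filters are applied. The helper names below are hypothetical and the output is the tag-free core query only.

```python
# Concept blocks transcribed from the list above; quoting preserved.
BLOCKS = {
    "vr": ['"virtual reality"', "vr", "immersive"],
    "storytelling": ["storytelling", "narrative", '"interactive drama"'],
    "haptics": ["haptic*", "vibrotactile", "thermal", '"force feedback"',
                "kinesthetic", "tactile", '"electrical muscle stimulation"',
                "EMS", "wind"],
    "ai": ["procedural", "generative", '"procedurally generated"',
           '"artificial intelligence"', "ai", '"machine learning"',
           "algorithmic"],
    "outcomes": ["immersion", "presence", "engagement", "embodiment",
                 "emotion*", '"user experience"', "ux", "comprehension",
                 "transportation", "absorption", '"story world absorption"'],
}

def or_group(terms):
    """OR-join one concept block inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

def build_query(blocks):
    """AND-join all concept blocks, mirroring the strategies in this appendix."""
    return " AND ".join(or_group(t) for t in blocks.values())

query = build_query(BLOCKS)
query.startswith('("virtual reality" OR vr OR immersive)')  # True
```

Generating the core string once and then wrapping it per database (TITLE-ABS-KEY, TS, [tiab], etc.) keeps the concept blocks identical across sources, as required by PRISMA-S.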
De-duplication: Zotero 6 (multi-pass: exact DOI; normalized title + first-author + year; fuzzy title with manual inspection; version policy for preprint vs. archival).
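A minimal sketch of this multi-pass logic (exact DOI, then normalized title + first author + year, then fuzzy title) might look as follows. The record format and the 0.9 similarity threshold are illustrative, and in practice fuzzy matches were inspected manually rather than dropped automatically.

```python
import difflib
import re

def norm(s):
    """Lowercase and strip non-alphanumerics for normalized comparison."""
    return re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()

def dedupe(records, fuzzy_threshold=0.9):
    """Multi-pass de-duplication over a list of record dicts."""
    kept = []
    seen_dois, seen_keys = set(), set()
    for r in records:
        doi = (r.get("doi") or "").lower()
        if doi and doi in seen_dois:
            continue  # pass 1: exact DOI match
        key = (norm(r["title"]), norm(r["first_author"]), r["year"])
        if key in seen_keys:
            continue  # pass 2: normalized title + first author + year
        if any(difflib.SequenceMatcher(None, norm(r["title"]),
               norm(k["title"])).ratio() >= fuzzy_threshold for k in kept):
            continue  # pass 3: fuzzy title (flag for manual inspection)
        if doi:
            seen_dois.add(doi)
        seen_keys.add(key)
        kept.append(r)
    return kept

# Hypothetical records: a DOI-case duplicate and a DOI-less duplicate.
records = [
    {"doi": "10.1/abc", "title": "Haptic Storytelling in VR", "first_author": "Lee", "year": 2024},
    {"doi": "10.1/ABC", "title": "Haptic storytelling in VR.", "first_author": "Lee", "year": 2024},
    {"doi": "", "title": "Haptic Storytelling in VR", "first_author": "Lee", "year": 2024},
]
len(dedupe(records))  # -> 1
```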
Note: Grey-literature scoping (arXiv; CHI/SIGGRAPH workshops/demos) used the same concept blocks (final date: 7 September 2025) but is reported separately (Appendix B, Table A1) and not included in PRISMA counts.

Appendix A.2. Scopus (Elsevier)—TITLE-ABS-KEY

(TITLE-ABS-KEY(("virtual reality" OR vr OR immersive) AND (storytelling OR narrative OR "interactive drama")))
AND
(TITLE-ABS-KEY(haptic* OR vibrotactile OR thermal OR "force feedback" OR kinesthetic OR tactile OR "electrical muscle stimulation" OR EMS OR wind))
AND
(TITLE-ABS-KEY(procedural OR generative OR "procedurally generated" OR "artificial intelligence" OR ai OR "machine learning" OR algorithmic)) AND
(TITLE-ABS-KEY(immersion OR presence OR engagement OR embodiment OR emotion* OR "user experience" OR ux OR comprehension OR transportation OR absorption OR "story world absorption"))
Filters: Language = English; Document type = Article, Conference Paper; Date ≤ 31 July 2025.

Appendix A.3. Web of Science Core Collection

TS = ((“virtual reality” OR vr OR immersive) AND (storytelling OR narrative OR “interactive drama”))
AND TS = (haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind)
AND TS = (procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic)
AND TS = (immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”)
Filters: Document Types = Article, Proceedings Paper; Languages = English; Timespan ≤ 31 July 2025.

Appendix A.4. PubMed (NIH)—[Tiab]

(((“virtual reality”[tiab] OR vr[tiab] OR immersive[tiab]) AND
(storytelling[tiab] OR narrative[tiab] OR “interactive drama”[tiab])) AND
(haptic*[tiab] OR vibrotactile[tiab] OR thermal[tiab] OR “force feedback”[tiab] OR kinesthetic[tiab] OR tactile[tiab] OR “electrical muscle stimulation”[tiab] OR EMS[tiab] OR wind[tiab]) AND
(procedural[tiab] OR generative[tiab] OR “procedurally generated”[tiab] OR “artificial intelligence”[tiab] OR ai[tiab] OR “machine learning”[tiab] OR algorithmic[tiab]) AND
(immersion[tiab] OR presence[tiab] OR engagement[tiab] OR embodiment[tiab] OR emotion*[tiab] OR “user experience”[tiab] OR ux[tiab] OR comprehension[tiab] OR transportation[tiab] OR absorption[tiab] OR “story world absorption”[tiab]))
Filters: English; Humans; Publication dates ≤ 31 July 2025.

Appendix A.5. PsycINFO (EBSCOhost Syntax; TI, AB)

TI,AB((“virtual reality” OR vr OR immersive) AND (storytelling OR narrative OR “interactive drama”))
AND TI,AB(haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind)
AND TI,AB(procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic)
AND TI,AB(immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”)
Filters: English; Peer-reviewed; Publication type = Journal Article, Conference Proceeding; Date ≤ 31 July 2025.

Appendix A.6. IEEE Xplore (Advanced Search—All Metadata)

((“virtual reality” OR vr OR immersive)
AND (storytelling OR narrative OR “interactive drama”)
AND (haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind)
AND (procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic)
AND (immersion OR presence OR engagement OR embodiment OR emotion OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”))
Filters: Content Type = Journals & Conferences; Language = English; Publication Year ≤ 2025.

Appendix A.7. ACM Digital Library (Advanced Search—Title/Abstract/Author Keywords)

((Title:(“virtual reality” OR vr OR immersive) OR Abstract:(“virtual reality” OR vr OR immersive) OR Keywords:(“virtual reality” OR vr OR immersive))
AND (Title:(storytelling OR narrative OR “interactive drama”) OR Abstract:(storytelling OR narrative OR “interactive drama”) OR Keywords:(storytelling OR narrative OR “interactive drama”))
AND (Title:(haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind) OR Abstract:(haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind) OR Keywords:(haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind))
AND (Title:(procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic) OR Abstract:(procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic) OR Keywords:(procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic))
AND (Title:(immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”) OR Abstract:(immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”) OR Keywords:(immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”)))

Filters: Publication Type = Proceedings, Journals; Language = English; Publication Date ≤ 31 July 2025.

Appendix B

Grey-Literature Scoping (AI-Powered Procedural Haptics; Final Search: 7 September 2025)

Table A1. Descriptive list of preprints and workshop/demo papers related to AI-based haptic generation relevant to narrative VR. Items were not synthesized or appraised and do not affect PRISMA counts.
Paper | Year | Venue/Source (arXiv, CHI Workshop, SIGGRAPH Demo, etc.) | Type (Preprint, Workshop Paper, Demo, Poster) | AI Approach (Rule-Based, ML, LLM, GAN, Other) | Haptic Method (Vibrotactile, Thermal, EMS, Force-Feedback, Other) | VR Narrative Context (Yes/No; Brief) | User Study with Human Participants? (Y/N) | Why Excluded from Synthesis (No User Study, Not Narrative VR, Non-Synchronous, Other) | Notes
Sung et al. [40] | 2025 | Project page/preprint | Preprint | Generative model (text → audio → vibration) | Vibrotactile | No; general haptic generation | Y (A/B, N = 32) | Not narrative VR user study | Generates vibrotactile from text prompts; design efficiency focus
Ren & Belpaeme [41] | 2025 | arXiv | Preprint | LLM | Vibrotactile | No; affective patterns, not narrative | Y (N = 32) | Not narrative VR | Emotion/gesture recognition with LLM-generated tactile patterns
Khan et al. [42] | 2025 | arXiv | Preprint | Deep learning (CNN/VLM) | Vibrotactile; Thermal | No; material/temperature inference | N | No user study; not narrative VR | Infers material & temperature to drive haptics
Dong [43] | 2025 | Open Research Repository (OCADU) [43] | Project/thesis | LLM + speech-driven generation | Tactile (unspecified) | Yes; personal storytelling MR | N (RTD prototypes) | No user study | Research Through Design; early prototypes
Jingu et al. [49] | 2025 | arXiv | Preprint | LLM + physical modeling | Vibrotactile | Possibly; full VR scenes (not user-evaluated) | N | No user study | Pipeline-level proposal; technical evaluation only
Kishor et al. [50] | 2025 | ResearchGate | Preprint/white paper | Reinforcement learning (DRL), others | Force feedback | No; rehab domain | N (technical/clinical claims) | Not narrative VR; no user study | Domain transfer from rehab; non-archival
Gehrke et al. [51] | 2025 | arXiv | Preprint | Reinforcement learning (RL) | Unspecified (adaptive XR haptics) | No; adaptive XR generic | Y (small N) | Not narrative VR | Compares RL from ratings vs. EEG; generic XR
Regimbal & Cooperstock [52] | 2024 | EuroHaptics 2024 (workshop/demo) [52] | Workshop/demo paper | Reinforcement learning (RL) | Force feedback | No; co-creative design tool | Y (qualitative, N = 5) | Not narrative VR; qualitative only | Co-creation with RL agent; exploratory
Heravi et al. [53] | 2024 | arXiv | Preprint | Deep learning (action-conditional) | Vibrotactile | No; texture rendering | Y (perceptual) | Not narrative VR | Perceptual comparison vs. SOTA; non-narrative
Hernández et al. [54] | 2025 | ResearchGate | Preprint | Graph neural network (GNN) | Force feedback | No; physically consistent rendering | N | No user study; not narrative VR | Technical validation; non-archival

Appendix C

Appendix C.1. MMAT (2018) [31] Item-Level Tallies by Study Design

Tool: MMAT v2018 [31]. “Yes/CT/No” = number of included studies judged Yes/Can’t tell/No.

Appendix C.2. Screening (Applies to All Designs; n = 10)

S1. Clear research questions—10/0/0
S2. Data allow answering RQs—10/0/0

Appendix C.3. Quantitative Non-Randomized (n = 9)

3.1 Representative of target population—0/0/9
3.2 Appropriate/validated measurements—8/1/0
3.3 Complete outcome data—9/0/0
3.4 Confounders accounted (design/analysis)—5/3/1
3.5 Intervention administered as intended—9/0/0

Appendix C.4. Mixed-Methods (n = 1)

5.1 MM design appropriate—1/0/0
5.2 Integration of strands relevant—1/0/0
5.3 Interpretation of integration adequate—1/0/0
5.4 Divergences/congruences addressed—0/1/0
5.5 Component quality adequate—1/0/0
Abbrev.: CT = Can’t tell [31].

References

  1. Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93. [Google Scholar] [CrossRef]
  2. Israr, A.; Zhao, S.; Schwalje, K.; Klatzky, R.; Lehman, J. Feel Effects: Enriching Storytelling with Haptic Feedback. ACM Trans. Appl. Percept. 2014, 11, 17. [Google Scholar] [CrossRef]
  3. Bolanowski, S.J., Jr.; Gescheider, G.A.; Verrillo, R.T.; Checkosky, C.M. Four Channels Mediate the Mechanical Aspects of Touch. J. Acoust. Soc. Am. 1988, 84, 1680–1694. [Google Scholar] [CrossRef]
  4. Burdea, G.C. Keynote Address: Haptic Feedback for Virtual Reality. In Proceedings of the International Workshop on Virtual Prototyping, Laval, France, 17–29 May 1999; pp. 87–96. Available online: https://www.scirp.org/reference/referencespapers?referenceid=3068290 (accessed on 31 July 2025).
  5. Gibson, J.J. The Senses Considered as Perceptual Systems; Reprinted; Greenwood Press: Westport, CT, USA, 1983; ISBN 978-0-313-23961-8. [Google Scholar]
  6. Eid, M.A.; Al Osman, H. Affective Haptics: Current Research and Future Directions. IEEE Access 2016, 4, 26–40. [Google Scholar] [CrossRef]
  7. Pacchierotti, C.; Sinclair, S.; Solazzi, M.; Frisoli, A.; Hayward, V.; Prattichizzo, D. Wearable Haptic Systems for the Fingertip and the Hand: Taxonomy, Review, and Perspectives. IEEE Trans. Haptics 2017, 10, 580–600. [Google Scholar] [CrossRef]
  8. Slater, M.; Wilbur, S. A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence Teleoper. Virtual Environ. 1997, 6, 603–616. [Google Scholar] [CrossRef]
  9. Jerald, J. The VR Book: Human-Centered Design for Virtual Reality; ACM Books; Association for Computing Machinery and Morgan & Claypool: New York, NY, USA; San Rafael, CA, USA, 2016; ISBN 978-1-970001-12-9. [Google Scholar]
  10. Witmer, B.G.; Singer, M.J. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence Teleoper. Virtual Environ. 1998, 7, 225–240. [Google Scholar] [CrossRef]
  11. Lombard, M.; Ditton, T. At the Heart of It All: The Concept of Presence. J. Comput. Mediat. Commun. 1997, 3, JCMC321. [Google Scholar] [CrossRef]
  12. Slater, M. Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557. [Google Scholar] [CrossRef]
  13. Zwaan, R.A.; Langston, M.C.; Graesser, A.C. The Construction of Situation Models in Narrative Comprehension: An Event-Indexing Model. Psychol. Sci. 1995, 6, 292–297. [Google Scholar] [CrossRef]
  14. Hecquard, J.; Saint-Aubert, J.; Argelaguet, F.; Pacchierotti, C.; Lécuyer, A.; Macé, M. Fostering Empathy in Social Virtual Reality through Physiologically Based Affective Haptic Feedback. In Proceedings of the 2023 IEEE World Haptics Conference (WHC), Delft, The Netherlands, 10–13 July 2023; pp. 78–84. [Google Scholar]
  15. Aylett, R. Narrative in Virtual Environments—Towards Emergent Narrative. In AAAI Technical Report FS-99-01; AAAI Press: Menlo Park, CA, USA, 1999; Available online: https://www.researchgate.net/publication/266211095_Narrative_in_Virtual_Environments_-Towards_Emergent_Narrative (accessed on 31 July 2025).
  16. Clepper, G.; Gopinath, A.; Martinez, J.S.; Farooq, A.; Tan, H.Z. A Study of the Affordance of Haptic Stimuli in a Simulated Haunted House. In Proceedings of the HCII 2022, Lecture Notes in Computer Science (LNCS), Virtual, 26 June–1 July 2022; Springer: Cham, Switzerland, 2022; Volume 13321, pp. 182–197. [Google Scholar]
  17. Khamis, M.; Schuster, N.; George, C.; Pfeiffer, M. ElectroCutscenes: Realistic Haptic Feedback in Cutscenes of Virtual Reality Games Using Electric Muscle Stimulation. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST ’19), Parramatta, NSW, Australia, 12–15 November 2019; ACM: New York, NY, USA, 2019; p. 10. [Google Scholar]
  18. Desnoyers-Stewart, J.; Bergamo Meneghini, M.; Stepanova, E.R.; Riecke, B.E. Real Human Touch: Performer-Facilitated Touch Enhances Presence and Embodiment in Immersive Performance. Front. Virtual Real. 2024, 4, 1336581. Available online: https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.1336581 (accessed on 31 July 2025). [CrossRef]
  19. Togelius, J.; Yannakakis, G.N.; Stanley, K.O.; Browne, C. Search-Based Procedural Content Generation: A Taxonomy and Survey. IEEE Trans. Comput. Intell. AI Games 2011, 3, 172–186. [Google Scholar] [CrossRef]
  20. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  21. PROSPERO. About PROSPERO/What Is Eligible. National Institute for Health and Care Research. Available online: https://www.crd.york.ac.uk/prospero/ (accessed on 13 September 2025).
  22. OSF Registries. Register Your Research. Available online: https://www.cos.io/products/osf-registries (accessed on 13 September 2025).
  23. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef] [PubMed]
  24. Sierra Rativa, A.; Postma, M.; van Zaanen, M. Can Virtual Reality Act as an Affective Machine? The Wild Animal Embodiment Experience and the Importance of Appearance. In Proceedings of the MIT LINC 2019 Conference, EPiC Series in Education Science. Cambridge, MA, USA, 18–20 June 2019; pp. 214–223. [Google Scholar]
  25. García-Valle, G.; Ferre, M.; Breñosa, J.; Vargas, D. Evaluation of Presence in Virtual Environments: Haptic Vest and User’s Haptic Skills. IEEE Access 2018, 6, 7224–7233. [Google Scholar] [CrossRef]
  26. Ooms, S.; Lee, M.; Stepanova, E.R.; Cesar, P.; El Ali, A. Haptic Biosignals Affect Proxemics Toward Virtual Reality Agents. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), Yokohama, Japan, 26 April–1 May 2025; ACM: New York, NY, USA, 2025; p. 18. [Google Scholar]
  27. Krogmeier, C.; Mousas, C.; Whittinghill, D. Human, Virtual Human, Bump! A Preliminary Study on Haptic Feedback. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 1032–1033. [Google Scholar]
  28. Ranasinghe, N.; Jain, P.; Nguyen, T.N.T.; Koh, K.C.R.; Tolley, D.; Karwita, S.; Lin, L.-Y.; Yan, L.; Shamaiah, K.; Chow, E.W.T.; et al. Season Traveller: Multisensory Narration for Enhancing the Virtual Reality Experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montréal, QC, Canada, 21–26 April 2018; ACM: Montreal, QC, Canada, 2018; pp. 1–13. [Google Scholar]
  29. Prasanth, S. Haptic Contact in Immersive 360° Cinematic Environment. Master’s Thesis, University of Canterbury, HIT Lab NZ, College of Engineering, Christchurch, New Zealand, February 2018. [Google Scholar]
  30. Lakens, D. Calculating and Reporting Effect Sizes to Facilitate Cumulative Science: A Practical Primer for t-Tests and ANOVAs. Front. Psychol. 2013, 4, 863. [Google Scholar] [CrossRef]
  31. Hong, Q.N.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B.; O’Cathain, A.; et al. The Mixed Methods Appraisal Tool (MMAT) Version 2018 for Information Professionals and Researchers. Educ. Inf. 2018, 34, 285–291. [Google Scholar] [CrossRef]
  32. van Erp, J.B.F.; Toet, A. Social Touch in Human–Computer Interaction. Front. Digit. Humanit. 2015, 2, 2. Available online: https://www.frontiersin.org/journals/digital-humanities/articles/10.3389/fdigh.2015.00002 (accessed on 31 July 2025). [CrossRef]
  33. Schubert, T.; Friedmann, F.; Regenbrecht, H. The Experience of Presence: Factor Analytic Insights. Presence Teleoperators Virtual Environ. 2001, 10, 266–281. [Google Scholar] [CrossRef]
  34. Lessiter, J.; Freeman, J.; Keogh, E.; Davidoff, J. A Cross-Media Presence Questionnaire: The ITC-Sense of Presence Inventory. Presence Teleoper. Virtual Environ. 2001, 10, 282–297. [Google Scholar] [CrossRef]
  35. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and Defining the Experience of Immersion in Games. Int. J. Hum. Comput. Stud. 2008, 66, 641–661. [Google Scholar] [CrossRef]
  36. Green, M.C.; Brock, T.C. The Role of Transportation in the Persuasiveness of Public Narratives. J. Pers. Soc. Psychol. 2000, 79, 701–721. [Google Scholar] [CrossRef]
  37. Appel, M.; Gnambs, T.; Richter, T.; Green, M.C. The Transportation Scale–Short Form (TS–SF). Media Psychol. 2015, 18, 243–266. [Google Scholar] [CrossRef]
  38. Kuijpers, M.M.; Hakemulder, F.; Tan, E.S.; Doicaru, M.M. Exploring Absorbing Reading Experiences: Developing and Validating a Self-Report Scale to Measure Story World Absorption. Sci. Study Lit. 2014, 4, 89–122. [Google Scholar] [CrossRef]
  39. Bradley, M.M.; Lang, P.J. Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
  40. Sung, Y.; John, K.; Yoon, S.H.; Seifi, H. HapticGen: Generative Text-to-Vibration Model for Streamlining Haptic Design. 2025. Available online: https://hapticgen.hcitech.org/static/pdfs/paper.pdf (accessed on 7 September 2025).
  41. Ren, Q.; Belpaeme, T. Touched by ChatGPT: Using an LLM to Drive Affective Tactile Interaction. arXiv 2025, arXiv:2501.07224. Available online: https://arxiv.org/html/2501.07224v1 (accessed on 7 September 2025). [CrossRef]
  42. Khan, M.H.; Altamirano Cabrera, M.; Iarchuk, D.; Mahmoud, Y.; Trinitatova, D.; Tokmurziyev, I.; Tsetserukou, D. HapticVLM: VLM-Driven Texture Recognition Aimed at Intelligent Haptic Interaction. arXiv 2025, arXiv:2505.02569. Available online: https://arxiv.org/html/2505.02569v1 (accessed on 7 September 2025).
  43. Dong, A. LUMIEA: Enhancing User Engagement in Storytelling: Empowering Personal Narratives through AI-Generated Environments and Tactile Interaction in Mixed Reality. Master’s Thesis, OCAD University Open Research Repository, Toronto, ON, Canada, 2025. [Google Scholar]
  44. Schuemie, M.J.; van der Straaten, P.; Krijn, M.; van der Mast, C.A.P.G. Research on Presence in Virtual Reality: A Survey. Cyberpsychol. Behav. 2001, 4, 183–201. [Google Scholar] [CrossRef]
  45. Kilteni, K.; Groten, R.; Slater, M. The Sense of Embodiment in Virtual Reality. Presence Teleoper. Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef]
  46. ISO 13091-1: Mechanical Vibration—Vibrotactile Perception Thresholds; International Organization for Standardization: Geneva, Switzerland, 2001/2011. Available online: https://www.iso.org/popular-standards.html (accessed on 3 October 2025).
  47. Wobbrock, J.O.; Kane, S.K.; Gajos, K.Z.; Harada, S.; Froehlich, J. Ability-Based Design: Concept, Principles and Examples. ACM Trans. Access. Comput. 2011, 3, 1–27. [Google Scholar] [CrossRef]
  48. Busselle, R.; Bilandzic, H. Measuring Narrative Engagement. Media Psychol. 2009, 12, 321–347. [Google Scholar] [CrossRef]
  49. Jingu, A.; Strohmeier, P.; AliAbbasi, E.; Steimle, J. Scene2Hap: Combining LLMs and Physical Modeling for Automatically Generating Vibrotactile Signals for Full VR Scenes. arXiv 2025, arXiv:2504.19611. Available online: https://arxiv.org/pdf/2504.19611 (accessed on 7 September 2025). [CrossRef]
  50. Kishor, I.; Goyal, P.; Gantla, H.; Goyal, P.; Mamodiya, U. AI-Driven Haptic Technologies Revolutionizing Patient Rehabilitation. In Integrating AI With Haptic Systems for Smarter Healthcare Solutions; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 47–84. [Google Scholar]
  51. Gehrke, L.; Koselevs, A.; Klug, M.; Gramann, K. Neuroadaptive Haptics: Comparing Reinforcement Learning from Explicit Ratings and Neural Signals for Adaptive XR Systems. arXiv 2025, arXiv:2504.15984. Available online: https://arxiv.org/html/2504.15984v2 (accessed on 7 September 2025). [CrossRef]
  52. Regimbal, J.; Cooperstock, J.R. Investigating Haptic Co-creation with Reinforcement Learning. In Haptics: Understanding Touch; Technology and Systems; Applications and Interaction; Kajimoto, H., Lopes, P., Pacchierotti, C., Basdogan, C., Gori, M., Lemaire-Semail, B., Marchal, M., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2024; Volume 14769, pp. 448–454. [Google Scholar] [CrossRef]
  53. Heravi, N.; Culbertson, H.; Okamura, A.M.; Bohg, J. Development and Evaluation of a Learning-Based Model for Real-time Haptic Texture Rendering. arXiv 2024, arXiv:2212.13332. Available online: https://arxiv.org/html/2212.13332v3 (accessed on 7 September 2025). [CrossRef]
  54. Hernández, Q.; Martins, P.; Tesan, L.; Alfaro, I.; González, D.; Chinesta, F.; Cueto, E. A Neural Network Architecture for Physically-Consistent Haptic Rendering. Virtual Real. 2025, 29, 114. [Google Scholar] [CrossRef]
Figure 1. PRISMA 2020 flow of study selection.
Figure 2. Distribution of Haptic Modalities Across Included Studies. The chart displays the number of studies (N = 10) that utilized each type of haptic technology. Note that some studies employed multiple modalities.
Figure 3. Frequency of Measured User Experience Outcomes Across Included Studies (N = 10). The chart illustrates the number of studies that assessed each of the four primary UX outcomes targeted by this review, highlighting the scarcity of research on narrative comprehension.
Figure 4. Proportion of studies with significant positive effects by outcome, shown as four bars: Presence (4/8), Immersion (3/3), Emotion (1/5 positive; mixed results annotated), Comprehension (0/0; N/A). Counts reflect study-level summaries in Table 3; where p-values were not reported, studies were not counted as significant.
Figure 5. Roadmap for intelligent haptic storytelling. Phase I builds data/tools/metrics; Phase II tests AI/procedural vs. manual authoring (with no-haptics baseline); Phase III implements closed-loop adaptivity and probes contextual moderators and safety.
Table 1. Study characteristics and quantitative outcomes (n = 10).
Paper | N | Design | Narrative Context | Haptic Intervention (Type/Placement) | Outcomes & Measures | Main Quantitative Result (Direction + p/ES) | Effect Size (Standardized)
Clepper et al. [16] | 31 | Within-subjects (experimental) | Haunted house séance | Vibrotactile palm-based array | Immersion (adjective ratings) | Unique, handcrafted signals > generic multiplexed signal (↑ immersion), p < 0.05 | — (insufficient)
Sierra Rativa et al. [24] | 51 | Between-subjects (experimental) | Wild animal embodiment (distress event) | Vibrotactile haptic vest | Immersion; Empathy; Perceived pain (custom scales) | Natural body appearance ↑ immersion (p < 0.05); perceived pain correlated with empathy (+) | — (insufficient)
Khamis et al. [17] | 22 | Repeated measures (within-subjects) | VR game cutscenes | EMS (arm electrodes) + vibrotactile | Presence, Realism (IPQ) | EMS > vibrotactile & no haptics for presence (p < 0.01) and realism (p < 0.05) | — (insufficient)
García-Valle et al. [25] | 23 | Experimental | Post-explosion train station | Haptic vest (tactile + thermal) | Presence, Realism (PQ) | Haptic vest ↑ presence & realism; p < 0.05 | — (insufficient)
Hecquard et al. [14] | 38 | Within-subjects (experimental) | Social VR talk with stressed presenter | Vibrotactile + pressure (wristband, belt); physiologically mapped | Empathy, Presence, Anxiety, Engagement (IPQ; prefs) | Sympathetic haptics ↑ empathy (p < 0.001); presence NS; order effects noted | — (insufficient)
Ooms et al. [26] | 31 | Within-subjects (experimental) | Close encounters with virtual agents | Vibrotactile + thermal (controller, custom device) | Arousal, Discomfort, Presence (IPQ; ratings) | Vibrotactile heartbeats ↑ perceived arousal & discomfort; presence NS (p NR) | — (p NR)
Krogmeier et al. [27] | 8 | Between-subjects (experimental) | Urban crosswalk collision | Vibrotactile haptic vest (virtual human “bump”) | Presence (SUS), Embodiment, Arousal (GSR) | Haptics ↑ presence (p = 0.004) & embodiment (p = 0.005); GSR NS | r ≈ 0.98 (presence; from t(3) = 8.0)
Desnoyers-Stewart et al. [18] | 108 | Mixed-methods, experimental | Immersive dance performance | Performer-facilitated touch; physical prop | Presence (TPI), Embodiment, Affect, Narrative engagement | Full human touch ↑ presence, embodiment, valence, arousal (p NR) | — (p NR)
Ranasinghe et al. [28] | 20 | Experimental | Hot air balloon journey through seasons | Custom HMD add-on: thermal, wind, olfactory | Presence (PQ), Engagement (GEQ), Arousal (EDA, HR) | Multisensory > AV-only for presence & engagement; reduced EDA (frustration); p NR | — (p NR)
Sasikumar et al. [29] | 32 | 2 × 2 factorial (experimental) | 360° cinematic battle scene | Wearable vest; non-wearable wind/floor vibrations | Presence (IPQ), Immersion | Haptics ↑ spatial presence (p < 0.001) & realism (p < 0.05); ↑ overall experience | — (insufficient)
Study characteristics and quantitative outcomes: For each included study: design, narrative context, haptic intervention, outcomes/measures, and the main quantitative result (direction with p-value or effect size where reported). Notes: ES = effect size; p NR = p-value not reported.
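Where only a t statistic is reported, the standardized effect size column can be reconstructed with the t-to-r conversion given by Lakens [30], r = sqrt(t² / (t² + df)). A minimal sketch reproducing the value reported for Krogmeier et al. [27] (the function name is illustrative, not from any study's code):

```python
import math

def t_to_r(t: float, df: int) -> float:
    """Convert a t statistic to the effect size r (Lakens, 2013):
    r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Krogmeier et al. [27] report t(3) = 8.0 for the presence outcome
r = t_to_r(8.0, 3)
print(round(r, 2))  # 0.98
```

The same conversion could be applied to the remaining studies if their authors released the underlying test statistics, which is why most cells above remain "— (insufficient)".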
Table 2. Grey-literature snapshot (non-synthesized).
Item (Short) | Venue/Type | AI Approach | Haptic Method | Narrative VR? | User Study? | Why Excluded
Sung et al. [40] | CHI ’25 (peer-reviewed) | Generative (text → vibration) | Vibrotactile | No | Yes (non-narrative) | Not narrative VR
Ren & Belpaeme [41] | Preprint (arXiv) | LLM-driven patterns | Vibrotactile (wearable sleeve) | No | Yes (non-narrative) | Not narrative VR
Khan et al. [42] | Preprint (arXiv) | VLM + CNN (vision → tactile/thermal) | Vibrotactile & thermal | No | No (system eval only) | No VR/narrative user study
Dong [43] | Thesis/Exhibition (OCAD U) | LLM + speech | Tactile (unspecified) | Yes (MR) | No | No user study
See Appendix B, Table A1 for full fields.
Table 3. Qualitative synthesis of user-experience outcomes (presence, immersion, affect, narrative comprehension) across included studies.
Paper | Immersion | Presence | Narrative Comprehension | Emotional Engagement
Clepper et al. [16] | Unique, handcrafted signals rated as more immersive than a generic, multiplexed signal. | Not Measured | Not Measured | Not Measured
Sierra Rativa et al. [24] | Natural body appearance led to higher immersion (p < 0.05). Immersion correlated with perceived pain and empathy. | Not Measured | Not Measured | Perceived pain from haptic vest correlated positively with dispositional empathy.
Khamis et al. [17] | EMS led to higher realism and involvement scores on IPQ (p < 0.05). | EMS significantly increased spatial presence and sense of “being there” compared to vibrotactile and no haptics (p < 0.01). | Not Measured | Not Measured
García-Valle et al. [25] | Not Measured | Haptic vest improved user-reported sense of presence and realism. | Not Measured | Not Measured
Hecquard et al. [14] | Not Measured | No significant difference between haptic conditions, but order effect observed. | “Time on presenter” measured as a proxy for engagement. | Sympathetic haptic feedback was significantly preferred for fostering empathy (p < 0.001).
Ooms et al. [26] | Not Measured | No significant difference in IPQ scores across haptic conditions. | Not Measured | Vibrotactile heartbeats significantly increased perceived arousal and discomfort.
Krogmeier et al. [27] | Not Measured | Haptic feedback significantly increased presence scores (p = 0.004). | Not Measured | No significant difference in objective GSR arousal with small sample.
Ranasinghe et al. [28] | Multisensory (including haptics) configuration led to highest engagement scores on GEQ. | Any added sensory modality (wind, thermal) improved presence over AV-only. All modalities combined (ST) provided the highest presence. | Not Measured | Multisensory ST configuration reduced EDA (frustration) and stabilized HR, suggesting improved emotional state.
Sasikumar et al. [29] | Haptic feedback significantly increased overall experience scores, a proxy for immersion (p < 0.01). | Both wearable and non-wearable haptics significantly increased spatial presence (p < 0.001) and realism (p < 0.05). | Not Measured | Not Measured
Abbreviations: IPQ = Igroup Presence Questionnaire; PQ = Presence Questionnaire; TPI = Temple Presence Inventory; GEQ = Game Experience Questionnaire; EMS = electrical muscle stimulation; EDA = electrodermal activity; HR = heart rate; GSR = galvanic skin response.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Perumal, V.; Shah, Z.J. AI-Powered Procedural Haptics for Narrative VR: A Systematic Literature Review. Multimodal Technol. Interact. 2026, 10, 9. https://doi.org/10.3390/mti10010009
