AI-Powered Procedural Haptics for Narrative VR: A Systematic Literature Review
Abstract
1. Introduction
1.1. The Sensory Frontier of Digital Narratives: Haptics in VR Storytelling
1.2. The Psychological Impact of Haptics: Presence, Embodiment, and Emotion
- Immersion: Technically defined as an objective property of a technology, immersion describes the degree to which a VR system can provide the user with an illusion of coherent, enveloping sensory input while insulating the user from sensations of the physical world [8].
- Factors affecting immersion include the quality of displays, the number of sensory modalities that are rendered, and the system’s latency in tracking user movement [9].
- Presence: Generally defined as the subjective psychological result of immersion, presence refers to the feeling of “being there” in the virtual world [10,11]. It describes a state of consciousness in which the mediated sensory representations of the virtual world predominate, causing users to behave and respond as if they were part of the virtual world and the events occurring within it were real [12]. A high sense of presence is a chief goal of most VR applications, especially those focusing on narrative and training.
- Narrative Comprehension: The user’s cognitive ability to follow, understand, and interpret a story’s plot, character motives, and themes. According to cognitive models of discourse comprehension, narrative comprehension is the process of constructing a coherent mental “situation model” of the story world [13]. Ideally, the role of haptics would be to clarify or bolster narrative information over the course of such construction and not merely constitute a distracting sensory embellishment.
- Emotional Engagement: The valence and intensity of the user’s affective response to a narrative. Examples include empathy with characters, anxiety during suspenseful plot points, and arousal in response to story action. Touch is an inherently affective modality, which makes haptics a powerful candidate for emotional modulation [6]. Affective haptics research has demonstrated that localized patterns of tactile stimuli can be used to classify and communicate specific emotions, indicating that haptic feedback can potentially serve as a direct channel of emotional contagion between a user and a virtual character or narrative [14].
1.3. The Authoring Dilemma: From Manual Craftsmanship to Automated Generation
1.4. Rationale and Objectives
- Synthesize the empirical effects of synchronized (non-AI) haptics on presence, immersion, emotion, and narrative comprehension across study designs.
- Characterize design factors (e.g., modality, narrative congruence, timing/placement) associated with stronger outcomes.
- Establish a baseline for comparative evaluation by future AI-procedural haptics research and clearly identify gaps (e.g., missing effect sizes, scarce comprehension measures).
- Map threats to validity and reporting issues to support stronger study design and reporting standards in this emerging area [20].
2. Methods
2.1. Protocol, Registration, and Reporting Guideline
2.2. Eligibility Criteria
- Population (P): Human participants engaging with narrative virtual reality (VR) experiences. This is defined as experiences with a predefined story structure, whether linear or branching, thereby excluding purely sandbox environments or goal-oriented gameplay without a clear narrative arc.
- Intervention (I): Haptic feedback synchronized to narrative events. This includes manually authored cues (hand-scripted to specific timestamps or events) and rule-based procedural cues (e.g., direct, deterministic mapping of a character’s biosignals to haptic output). The search explicitly targeted, but did not find, studies employing adaptive, AI-driven generative models for haptic synthesis.
- Comparison (C): A no-haptics control (audio–visual only) or an alternative haptic strategy (e.g., vibrotactile vs. EMS; different timing/placement/intensity).
- Outcomes (O): At least one user-experience outcome relevant to narrative VR: presence, immersion, emotional/affective engagement, and narrative comprehension. Validated instruments or defined custom measures were accepted.
- Study design (S): Experimental, quasi-experimental, or mixed-methods user studies conducted with human participants and published in peer-reviewed journals or conference proceedings in English.
2.3. Information Sources and Search Strategy
- Exact DOI match (auto-merge) → remove exact duplicates;
- Title + first-author + year (normalized) → merge where all three match;
- Fuzzy title match (≥0.85 Jaro–Winkler) with manual inspection for near-duplicates and conference/journal pairs.
- Version policy: where a peer-reviewed version existed, corresponding preprints were retained only in the grey-literature list and excluded from PRISMA counts; where conference and journal versions existed, we retained the most complete archival version and merged metadata.
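The three-pass deduplication above can be sketched in Python. The record fields (`doi`, `title`, `first_author`, `year`), the normalization rules, and the example records are illustrative assumptions; the ≥0.85 Jaro–Winkler threshold follows the protocol stated above.

```python
import re

def jaro(s1: str, s2: str) -> float:
    """Jaro similarity between two strings."""
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if not n1 or not n2:
        return 0.0
    window = max(n1, n2) // 2 - 1
    m1, m2 = [False] * n1, [False] * n2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(n2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    k = transpositions = 0
    for i in range(n1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions // 2
    return (matches / n1 + matches / n2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro-Winkler: boosts the Jaro score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

def normalize(title: str) -> str:
    """Lowercase and strip punctuation for title keys."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records):
    """Pass 1: exact DOI auto-merge. Pass 2: normalized title+author+year merge.
    Pass 3: flag fuzzy near-duplicates (JW >= 0.85) for manual inspection."""
    kept, doi_seen, key_seen = [], set(), set()
    for r in records:
        doi = (r.get("doi") or "").lower()
        if doi and doi in doi_seen:
            continue  # pass 1
        key = (normalize(r["title"]), r["first_author"].lower(), r["year"])
        if key in key_seen:
            continue  # pass 2
        doi_seen.add(doi)
        key_seen.add(key)
        kept.append(r)
    flagged = [
        (a["title"], b["title"])
        for i, a in enumerate(kept)
        for b in kept[i + 1:]
        if jaro_winkler(normalize(a["title"]), normalize(b["title"])) >= 0.85
    ]
    return kept, flagged

# Illustrative records (metadata values are invented for the sketch):
records = [
    {"doi": "10.1/x", "title": "Haptic Storytelling in VR", "first_author": "Smith", "year": 2020},
    {"doi": "10.1/X", "title": "Haptic Storytelling in VR", "first_author": "Smith", "year": 2020},
    {"doi": "", "title": "Haptic Storytelling in VR.", "first_author": "smith", "year": 2020},
    {"doi": "10.1/y", "title": "Haptic Story-telling in Virtual Reality", "first_author": "Smith", "year": 2021},
]
kept, flagged = deduplicate(records)
```

In this sketch the second record merges on DOI, the third on the normalized metadata key, and the remaining conference/journal-style pair is flagged for manual review rather than auto-merged, matching the protocol's distinction between automatic and inspected passes.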
2.4. Selection and Data Extraction Process
Effect Sizes (Calculation and Reporting)
2.5. Grey-Literature Scoping: Descriptive, Non-Synthesized
- Sources: arXiv (preprints); ACM CHI and SIGGRAPH workshop/demo/program pages.
- Purpose: Map preliminary AI-based haptic-generation approaches relevant to narrative VR.
- Screening & extraction: One reviewer screened records and extracted the title, venue/source, year, AI approach, haptic method, VR-narrative relevance, presence of user study (Y/N), and basic device/placement/synchronization details. A second reviewer verified entries.
- Integration: Grey-literature items were not included in the PRISMA flow or the evidence synthesis and were not quality-appraised. A brief descriptive summary appears in Results (Section 3.2), and full entries are listed in Appendix B (Table A1). The grey-literature scoping was conducted on 7 September 2025 (Asia/Al Khobar).
2.6. Study-Level Quality Appraisal
3. Results
3.1. Overview of Included Studies
3.2. Grey-Literature Scoping: AI-Powered Procedural Haptics (Descriptive Results)
3.3. The Impact of Haptic Feedback on Immersion and Presence
3.4. Narrative Comprehension: The Overlooked Dimension
3.5. Quality Appraisal (MMAT)
- Screening (applies to all designs): S1 clear RQs = 10/10 “Yes”; S2 data address RQs = 10/10 “Yes” [31].
- Quantitative non-randomized (n = 9; MMAT 3.1–3.5): 3.1 sample representativeness = 0/9 “Yes”; 3.2 appropriate/validated measurements = 8/9 “Yes”; 3.3 complete outcome data = 9/9 “Yes”; 3.4 confounders accounted for = 5/9 “Yes”; 3.5 intervention administered as intended = 9/9 “Yes” (item-level tallies in Appendix C.3).
- Mixed-methods (n = 1; MMAT 5.1–5.5, [18]): 5.1 design relevance = 1 “Yes”; 5.2 integration relevance = 1 “Yes”; 5.3 interpretation of integration = 1 “Yes”; 5.4 treatment of divergences = 1 “Can’t tell”; 5.5 component quality = 1 “Yes”.
4. Discussion
- What this review adds: (I) First PRISMA-style synthesis of user-evaluated haptics in narrative VR, spanning 10 studies and consolidating what reliably improves presence/immersion/affect. (II) An empirical baseline table (Table 1) that unifies design, hardware, synchronization, measures, and quantitative effects usable as a benchmark for future AI systems. (III) Gap formalization: 0/10 studies evaluate AI-procedural haptics with human participants in narrative VR, and 0/10 use validated narrative-comprehension instruments. (IV) An actionable roadmap with experimental design details (non-inferiority margins, authoring-time accounting, latency budgets) to move the field from theory to evidence.
- Positioning vs. prior “gap” writings: Recent position/overview pieces (e.g., domain-general AI + haptics, medical/rehab applications, and non-archival preprints) acknowledge a gap and propose directions, but they do not: (I) run a PRISMA-style search focused on narrative VR with synchronized haptics, (II) synthesize user-study outcomes, or (III) provide a head-to-head experimental plan with authoring-time endpoints. Our contribution is to establish the evidence baseline for narrative VR, quantify the absence of AI-procedural user studies, and supply a three-phase, testable program.
4.1. Contextual Factors and Haptic Design
- Narrative Congruence: Foremost among these moderators is the extent to which a haptic signal semantically meshes with the audio–visual events it accompanies. The strongest evidence for this narrative-congruence account comes from Clepper et al. [16]: handcrafted, pre-programmed haptic signals designed to align directly with particular narrative events (e.g., a croaking frog, a heartbeat) were judged more immersive than generic, incongruent haptic signals. The implication is straightforward: to be effective, haptics cannot be applied blindly; they must be semantically grounded in what the story and environment present.
- User Experience and Expectations: User characteristics, particularly prior experience or literacy with VR and haptic technologies, modulate how feedback is appraised. In perhaps the only experiment in this literature to select participants based on experience with haptic technology, García-Valle et al. [25] found stark differences in feedback appraisal between seasoned “Haptic Experts” and casual “Non-Experts.” Whereas non-experts were generally satisfied with a given haptic signal, experts were substantially more discerning and prioritized realism and synchronization with audio–visual events. It is therefore reasonable to expect that as the general VR user base becomes more haptically literate, demand for higher-fidelity, more sophisticated haptic feedback will grow.
- Hardware and Modality: Properties of the hardware and modality themselves also modulate the user experience. The diversity of modalities represented in this review—from wearable vests to EMS electrodes to non-wearable fans and vibration platforms—reflects the breadth of available haptic systems [7]. Based on our analysis, no single modality holds a general advantage; rather, different modalities succeed insofar as they match the narrative being communicated. For example, EMS proved highly effective at conveying kinesthetic force feedback in fast-paced, action-oriented cutscenes [17], whereas thermal and wind-based modalities were well suited to setting environmental mood [28]. Notably, Sasikumar et al. [29] found no significant difference in presence ratings between wearable and non-wearable feedback systems, suggesting that how feedback is embedded in the narrative context may ultimately matter more than its form factor.
4.2. Haptics as a Channel for Physiological and Emotional Contagion
4.3. The State of the Craft: Empirically Validated Principles of Manual Haptic Design
4.4. The Unvalidated Science: The Empirical Void in AI-Powered Procedural Haptics
4.5. A Research Roadmap for Intelligent Haptic Storytelling
4.5.1. Phase I—Data, Tools, and Metrics
- Open toolkits with data logging: An accessible, open-source authoring and runtime library, provisionally named ‘HapTale,’ will be developed for the Unity engine (a widely used real-time 3D development platform for interactive media and XR). HapTale will provide a unified API abstracting low-level control for common actuators (ERM and LRA vibrotactors, Peltier-based thermal modules, and non-invasive EMS units). A critical feature will be its ‘logging-first’ architecture, which automatically timestamps and logs every narrative event (e.g., dialogue_start, event_explosion) and every delivered haptic primitive (e.g., vibration_start, actuator_id = torso_front_left, intensity = 0.8, frequency = 150 Hz, duration = 250 ms) to a standardized JSON format. This will facilitate the creation of shareable datasets.
- Standardized datasets & reporting: The project will release an initial paired event-to-haptic corpus based on several short narrative scenes. Each entry will include the event timestamp, a semantic event tag, the full haptic primitive description, and detailed metadata including HMD model, actuator model, body placement, and measured end-to-end latency. A minimal methods checklist will be published for reporting haptic studies, ensuring others can reproduce delivery timing and hardware context.
- Validated outcome battery: To address the critical gap in cognitive assessment, all subsequent studies in this roadmap must include the following core battery of validated instruments: Presence: the Igroup Presence Questionnaire (IPQ); Narrative Engagement: the Narrative Engagement Scale; Story World Absorption: the Story World Absorption Scale (SWAS); and Narrative Comprehension: a 10-item, multiple-choice quiz keyed to specific plot points, character motivations, and thematic details of the stimulus narrative, administered immediately post-experience.
- Hybrid authoring pilots with cost accounting: Compare AI-seeded + human-edited workflows to fully manual pipelines; log authoring time (person-minutes) and perceived workload (e.g., NASA-TLX) to quantify efficiency.
- Latency measurement protocol: For each actuator class, report end-to-end event→tactile onset (ms) using a repeatable bench test (audio marker on the narrative timeline; microcontroller logs actuator trigger; contact mic/accelerometer on the device detects onset). Publish raw logs + script.
- Minimal methods checklist: Include HMD model, actuator model/placement map, calibration routine, per-scene latency budget, and exact synchronization mechanism (timestamp, trigger bus, or physics event).
- Dataset schema stub (JSON): {event_time_ms, event_tag, haptic:{actuator_id, placement, type, intensity, freq_hz, duration_ms, waveform}, device:{HMD, actuator_model}, latency_ms}.
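A minimal logger producing entries in the dataset schema stub above can be sketched as follows. HapTale is the provisional toolkit named in this roadmap, not an existing library, and the concrete values (HMD name, actuator model, latencies) are illustrative assumptions; only the field names come from the schema stub.

```python
import json

def make_log_entry(event_time_ms, event_tag, actuator_id, placement, htype,
                   intensity, freq_hz, duration_ms, waveform,
                   hmd, actuator_model, latency_ms):
    """Build one record following the schema stub:
    {event_time_ms, event_tag, haptic:{...}, device:{...}, latency_ms}."""
    return {
        "event_time_ms": event_time_ms,
        "event_tag": event_tag,
        "haptic": {
            "actuator_id": actuator_id,
            "placement": placement,
            "type": htype,
            "intensity": intensity,
            "freq_hz": freq_hz,
            "duration_ms": duration_ms,
            "waveform": waveform,
        },
        "device": {"HMD": hmd, "actuator_model": actuator_model},
        "latency_ms": latency_ms,
    }

# Example: the vibration primitive described in the toolkit bullet above.
# Device names and latency are invented placeholders.
entry = make_log_entry(
    event_time_ms=12500, event_tag="event_explosion",
    actuator_id="torso_front_left", placement="torso", htype="vibration",
    intensity=0.8, freq_hz=150, duration_ms=250, waveform="sine",
    hmd="Quest 3", actuator_model="LRA-0934", latency_ms=38,
)
line = json.dumps(entry)  # one object per line (JSONL) for shareable logs
```

Serializing one object per line keeps logs append-only at runtime and trivially mergeable across sessions, which supports the ‘logging-first’ architecture described above.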
4.5.2. Phase II—Head-to-Head Efficacy Studies
- Experimental Design: A preregistered non-inferiority trial with a within-subjects crossover design will be conducted. Participants (target N = 48; see the power analysis below) will experience two versions of the same 5-min narrative VR scene. Condition A (Manual Gold Standard): haptics will be manually authored by an expert designer with significant experience in haptic feedback design (target authoring time: 180 person-minutes). Condition B (AI Procedural): haptics will be generated by the target AI system from the same audio–visual and narrative script inputs. The order of conditions will be counterbalanced.
- Power: A within-subjects non-inferiority margin of d = −0.30 on IPQ Spatial Presence with α = 0.05 and 1 − β = 0.80 (r ≈ 0.5 within-subject correlation) typically requires N ≈ 40–48. We will target N = 48 to allow for attrition/assumption violations.
- Endpoints and Statistical Analysis:
- Primary Efficacy Endpoint: The mean score on the IPQ Spatial Presence subscale. The non-inferiority margin will be set at a standardized mean difference of d = −0.3. The AI system will be considered non-inferior if the lower bound of the 95% confidence interval for the mean difference (AI minus Manual) is above −0.3 standard deviations.
- Primary Efficiency Endpoint: Total authoring time in person-minutes. A superiority hypothesis will be tested, requiring a statistically significant reduction of at least 50% in authoring time for the AI system compared to the manual condition.
- Secondary Endpoints: Scores on the Narrative Engagement Scale, Story World Absorption Scale, and the narrative comprehension quiz; perceived realism and congruence ratings; and author workload measured via the NASA-TLX.
- Analysis: Linear mixed-effects models with participant random intercepts; fixed effects for Condition and Order; Satterthwaite df; report Hedges g and 95% CIs. Primary NI test on IPQ-SP; Holm correction for secondary outcomes. Report both ITT and per-protocol sets. Share code and preregistration (OSF).
- Fidelity and Sharing: Manipulation checks will be included to confirm that participants perceived the haptic feedback. All stimuli, haptic patterns, source code, and anonymized data will be shared publicly alongside the publication.
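The primary non-inferiority decision rule (lower 95% CI bound of the standardized paired difference above d = −0.30) can be sketched as follows. The scores are synthetic, and the normal-approximation standard error for Cohen's d_z is a simplification of the planned mixed-model analysis; a t-based or bootstrap interval would be preferable at N = 48.

```python
import math
from statistics import NormalDist, mean, stdev

def noninferiority_dz(ai_scores, manual_scores, margin=-0.30, alpha=0.05):
    """Paired non-inferiority check on the standardized mean difference (d_z).
    Non-inferior if the lower CI bound of d_z (AI minus Manual) exceeds margin."""
    diffs = [a - m for a, m in zip(ai_scores, manual_scores)]
    n = len(diffs)
    d_z = mean(diffs) / stdev(diffs)             # Cohen's d_z for paired data
    se = math.sqrt(1 / n + d_z ** 2 / (2 * n))   # approximate SE of d_z
    z = NormalDist().inv_cdf(1 - alpha / 2)      # normal approximation, ~1.96
    lower = d_z - z * se
    return {"d_z": d_z, "ci_lower": lower, "non_inferior": lower > margin}

# Synthetic IPQ Spatial Presence scores: AI condition on par with manual.
manual = [4.0 + 0.1 * (i % 5) for i in range(48)]
ai = [m + (0.05 if i % 2 else -0.05) for i, m in enumerate(manual)]
result = noninferiority_dz(ai, manual)
```

With a true difference of zero, the lower bound at N = 48 sits near −0.28, just inside the −0.30 margin, which illustrates why the power analysis above targets N = 48 rather than the smaller N ≈ 30.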
4.5.3. Phase III—Closed-Loop Adaptivity and Contextual Moderators
- Adaptive Control with User-State Feedback: A closed-loop controller will be implemented where the generative model’s output parameters (e.g., intensity, complexity, frequency of haptic events) are modulated in real-time by user biosignals. An Empatica E4 wristband will provide electrodermal activity (EDA) and heart rate (HR) data. For example, in a suspenseful scene, rising EDA could trigger the AI to increase the intensity or frequency of subtle, unsettling haptic cues. The system will be engineered to target an event-to-haptic-onset latency of ≤100 ms and a biosignal-to-adaptation latency of ≤500 ms.
- Mapping the ‘Haptic Uncanny Valley’: A psychophysical study will be conducted to investigate the perceptual boundary where increased haptic complexity becomes distracting or unnatural. An AI model will generate a haptic effect (e.g., the sensation of footsteps on gravel) and systematically vary its complexity (e.g., number of unique vibration primitives, temporal randomness) along a continuum. Participants will rate each stimulus for perceived realism and distraction. The goal is to model the psychometric function and identify the ‘sweet spot’ where realism is maximized before distraction begins to increase, analogous to the uncanny valley in visual character rendering.
- Ethical Reporting and Safety: All studies in this phase will include rigorous reporting on calibration procedures, measured latency, sickness screening protocols, and explicit stop rules. Participants will have full control over haptic intensity limits and a clear opt-out mechanism. All adverse events will be systematically recorded and reported. Safety engineering will enforce per-actuator maximum intensity and duty-cycle ceilings; EMS will use medically safe pulse widths and frequencies; adaptive controllers will be rate-limited so that intensity cannot increase faster than X units/s; and an on-controller “panic stop” will be included.
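The safety constraints above (intensity ceiling, rise-rate limit, panic stop) can be sketched as a rate-limited controller. The class, parameter values, and the EDA-to-intensity mapping are illustrative assumptions, not part of any reviewed system; a concrete rate cap stands in for the unspecified "X units/s".

```python
class SafeAdaptiveController:
    """Rate-limited haptic intensity controller with a hard ceiling and panic stop."""

    def __init__(self, max_intensity=1.0, max_rise_per_s=0.5):
        self.max_intensity = max_intensity
        self.max_rise_per_s = max_rise_per_s  # stands in for the "X units/s" cap
        self.intensity = 0.0
        self.stopped = False

    def panic_stop(self):
        """Immediately zero output and latch off until explicitly reset."""
        self.stopped = True
        self.intensity = 0.0

    def update(self, target, dt):
        """Move toward `target` intensity, never rising faster than the cap.
        Decreases are applied immediately (safety-asymmetric)."""
        if self.stopped:
            return 0.0
        target = min(max(target, 0.0), self.max_intensity)
        if target > self.intensity:
            self.intensity = min(target, self.intensity + self.max_rise_per_s * dt)
        else:
            self.intensity = target
        return self.intensity

def eda_to_target(eda_norm):
    """Illustrative mapping: normalized EDA arousal (0-1) to cue intensity."""
    return 0.2 + 0.6 * min(max(eda_norm, 0.0), 1.0)
```

The asymmetry (slow rises, instantaneous drops) reflects the stop-rule requirement: a biosignal-driven controller may escalate only gradually, but de-escalation and the panic stop take effect within one control tick.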
4.6. Broader Implications and Limitations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1. PRISMA-S Search Strategies
- VR/storytelling → (“virtual reality” OR vr OR immersive) AND (storytelling OR narrative OR “interactive drama”)
- Haptics → (haptic* OR vibrotactile OR thermal OR “force feedback” OR kinesthetic OR tactile OR “electrical muscle stimulation” OR EMS OR wind OR “affective haptics” OR “haptic storytelling”)
- AI/procedural → (procedural OR generative OR “procedurally generated” OR “artificial intelligence” OR ai OR “machine learning” OR algorithmic)
- Outcomes → (immersion OR presence OR engagement OR embodiment OR emotion* OR “user experience” OR ux OR comprehension OR transportation OR absorption OR “story world absorption”)
Appendix A.2. Scopus (Elsevier)—TITLE-ABS-KEY
Appendix A.3. Web of Science Core Collection
Appendix A.4. PubMed (NIH)—[Tiab]
Appendix A.5. PsycINFO (EBSCOhost Syntax; TI, AB)
Appendix A.6. IEEE Xplore (Advanced Search—All Metadata)
Appendix A.7. ACM Digital Library (Advanced Search—Title/Abstract/Author Keywords)
Filters: Publication Type = Proceedings, Journals; Language = English; Publication Date ≤ 31 July 2025
Appendix B. Grey-Literature Scoping (AI-Powered Procedural Haptics; Final Search: 7 September 2025)
| Paper | Year | Venue/Source (arXiv, CHI Workshop, SIGGRAPH Demo, Etc.) | Type (Preprint, Workshop Paper, Demo, Poster) | AI Approach (Rule-Based, ML, LLM, GAN, Other) | Haptic Method (Vibrotactile, Thermal, EMS, Force-Feedback, Other) | VR Narrative Context (Yes/No; Brief) | User Study with Human Participants? (Y/N) | Why Excluded from Synthesis (No User Study, Not Narrative VR, Non-Synchronous, Other) | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Sung et al. [40] | 2025 | Project page/preprint | Preprint | Generative model (text → audio → vibration) | Vibrotactile | No; general haptic generation | Y (A/B, N = 32) | Not narrative VR user study | Generates vibrotactile from text prompts; design efficiency focus |
| Ren & Belpaeme [41] | 2025 | arXiv | Preprint | LLM | Vibrotactile | No; affective patterns, not narrative | Y (N = 32) | Not narrative VR | Emotion/gesture recognition with LLM-generated tactile patterns |
| Khan et al. [42] | 2025 | arXiv | Preprint | Deep learning (CNN/VLM) | Vibrotactile; Thermal | No; material/temperature inference | N | No user study; not narrative VR | Infers material & temperature to drive haptics |
| Dong [43] | 2025 | Open Research Repository (OCADU) | Project/thesis | LLM + speech-driven generation | Tactile (unspecified) | Yes; personal storytelling MR | N (RTD prototypes) | No user study | Research Through Design; early prototypes |
| Jingu et al. [49] | 2025 | arXiv | Preprint | LLM + physical modeling | Vibrotactile | Possibly; full VR scenes (not user-evaluated) | N | No user study | Pipeline-level proposal; technical evaluation only |
| Kishor et al. [50] | 2025 | ResearchGate | Preprint/white paper | Reinforcement learning (DRL), others | Force feedback | No; rehab domain | N (technical/clinical claims) | Not narrative VR; no user study | Domain transfer from rehab; non-archival |
| Gehrke et al. [51] | 2025 | arXiv | Preprint | Reinforcement learning (RL) | Unspecified (adaptive XR haptics) | No; adaptive XR generic | Y (small N) | Not narrative VR | Compares RL from ratings vs. EEG; generic XR |
| Regimbal & Cooperstock [52] | 2024 | EuroHaptics 2024 (workshop/demo) | Workshop/demo paper | Reinforcement learning (RL) | Force feedback | No; co-creative design tool | Y (qualitative, N = 5) | Not narrative VR; qualitative only | Co-creation with RL agent; exploratory |
| Heravi et al. [53] | 2024 | arXiv | Preprint | Deep learning (action-conditional) | Vibrotactile | No; texture rendering | Y (perceptual) | Not narrative VR | Perceptual comparison vs. SOTA; non-narrative |
| Hernández et al. [54] | 2025 | ResearchGate | Preprint | Graph neural network (GNN) | Force feedback | No; physically consistent rendering | N | No user study; not narrative VR | Technical validation; non-archival |
Appendix C
Appendix C.1. MMAT (2018) [31] Item-Level Tallies by Study Design (Yes/Can’t Tell/No)
Appendix C.2. Screening (Applies to All Designs; n = 10)
- S1. Clear research questions—10/0/0
- S2. Data allow answering RQs—10/0/0
Appendix C.3. Quantitative Non-Randomized (n = 9)
- 3.1 Representative of target population—0/0/9
- 3.2 Appropriate/validated measurements—8/1/0
- 3.3 Complete outcome data—9/0/0
- 3.4 Confounders accounted for (design/analysis)—5/3/1
- 3.5 Intervention administered as intended—9/0/0
Appendix C.4. Mixed-Methods (n = 1)
- 5.1 MM design appropriate—1/0/0
- 5.2 Integration of strands relevant—1/0/0
- 5.3 Interpretation of integration adequate—1/0/0
- 5.4 Divergences/congruences addressed—0/1/0
- 5.5 Component quality adequate—1/0/0
References
- Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93.
- Israr, A.; Zhao, S.; Schwalje, K.; Klatzky, R.; Lehman, J. Feel Effects: Enriching Storytelling with Haptic Feedback. ACM Trans. Appl. Percept. 2014, 11, 17.
- Bolanowski, S.J., Jr.; Gescheider, G.A.; Verrillo, R.T.; Checkosky, C.M. Four Channels Mediate the Mechanical Aspects of Touch. J. Acoust. Soc. Am. 1988, 84, 1680–1694.
- Burdea, G.C. Keynote Address: Haptic Feedback for Virtual Reality. In Proceedings of the International Workshop on Virtual Prototyping, Laval, France, 17–29 May 1999; pp. 87–96. Available online: https://www.scirp.org/reference/referencespapers?referenceid=3068290 (accessed on 31 July 2025).
- Gibson, J.J. The Senses Considered as Perceptual Systems; Reprinted; Greenwood Press: Westport, CT, USA, 1983; ISBN 978-0-313-23961-8.
- Eid, M.A.; Al Osman, H. Affective Haptics: Current Research and Future Directions. IEEE Access 2016, 4, 26–40.
- Pacchierotti, C.; Sinclair, S.; Solazzi, M.; Frisoli, A.; Hayward, V.; Prattichizzo, D. Wearable Haptic Systems for the Fingertip and the Hand: Taxonomy, Review, and Perspectives. IEEE Trans. Haptics 2017, 10, 580–600.
- Slater, M.; Wilbur, S. A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence Teleoper. Virtual Environ. 1997, 6, 603–616.
- Jerald, J. The VR Book: Human-Centered Design for Virtual Reality; ACM Books; Association for Computing Machinery and Morgan & Claypool: New York, NY, USA; San Rafael, CA, USA, 2016; ISBN 978-1-970001-12-9.
- Witmer, B.G.; Singer, M.J. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence Teleoper. Virtual Environ. 1998, 7, 225–240.
- Lombard, M.; Ditton, T. At the Heart of It All: The Concept of Presence. J. Comput. Mediat. Commun. 1997, 3, JCMC321.
- Slater, M. Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557.
- Zwaan, R.A.; Langston, M.C.; Graesser, A.C. The Construction of Situation Models in Narrative Comprehension: An Event-Indexing Model. Psychol. Sci. 1995, 6, 292–297.
- Hecquard, J.; Saint-Aubert, J.; Argelaguet, F.; Pacchierotti, C.; Lécuyer, A.; Macé, M. Fostering Empathy in Social Virtual Reality through Physiologically Based Affective Haptic Feedback. In Proceedings of the 2023 IEEE World Haptics Conference (WHC), Delft, The Netherlands, 10–13 July 2023; pp. 78–84.
- Aylett, R. Narrative in Virtual Environments—Towards Emergent Narrative. In AAAI Technical Report FS-99-01; AAAI Press: Menlo Park, CA, USA, 1999; Available online: https://www.researchgate.net/publication/266211095_Narrative_in_Virtual_Environments_-Towards_Emergent_Narrative (accessed on 31 July 2025).
- Clepper, G.; Gopinath, A.; Martinez, J.S.; Farooq, A.; Tan, H.Z. A Study of the Affordance of Haptic Stimuli in a Simulated Haunted House. In Proceedings of the HCII 2022, Lecture Notes in Computer Science (LNCS), Virtual, 26 June–1 July 2022; Springer: Cham, Switzerland, 2022; Volume 13321, pp. 182–197.
- Khamis, M.; Schuster, N.; George, C.; Pfeiffer, M. ElectroCutscenes: Realistic Haptic Feedback in Cutscenes of Virtual Reality Games Using Electric Muscle Stimulation. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST ’19), Parramatta, NSW, Australia, 12–15 November 2019; ACM: New York, NY, USA, 2019; p. 10.
- Desnoyers-Stewart, J.; Bergamo Meneghini, M.; Stepanova, E.R.; Riecke, B.E. Real Human Touch: Performer-Facilitated Touch Enhances Presence and Embodiment in Immersive Performance. Front. Virtual Real. 2024, 4, 1336581. Available online: https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.1336581 (accessed on 31 July 2025).
- Togelius, J.; Yannakakis, G.N.; Stanley, K.O.; Browne, C. Search-Based Procedural Content Generation: A Taxonomy and Survey. IEEE Trans. Comput. Intell. AI Games 2011, 3, 172–186.
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
- PROSPERO. About PROSPERO/What Is Eligible. National Institute for Health and Care Research. Available online: https://www.crd.york.ac.uk/prospero/ (accessed on 13 September 2025).
- OSF Registries. Register Your Research. Available online: https://www.cos.io/products/osf-registries (accessed on 13 September 2025).
- Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230.
- Sierra Rativa, A.; Postma, M.; van Zaanen, M. Can Virtual Reality Act as an Affective Machine? The Wild Animal Embodiment Experience and the Importance of Appearance. In Proceedings of the MIT LINC 2019 Conference, EPiC Series in Education Science, Cambridge, MA, USA, 18–20 June 2019; pp. 214–223.
- García-Valle, G.; Ferre, M.; Breñosa, J.; Vargas, D. Evaluation of Presence in Virtual Environments: Haptic Vest and User’s Haptic Skills. IEEE Access 2018, 6, 7224–7233.
- Ooms, S.; Lee, M.; Stepanova, E.R.; Cesar, P.; El Ali, A. Haptic Biosignals Affect Proxemics Toward Virtual Reality Agents. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), Yokohama, Japan, 26 April–1 May 2025; ACM: New York, NY, USA, 2025; p. 18.
- Krogmeier, C.; Mousas, C.; Whittinghill, D. Human, Virtual Human, Bump! A Preliminary Study on Haptic Feedback. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 1032–1033.
- Ranasinghe, N.; Jain, P.; Nguyen, T.N.T.; Koh, K.C.R.; Tolley, D.; Karwita, S.; Lin, L.-Y.; Yan, L.; Shamaiah, K.; Chow, E.W.T.; et al. Season Traveller: Multisensory Narration for Enhancing the Virtual Reality Experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montréal, QC, Canada, 21–26 April 2018; ACM: Montreal, QC, Canada, 2018; pp. 1–13.
- Prasanth, S. Haptic Contact in Immersive 360° Cinematic Environment. Master’s Thesis, University of Canterbury, HIT Lab NZ, College of Engineering, Christchurch, New Zealand, February 2018.
- Lakens, D. Calculating and Reporting Effect Sizes to Facilitate Cumulative Science: A Practical Primer for t-Tests and ANOVAs. Front. Psychol. 2013, 4, 863.
- Hong, Q.N.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B.; O’Cathain, A.; et al. The Mixed Methods Appraisal Tool (MMAT) Version 2018 for Information Professionals and Researchers. Educ. Inf. 2018, 34, 285–291.
- van Erp, J.B.F.; Toet, A. Social Touch in Human–Computer Interaction. Front. Digit. Humanit. 2015, 2, 2. Available online: https://www.frontiersin.org/journals/digital-humanities/articles/10.3389/fdigh.2015.00002 (accessed on 31 July 2025).
- Schubert, T.; Friedmann, F.; Regenbrecht, H. The Experience of Presence: Factor Analytic Insights. Presence Teleoper. Virtual Environ. 2001, 10, 266–281.
- Lessiter, J.; Freeman, J.; Keogh, E.; Davidoff, J. A Cross-Media Presence Questionnaire: The ITC-Sense of Presence Inventory. Presence Teleoper. Virtual Environ. 2001, 10, 282–297.
- Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and Defining the Experience of Immersion in Games. Int. J. Hum. Comput. Stud. 2008, 66, 641–661.
- Green, M.C.; Brock, T.C. The Role of Transportation in the Persuasiveness of Public Narratives. J. Pers. Soc. Psychol. 2000, 79, 701–721.
- Appel, M.; Gnambs, T.; Richter, T.; Green, M.C. The Transportation Scale–Short Form (TS–SF). Media Psychol. 2015, 18, 243–266.
- Kuijpers, M.M.; Hakemulder, F.; Tan, E.S.; Doicaru, M.M. Exploring Absorbing Reading Experiences: Developing and Validating a Self-Report Scale to Measure Story World Absorption. Sci. Study Lit. 2014, 4, 89–122.
- Bradley, M.M.; Lang, P.J. Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59.
- Sung, Y.; John, K.; Yoon, S.H.; Seifi, H. HapticGen: Generative Text-to-Vibration Model for Streamlining Haptic Design. 2025. Available online: https://hapticgen.hcitech.org/static/pdfs/paper.pdf (accessed on 7 September 2025).
- Ren, Q.; Belpaeme, T. Touched by ChatGPT: Using an LLM to Drive Affective Tactile Interaction. arXiv 2025, arXiv:2501.07224. Available online: https://arxiv.org/html/2501.07224v1 (accessed on 7 September 2025). [CrossRef]
- Khan, M.H.; Altamirano Cabrera, M.; Iarchuk, D.; Mahmoud, Y.; Trinitatova, D.; Tokmurziyev, I.; Tsetserukou, D. HapticVLM: VLM-Driven Texture Recognition Aimed at Intelligent Haptic Interaction. arXiv 2025, arXiv:2505.02569. Available online: https://arxiv.org/html/2505.02569v1 (accessed on 7 September 2025).
- Dong, A. LUMIEA: Enhancing User Engagement in Storytelling: Empowering Personal Narratives through AI-Generated Environments and Tactile Interaction in Mixed Reality. Master’s Thesis, OCAD University, Toronto, ON, Canada, 2025. [Google Scholar]
- Schuemie, M.J.; van der Straaten, P.; Krijn, M.; van der Mast, C.A.P.G. Research on Presence in Virtual Reality: A Survey. Cyberpsychol. Behav. 2001, 4, 183–201. [Google Scholar] [CrossRef]
- Kilteni, K.; Groten, R.; Slater, M. The Sense of Embodiment in Virtual Reality. Presence Teleoper. Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef]
- ISO 13091-1 (2001/2011); Mechanical Vibration—Vibrotactile Perception Thresholds. International Organization for Standardization: Geneva, Switzerland. Available online: https://www.iso.org/popular-standards.html (accessed on 3 October 2025).
- Wobbrock, J.O.; Kane, S.K.; Gajos, K.Z.; Harada, S.; Froehlich, J. Ability-Based Design: Concept, Principles and Examples. ACM Trans. Access. Comput. 2011, 3, 1–27. [Google Scholar] [CrossRef]
- Busselle, R.; Bilandzic, H. Measuring Narrative Engagement. Media Psychol. 2009, 12, 321–347. [Google Scholar] [CrossRef]
- Jingu, A.; Strohmeier, P.; AliAbbasi, E.; Steimle, J. Scene2Hap: Combining LLMs and Physical Modeling for Automatically Generating Vibrotactile Signals for Full VR Scenes. arXiv 2025, arXiv:2504.19611. Available online: https://arxiv.org/pdf/2504.19611 (accessed on 7 September 2025). [CrossRef]
- Kishor, I.; Goyal, P.; Gantla, H.; Goyal, P.; Mamodiya, U. AI-Driven Haptic Technologies Revolutionizing Patient Rehabilitation. In Integrating AI With Haptic Systems for Smarter Healthcare Solutions; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 47–84. [Google Scholar]
- Gehrke, L.; Koselevs, A.; Klug, M.; Gramann, K. Neuroadaptive Haptics: Comparing Reinforcement Learning from Explicit Ratings and Neural Signals for Adaptive XR Systems. arXiv 2025, arXiv:2504.15984. Available online: https://arxiv.org/html/2504.15984v2 (accessed on 7 September 2025). [CrossRef]
- Regimbal, J.; Cooperstock, J.R. Investigating Haptic Co-creation with Reinforcement Learning. In Haptics: Understanding Touch; Technology and Systems; Applications and Interaction; Kajimoto, H., Lopes, P., Pacchierotti, C., Basdogan, C., Gori, M., Lemaire-Semail, B., Marchal, M., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2024; Volume 14769, pp. 448–454. [Google Scholar] [CrossRef]
- Heravi, N.; Culbertson, H.; Okamura, A.M.; Bohg, J. Development and Evaluation of a Learning-Based Model for Real-time Haptic Texture Rendering. arXiv 2024, arXiv:2212.13332. Available online: https://arxiv.org/html/2212.13332v3 (accessed on 7 September 2025). [CrossRef]
- Hernández, Q.; Martins, P.; Tesan, L.; Alfaro, I.; González, D.; Chinesta, F.; Cueto, E. A Neural Network Architecture for Physically-Consistent Haptic Rendering. Virtual Real. 2025, 29, 114. [Google Scholar] [CrossRef]
| Paper | N | Design | Narrative Context | Haptic Intervention (Type/Placement) | Outcomes & Measures | Main Quantitative Result (Direction + p/ES) | Effect Size (Standardized) |
|---|---|---|---|---|---|---|---|
| Clepper et al. [16] | 31 | Within-subjects (experimental) | Haunted house séance | Vibrotactile palm-based array | Immersion (adjective ratings) | Unique, handcrafted signals > generic multiplexed signal (↑ immersion), p < 0.05 | — (insufficient) |
| Sierra Rativa et al. [24] | 51 | Between-subjects (experimental) | Wild animal embodiment (distress event) | Vibrotactile haptic vest | Immersion; Empathy; Perceived pain (custom scales) | Natural body appearance ↑ immersion (p < 0.05); perceived pain correlated with empathy (+) | — (insufficient) |
| Khamis et al. [17] | 22 | Repeated measures (within-subjects) | VR game cutscenes | EMS (arm electrodes) + vibrotactile | Presence, Realism (IPQ) | EMS > vibrotactile & no haptics for presence (p < 0.01) and realism (p < 0.05) | — (insufficient) |
| García-Valle et al. [25] | 23 | Experimental | Post-explosion train station | Haptic vest (tactile + thermal) | Presence, Realism (PQ) | Haptic vest ↑ presence & realism; p < 0.05 | — (insufficient) |
| Hecquard et al. [14] | 38 | Within-subjects (experimental) | Social VR talk with stressed presenter | Vibrotactile + pressure (wristband, belt); physiologically mapped | Empathy, Presence, Anxiety, Engagement (IPQ; prefs) | Sympathetic haptics ↑ empathy (p < 0.001); presence NS; order effects noted | — (insufficient) |
| Ooms et al. [26] | 31 | Within-subjects (experimental) | Close encounters with virtual agents | Vibrotactile + thermal (controller, custom device) | Arousal, Discomfort, Presence (IPQ; ratings) | Vibrotactile heartbeats ↑ perceived arousal & discomfort; presence NS (p NR) | — (p NR) |
| Krogmeier et al. [27] | 8 | Between-subjects (experimental) | Urban crosswalk collision | Vibrotactile haptic vest (virtual human ‘bump’) | Presence (SUS), Embodiment, Arousal (GSR) | Haptics ↑ presence (p = 0.004) & embodiment (p = 0.005); GSR NS | r ≈ 0.98 (presence; from t(3) = 8.0) |
| Desnoyers-Stewart et al. [18] | 108 | Mixed-methods, experimental | Immersive dance performance | Performer-facilitated touch; physical prop | Presence (TPI), Embodiment, Affect, Narrative engagement | Full human touch ↑ presence, embodiment, valence, arousal (p NR) | — (p NR) |
| Ranasinghe et al. [28] | 20 | Experimental | Hot air balloon journey through seasons | Custom HMD add-on: thermal, wind, olfactory | Presence (PQ), Engagement (GEQ), Arousal (EDA, HR) | Multisensory > AV-only for presence & engagement; reduced EDA (frustration); p NR | — (p NR) |
| Sasikumar et al. [29] | 32 | 2 × 2 factorial (experimental) | 360° cinematic battle scene | Wearable vest; non-wearable wind/floor vibrations | Presence (IPQ), Immersion | Haptics ↑ spatial presence (p < 0.001) & realism (p < 0.05); ↑ overall experience | — (insufficient) |

Abbreviations: ↑ = increase relative to the comparison condition; NS = not significant; p NR = p value not reported; — = standardized effect size not computable from the reported statistics.
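The single standardized effect size in the table (Krogmeier et al. [27]) is recovered from the reported t statistic using the standard conversion r = sqrt(t² / (t² + df)). As a quick sanity check (a minimal sketch, not code from any reviewed paper), the reported t(3) = 8.0 does reproduce the tabulated value:

```python
import math

def t_to_r(t: float, df: int) -> float:
    """Convert a t statistic with df degrees of freedom to the
    correlation-style effect size r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t * t / (t * t + df))

# Krogmeier et al. [27]: t(3) = 8.0 for the presence comparison
r = t_to_r(8.0, 3)
print(round(r, 3))  # prints 0.977, i.e. r ≈ 0.98 as tabulated
```

The conversion makes clear why the other rows remain blank: without a test statistic and its degrees of freedom (or means with dispersion measures), no standardized effect size can be derived from a bare p value.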

| Item (Short) | Venue/Type | AI Approach | Haptic Method | Narrative VR? | User Study? | Why Excluded |
|---|---|---|---|---|---|---|
| Sung et al. [40] | CHI ’25 (peer-reviewed) | Generative (text → vibration) | Vibrotactile | No | Yes (non-narrative) | Not narrative VR |
| Ren & Belpaeme [41] | Preprint (arXiv) | LLM-driven patterns | Vibrotactile (wearable sleeve) | No | Yes (non-narrative) | Not narrative VR |
| Khan et al. [42] | Preprint (arXiv) | VLM + CNN (vision → tactile/thermal) | Vibrotactile & thermal | No | No (system eval only) | No VR/narrative user study |
| Dong [43] | Thesis/Exhibition (OCAD U) | LLM + speech | Tactile (unspecified) | Yes (MR) | No | No user study |

| Paper | Immersion | Presence | Narrative Comprehension | Emotional Engagement |
|---|---|---|---|---|
| Clepper et al. [16] | Unique, handcrafted signals rated as more immersive than a generic, multiplexed signal. | Not Measured | Not Measured | Not Measured |
| Sierra Rativa et al. [24] | Natural body appearance led to higher immersion (p < 0.05). Immersion correlated with perceived pain and empathy. | Not Measured | Not Measured | Perceived pain from haptic vest correlated positively with dispositional empathy. |
| Khamis et al. [17] | EMS led to higher realism and involvement scores on IPQ (p < 0.05). | EMS significantly increased spatial presence and sense of “being there” compared to vibrotactile and no haptics (p < 0.01). | Not Measured | Not Measured |
| García-Valle et al. [25] | Not Measured | Haptic vest improved user-reported sense of presence and realism. | Not Measured | Not Measured |
| Hecquard et al. [14] | Not Measured | No significant difference between haptic conditions, but order effect observed. | “Time on presenter” measured as a proxy for engagement. | Sympathetic haptic feedback was significantly preferred for fostering empathy (p < 0.001). |
| Ooms et al. [26] | Not Measured | No significant difference in IPQ scores across haptic conditions. | Not Measured | Vibrotactile heartbeats significantly increased perceived arousal and discomfort. |
| Krogmeier et al. [27] | Not Measured | Haptic feedback significantly increased presence scores (p = 0.004). | Not Measured | No significant difference in objective GSR arousal with small sample. |
| Ranasinghe et al. [28] | Multisensory (including haptics) configuration led to highest engagement scores on GEQ. | Any added sensory modality (wind, thermal) improved presence over AV-only. All modalities combined (ST) provided the highest presence. | Not Measured | Multisensory ST configuration reduced EDA (frustration) and stabilized HR, suggesting improved emotional state. |
| Sasikumar et al. [29] | Haptic feedback significantly increased overall experience scores, a proxy for immersion (p < 0.01). | Both wearable and non-wearable haptics significantly increased spatial presence (p < 0.001) and realism (p < 0.05). | Not Measured | Not Measured |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Perumal, V.; Shah, Z.J. AI-Powered Procedural Haptics for Narrative VR: A Systematic Literature Review. Multimodal Technol. Interact. 2026, 10, 9. https://doi.org/10.3390/mti10010009