
Augmented and Virtual Reality for Smart Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 February 2026

Special Issue Editor


Guest Editor
Department of Computer Science, Kwangwoon University, 447-1, Wolgye-Dong, Nowon-Gu, Seoul 139-701, Republic of Korea
Interests: computer animation; human–computer interfaces; virtual reality; embodied agents

Special Issue Information

Dear Colleagues,

This Special Issue explores the transformative potential of augmented reality (AR) and virtual reality (VR) in developing intelligent and innovative applications across various domains. We aim to highlight cutting-edge research, novel methodologies, and practical applications that leverage AR/VR technologies to create more immersive, interactive, and efficient solutions to real-world challenges.

Topics of particular interest include, but are not limited to, advanced human–computer interaction (HCI) paradigms within AR/VR environments, the integration of artificial intelligence (AI) with AR/VR for enhanced user experiences, and advancements in hardware and software tailored to smart AR/VR applications. We especially welcome contributions on controlling embodied agents within virtual environments, novel approaches to building and rendering augmented/virtual environments, and the development of interactive simulations.

We also invite research exploring the application of AR/VR in critical sectors such as education, healthcare, advanced manufacturing, interactive entertainment, and smart urban planning. We encourage submissions of original research, comprehensive review articles, and case studies that demonstrate the substantial and evolving contributions of AR/VR to the development of smart applications.

Dr. Kang Hoon Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • augmented reality (AR)
  • virtual reality (VR)
  • mixed reality (MR)
  • human–computer interaction (HCI)
  • immersive technologies
  • smart applications
  • ubiquitous computing
  • artificial intelligence (AI) for AR/VR
  • computer vision for AR/VR
  • real-time rendering
  • 3D interaction
  • embodied agents
  • digital twins
  • wearable technology
  • virtual environment creation
  • interactive simulation
  • agent control
  • interactive entertainment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

19 pages, 4414 KB  
Article
Virtual Reality Exposure Therapy for Foreign Language Speaking Anxiety: Evidence from Electroencephalogram Signals and Subjective Self-Report Data
by Amir Pourhamidi, Chanwoo Kim and Hyun K. Kim
Appl. Sci. 2025, 15(23), 12574; https://doi.org/10.3390/app152312574 - 27 Nov 2025
Viewed by 352
Abstract
This study examines the efficacy of virtual reality exposure therapy (VRET) in alleviating foreign language anxiety (FLA) among university students. Although research exists on FLA, interventions have relied on self-report measures, leaving a gap in understanding the relationship between physiological indicators and anxiety reduction. While previous research has explored either the therapeutic potential of virtual reality or the neurophysiological correlates of anxiety through electroencephalography (EEG), few studies have integrated these methodologies within a single experimental framework. This study combined the foreign language classroom anxiety scale (FLCAS) with EEG data to capture subjective and neural responses to anxiety in second language (L2) speaking. The participants (n = 20) completed language speaking tasks both before and after a VR intervention, which exposed them to anxiety-inducing conditions replicating language challenges. During these tasks, brainwave signals were recorded, focusing on frontal alpha asymmetry (FAA) and alpha power (F3, F4), which index neural activity associated with stress and emotional regulation. Results showed that participants experienced a significant decrease (p = 0.017) in self-reported FLCAS scores after VRET. The reduction in FLA was negatively correlated with increased alpha power at F3 (r = −0.55, p = 0.012), suggesting a link between left frontal neural regulation and anxiety reduction. These findings underscore VRET's effectiveness in influencing emotional responses during L2-speaking tasks.
(This article belongs to the Special Issue Augmented and Virtual Reality for Smart Applications)
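For readers working with similar EEG measures, frontal alpha asymmetry as mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not code from the study; the ln(F4) − ln(F3) convention and the sample power values are assumptions for demonstration only.

```python
import numpy as np

def frontal_alpha_asymmetry(alpha_f3, alpha_f4):
    """FAA under the common ln(right) - ln(left) convention.

    Positive values indicate relatively greater right-hemisphere alpha
    power; since alpha power is inversely related to cortical activity,
    this corresponds to relatively greater left frontal activation.
    """
    return np.log(alpha_f4) - np.log(alpha_f3)

# Hypothetical alpha band power values (uV^2) at F3/F4, pre vs. post VRET.
faa_pre = frontal_alpha_asymmetry(2.0, 3.0)
faa_post = frontal_alpha_asymmetry(3.5, 3.0)
```

In practice the alpha power inputs would come from band-limited spectral estimates (e.g., Welch's method over the 8–13 Hz band) at the F3 and F4 electrodes.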

19 pages, 1812 KB  
Article
Open-Data-Driven Unity Digital Twin Pipeline: Automatic Terrain and Building Generation with Unity-Native Evaluation
by Donghyun Woo, Hyunbin Choi, Ruben D. Espejo Jr., Joongrock Kim and Sunjin Yu
Appl. Sci. 2025, 15(21), 11801; https://doi.org/10.3390/app152111801 - 5 Nov 2025
Viewed by 688
Abstract
The creation of simulation-ready digital twins for real-world simulations is hindered by two key challenges: the lack of widely consistent, application-ready open access terrain data and the inadequacy of conventional evaluation metrics for predicting practical, in-engine performance. This paper addresses these challenges by presenting an end-to-end, open-data pipeline that generates simulation-ready terrain and procedural 3D objects for the Unity engine. A central finding of this work is that the architecturally advanced Swin2SR transformer exhibits severe statistical instability when applied to Digital Elevation Model (DEM) data. We analyze this instability and introduce a lightweight, computationally efficient stabilization technique adapted from climate science, quantile mapping (qmap), as a diagnostic remedy that restores the model's physical plausibility without retraining. To overcome the limitations of pixel-based metrics, we validate our pipeline using a three-axis evaluation framework that integrates data-level self-consistency with application-centric usability metrics measured directly within Unity. Experimental results demonstrate that qmap stabilization dramatically reduces Swin2SR's large error (a 45% reduction in macro RMSE, from 47.4 m to 26.1 m). The complete pipeline, using a robust SwinIR model, delivers excellent in-engine performance, achieving a median object grounding error of 0.30 m and real-time frame rates (≈100 FPS). This study provides a reproducible workflow and underscores a crucial insight for applying AI in scientific domains: domain-specific stabilization and application-centric evaluation are indispensable for the reliable deployment of large-scale vision models.
(This article belongs to the Special Issue Augmented and Virtual Reality for Smart Applications)
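The quantile-mapping stabilization described in the abstract can be illustrated with a minimal empirical sketch. This is not the paper's implementation; it assumes simple linearly interpolated empirical quantiles, which is the textbook form of the bias-correction technique borrowed from climate science.

```python
import numpy as np

def quantile_map(pred, ref, n_q=101):
    """Map predicted values onto a reference distribution by empirical
    quantile matching: each prediction is located on the predicted CDF
    and replaced by the reference value at the same quantile."""
    qs = np.linspace(0.0, 1.0, n_q)
    pred_q = np.quantile(pred, qs)   # empirical quantiles of predictions
    ref_q = np.quantile(ref, qs)     # empirical quantiles of the reference
    # Piecewise-linear transfer function from predicted to reference space.
    return np.interp(pred, pred_q, ref_q)
```

Applied to a super-resolved DEM, `pred` would be the model's elevation output and `ref` a trusted elevation reference; the mapping corrects distributional drift (e.g., a systematic bias) without retraining the model.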

15 pages, 1298 KB  
Article
From Overtrust to Distrust: A Simulation Study on Driver Trust Calibration in Conditional Automated Driving
by Heetae Hwang, Juhyeon Kim, Hojoon Kim, Heewon Min and Kyudong Park
Appl. Sci. 2025, 15(21), 11342; https://doi.org/10.3390/app152111342 - 22 Oct 2025
Viewed by 686
Abstract
Conditional automated driving delegates routine control to automation while keeping drivers responsible for supervision and timely takeovers. In this context, safety and usability hinge on calibrated trust, a state between overtrust and distrust that aligns reliance with actual system capabilities. We investigated how calibrated trust relates to concurrent behavior during conditional automation in a driving-simulator study (n = 26). After a brief familiarization block, drivers completed four takeover request (TOR) exposures while performing a non-driving-related task (NDRT). Trust was assessed with a validated multi-item inventory. NDRT engagement was operationalized as successful Surrogate Reference Task (SuRT) clicks per second, and takeover behavior was indexed by TOR reaction time (TOR-RT), measured from TOR onset to the first valid control input. The results showed that higher trust was associated with greater NDRT throughput during automated driving, whereas TOR-RT did not change significantly across repeated exposures, consistent with familiarization. In this sample, we did not observe a systematic penalty in TOR-RT associated with higher trust; however, confidence-interval benchmarks indicate that modest delays cannot be ruled out. This suggests that, after brief onboarding, calibrated trust can coexist with timely safety-critical responses within the limits of our design. These findings tentatively support interface and training strategies that promote calibrated trust (e.g., predictable TOR policies, transparent capability boundaries, and short onboarding) to help drivers navigate between overtrust and distrust.
(This article belongs to the Special Issue Augmented and Virtual Reality for Smart Applications)
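The two behavioral measures operationalized in the abstract, NDRT throughput and TOR-RT, can be sketched as simple functions of logged events. The event format, the validity threshold, and all parameter names below are illustrative assumptions, not details from the study.

```python
def tor_reaction_time(tor_onset, control_inputs, threshold):
    """TOR-RT: time from TOR onset to the first valid control input.

    `control_inputs` is assumed to be a list of (timestamp, magnitude)
    pairs; an input is 'valid' when its magnitude exceeds a deadband
    threshold (e.g., a minimum steering angle or pedal deflection).
    """
    for t, magnitude in control_inputs:
        if t >= tor_onset and abs(magnitude) >= threshold:
            return t - tor_onset
    return None  # no valid takeover recorded

def ndrt_throughput(n_successful_clicks, task_seconds):
    """SuRT engagement as successful clicks per second."""
    return n_successful_clicks / task_seconds
```

For example, with a TOR issued at t = 10.0 s and the first above-threshold input at t = 12.4 s, TOR-RT is 2.4 s; 30 successful SuRT clicks over a 60 s automated segment yield a throughput of 0.5 clicks/s.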
