Extended Reality (XR) and User Experience (UX) Technologies

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 October 2025 | Viewed by 1922

Special Issue Editors


Dr. Katashi Nagao
Guest Editor
Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan
Interests: computer-assisted education; user interface; simulation and training; robotics; personal mobility; machine learning; virtual and augmented reality

Dr. Hiroyuki Mitsuhara
Guest Editor
Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
Interests: XR; digital games; HCI; interaction design (IxD) in education

Special Issue Information

Dear Colleagues,

We are pleased to announce a forthcoming Special Issue on "Extended Reality (XR) and User Experience (UX) Technologies", and we cordially invite researchers, practitioners, and developers to submit their original work.

Extended Reality (XR), encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), is rapidly transforming how humans interact with digital content and the physical world. As XR technologies continue to evolve, so do the challenges and opportunities for creating engaging, intuitive, and accessible user experiences.

This Special Issue aims to explore cutting-edge research, novel applications, and theoretical advances at the intersection of XR and user experience (UX). We seek submissions that investigate new interaction techniques, usability and accessibility considerations, system evaluations, content design, human–computer interaction (HCI), and the psychological and social impacts of XR systems.

Topics of interest include, but are not limited to, the following:

  • Novel user interfaces and interaction paradigms in XR;
  • UX evaluation methods and metrics for immersive environments;
  • Adaptive and personalized XR experiences;
  • Haptics, audio, and multimodal feedback in XR;
  • User behavior and engagement in virtual and augmented environments;
  • Accessibility and inclusivity in XR design;
  • Applications in education, healthcare, entertainment, and industry;
  • Cognitive and emotional aspects of XR user experience;
  • Ethical and privacy considerations in XR interactions.

Submission Guidelines and Important Dates

(Included here are submission deadlines, review process timelines, publication dates, and instructions or links to submission platforms.)

We welcome original research articles, case studies, and survey papers. All submissions will undergo a rigorous peer-review process.

Join us in shaping the future of XR and user experience by contributing your work to this Special Issue. We look forward to receiving your submissions.

Dr. Katashi Nagao
Dr. Hiroyuki Mitsuhara
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • extended reality
  • user experience
  • human–computer interaction
  • body/hand tracking
  • haptics
  • emotional interaction
  • psychophysiology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

25 pages, 5055 KiB  
Article
FlickPose: A Hand Tracking-Based Text Input System for Mobile Users Wearing Smart Glasses
by Ryo Yuasa and Katashi Nagao
Appl. Sci. 2025, 15(15), 8122; https://doi.org/10.3390/app15158122 - 22 Jul 2025
Viewed by 473
Abstract
With the growing use of head-mounted displays (HMDs) such as smart glasses, text input remains a challenge, especially in mobile environments. Conventional methods like physical keyboards, voice recognition, and virtual keyboards each have limitations—physical keyboards lack portability, voice input has privacy concerns, and virtual keyboards struggle with accuracy due to a lack of tactile feedback. FlickPose is a novel text input system designed for smart glasses and mobile HMD users, integrating flick-based input and hand pose recognition. It features two key selection methods: the touch-panel method, where users tap a floating UI panel to select characters, and the raycast method, where users point a virtual ray from their wrist and confirm input via a pinch motion. FlickPose uses five left-hand poses to select characters. A machine learning model trained for hand pose recognition outperforms Random Forest and LightGBM models in accuracy and consistency. FlickPose was tested against the standard virtual keyboard of Meta Quest 3 in three tasks (hiragana, alphanumeric, and kanji input). Results showed that raycast had the lowest error rate, reducing unintended key presses; touch-panel had more deletions, likely due to misjudgments in key selection; and frequent HMD users preferred raycast, as it maintained input accuracy while allowing users to monitor their text. A key feature of FlickPose is adaptive tracking, which ensures the keyboard follows user movement. While further refinements in hand pose recognition are needed, the system provides an efficient, mobile-friendly alternative for HMD text input. Future research will explore real-world application compatibility and improve usability in dynamic environments. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
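
Editor's note: the abstract above turns on recognizing five left-hand poses from tracked hand data and comparing the trained recognizer against Random Forest and LightGBM baselines. The snippet below is a minimal, hypothetical sketch of how such a baseline comparison might be set up on landmark-style features; the feature layout (21 landmarks x 3 coordinates), the synthetic data, and the pose labels are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative baseline only: train a Random Forest to classify five left-hand
# poses from hand-landmark features (assumed 21 landmarks x 3 coordinates = 63
# values per frame). The data here is synthetic; a real system would use
# landmark vectors from the HMD's hand-tracking API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_poses = 2000, 63, 5

X = rng.normal(size=(n_samples, n_features))   # placeholder landmark features
y = rng.integers(0, n_poses, size=n_samples)   # placeholder pose labels (0-4)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("pose classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
# A LightGBM baseline could be swapped in via lightgbm.LGBMClassifier, which
# exposes the same fit/predict interface.
```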

36 pages, 6020 KiB  
Article
“It Felt Like Solving a Mystery Together”: Exploring Virtual Reality Card-Based Interaction and Story Co-Creation Collaborative System Design
by Yaojiong Yu, Mike Phillips and Gianni Corino
Appl. Sci. 2025, 15(14), 8046; https://doi.org/10.3390/app15148046 - 19 Jul 2025
Viewed by 427
Abstract
Virtual reality interaction design and story co-creation design for multiple users is an interdisciplinary research field that merges human–computer interaction, creative design, and virtual reality technologies. Story co-creation design enables multiple users to collectively generate and share narratives, allowing them to contribute to the storyline, modify plot trajectories, and craft characters, thereby facilitating a dynamic storytelling experience. Through advanced virtual reality interaction design, collaboration and social engagement can be further enriched to encourage active participation. This study investigates the facilitation of narrative creation and enhancement of storytelling skills in virtual reality by leveraging existing research on story co-creation design and virtual reality technology. Subsequently, we developed and evaluated the virtual reality card-based collaborative storytelling platform Co-Relay. By analyzing interaction data and user feedback obtained from user testing and experimental trials, we observed substantial enhancements in user engagement, immersion, creativity, and fulfillment of emotional and social needs compared to a conventional web-based storytelling platform. The primary contribution of this study lies in demonstrating how the incorporation of story co-creation can elevate storytelling proficiency, plot development, and social interaction within the virtual reality environment. Our novel methodology offers a fresh outlook on the design of collaborative narrative creation in virtual reality, particularly by integrating participatory multi-user storytelling platforms that blur the traditional boundaries between creators and audiences, as well as between fiction and reality. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

19 pages, 7664 KiB  
Article
Off-Cloud Anchor Sharing Framework for Multi-User and Multi-Platform Mixed Reality Applications
by Aida Vidal-Balea, Oscar Blanco-Novoa, Paula Fraga-Lamas and Tiago M. Fernández-Caramés
Appl. Sci. 2025, 15(13), 6959; https://doi.org/10.3390/app15136959 - 20 Jun 2025
Viewed by 557
Abstract
This article presents a novel off-cloud anchor sharing framework designed to enable seamless device interoperability for Mixed Reality (MR) multi-user and multi-platform applications. The proposed framework enables local storage and synchronization of spatial anchors, offering a robust and autonomous alternative for real-time collaborative experiences. Such anchors are digital reference points tied to specific positions in the physical world that allow virtual content in MR applications to remain accurately aligned to the real environment, making them an essential tool for building collaborative MR experiences. This anchor synchronization system takes advantage of local anchor storage to optimize the sharing process and to exchange anchors only when necessary. The framework integrates Unity, Mirror, and the Mixed Reality Toolkit (MRTK) to support seamless interoperability between Microsoft HoloLens 2 devices and desktop computers, with the addition of external IoT interaction. As a proof of concept, a collaborative multiplayer game was developed to illustrate the multi-platform and anchor sharing capabilities of the proposed system. The experiments were performed in Local Area Network (LAN) and Wide Area Network (WAN) environments; they highlight the importance of efficient anchor management in large-scale MR environments and demonstrate the effectiveness of the system in handling anchor transmission across varying levels of spatial complexity. Specifically, the obtained results show that the developed framework achieves anchor transmission times that start at around 12.7 s on the tested LAN/WAN networks for small anchor setups and rise to roughly 86.02–87.18 s for complex physical scenarios where room-sized anchors are required. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
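
Editor's note: the framework's central idea, as described above, is to keep spatial anchors in local storage and to transmit an anchor only when a device's copy is missing or outdated. The Python sketch below illustrates that "exchange only when necessary" logic; the Anchor fields, the version counter, and the peer interface are hypothetical stand-ins, not the paper's Unity/Mirror/MRTK implementation.

```python
# Conceptual sketch of off-cloud anchor sharing: each device keeps anchors in a
# local store and fetches the serialized anchor from a peer only when it is
# missing or outdated. All names and fields here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Anchor:
    anchor_id: str
    version: int     # incremented whenever the anchor is re-exported
    payload: bytes   # serialized spatial anchor blob (the expensive part)


@dataclass
class LocalAnchorStore:
    anchors: dict[str, Anchor] = field(default_factory=dict)

    def needs_update(self, anchor_id: str, remote_version: int) -> bool:
        local = self.anchors.get(anchor_id)
        return local is None or local.version < remote_version

    def save(self, anchor: Anchor) -> None:
        self.anchors[anchor.anchor_id] = anchor


def sync_from_peer(store: LocalAnchorStore,
                   peer_catalog: dict[str, int],
                   fetch_anchor: Callable[[str], Anchor]) -> list[str]:
    """Pull only the anchors whose advertised version is newer than the local copy.

    peer_catalog maps anchor_id -> version advertised by the peer;
    fetch_anchor(anchor_id) performs the actual (costly) anchor transfer.
    """
    transferred = []
    for anchor_id, remote_version in peer_catalog.items():
        if store.needs_update(anchor_id, remote_version):
            store.save(fetch_anchor(anchor_id))   # transfer happens only here
            transferred.append(anchor_id)
    return transferred
```

Exchanging lightweight (anchor_id, version) catalogs first means the heavy payload transfer is triggered only for anchors a device is actually missing, which is the kind of selective sharing the abstract describes.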

Other

16 pages, 1318 KiB  
Perspective
Shared Presence via XR Communication and Interaction Within a Dynamically Updated Digital Twin of a Smart Space: Conceptual Framework and Research Challenges
by Lea Skorin-Kapov, Maja Matijasevic, Ivana Podnar Zarko, Mario Kusek, Darko Huljenic, Vedran Skarica, Darian Skarica and Andrej Grguric
Appl. Sci. 2025, 15(16), 8838; https://doi.org/10.3390/app15168838 - 11 Aug 2025
Viewed by 153
Abstract
The integration of emerging eXtended Reality (XR) technologies, digital twins (DTs), smart environments, and advanced mobile and wireless networks is set to enable novel forms of immersive interaction and communication. This paper proposes a high-level conceptual framework for shared presence via XR-based communication and interaction within a virtual reality (VR) representation of the digital twin of a smart space. The digital twin is continuously updated and synchronized—both spatially and temporally—with a physical smart space equipped with sensors and actuators. This architecture enables interactive experiences and fosters a sense of co-presence between a local user in the smart physical environment utilizing augmented reality (AR) and a remote VR user engaging through the digital counterpart. We present our lab deployment architecture used as a basis for ongoing experimental work related to testing and integrating functionalities defined in the conceptual framework. Finally, key technology requirements and research challenges are outlined, aiming to provide a foundation for future research efforts in immersive, interconnected XR systems. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
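
Editor's note: the framework above depends on a digital twin that stays continuously synchronized with the sensors and actuators of the physical smart space while mirroring every state change to both the local AR user and the remote VR user. The sketch below is a minimal, hypothetical illustration of that update-and-fan-out loop; the device identifiers, state fields, and subscriber interface are assumptions for illustration and are not taken from the paper's lab deployment.

```python
# Minimal, hypothetical sketch of a digital-twin update loop: a sensor reading
# from the physical smart space updates the twin's state, and every subscribed
# XR client (local AR overlay, remote VR replica) is notified of the change.
import time
from typing import Callable

StateListener = Callable[[str, dict], None]


class DigitalTwin:
    def __init__(self) -> None:
        self.state: dict[str, dict] = {}          # device_id -> latest reported state
        self.listeners: list[StateListener] = []  # subscribed XR clients

    def subscribe(self, listener: StateListener) -> None:
        self.listeners.append(listener)

    def ingest(self, device_id: str, reading: dict) -> None:
        """Apply a sensor reading and fan it out to all subscribed clients."""
        stamped = {**reading, "timestamp": time.time()}
        self.state[device_id] = stamped
        for notify in self.listeners:
            notify(device_id, stamped)


# Example wiring: an AR headset and a remote VR client both mirror the twin.
twin = DigitalTwin()
twin.subscribe(lambda dev, s: print(f"[AR overlay] {dev} -> {s}"))
twin.subscribe(lambda dev, s: print(f"[VR replica] {dev} -> {s}"))
twin.ingest("ceiling_light_1", {"on": True, "brightness": 0.8})
```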