
Search Results (2,663)

Search Parameters:
Keywords = augmented-reality

39 pages, 12608 KB  
Article
An Audio Augmented Reality Navigation System for Blind and Visually Impaired People Integrating BIM and Computer Vision
by Leonardo Messi, Massimo Vaccarini, Alessandra Corneli, Alessandro Carbonari and Leonardo Binni
Buildings 2025, 15(18), 3252; https://doi.org/10.3390/buildings15183252 - 9 Sep 2025
Abstract
Since statistics show a growing trend in blindness and visual impairment, the development of navigation systems supporting Blind and Visually Impaired People (BVIP) must be urgently addressed. Guiding BVIP to a desired destination across indoor and outdoor settings without relying on a pre-installed infrastructure is an open challenge. While numerous solutions have been proposed by researchers in recent decades, a comprehensive navigation system that can support BVIP mobility in mixed and unprepared environments is still missing. This study proposes a novel navigation system that enables BVIP to request directions and be guided to a desired destination across heterogeneous and unprepared settings. To achieve this, the system applies Computer Vision (CV)—namely an integrated Structure from Motion (SfM) pipeline—for tracking the user and exploits Building Information Modelling (BIM) semantics for planning the reference path to reach the destination. Audio Augmented Reality (AAR) technology is adopted for directional guidance delivery due to its intuitive and non-intrusive nature, which allows seamless integration with traditional mobility aids (e.g., white canes or guide dogs). The developed system was tested on a university campus to assess its performance during both path planning and navigation tasks, the latter involving users in both blindfolded and sighted conditions. Quantitative results indicate that the system computed paths in about 10 milliseconds and effectively guided blindfolded users to their destination, achieving performance comparable to that of sighted users. Remarkably, users in blindfolded conditions completed navigation tests with an average deviation from the reference path within the 0.60-meter shoulder width threshold in 100% of the trials, compared to 75% of the tests conducted by sighted users. These findings demonstrate the system’s accuracy in maintaining navigational alignment within acceptable human spatial tolerances. The proposed approach contributes to the advancement of BVIP assistive technologies by enabling scalable, infrastructure-free navigation across heterogeneous environments.
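
The abstract does not specify the path-planning algorithm, only that paths over the BIM-derived model were computed in about 10 milliseconds. As a purely illustrative sketch (the function name and room graph are hypothetical, not from the paper), a shortest-path search over a room-adjacency graph of the kind BIM semantics can supply might look like:

```python
import heapq

# Hypothetical sketch: Dijkstra over a room-adjacency graph such as one
# derived from BIM semantics. The paper's actual algorithm and graph model
# are not specified in the abstract.
def plan_path(graph, start, goal):
    """Shortest path over {node: [(neighbor, cost), ...]}; returns node list."""
    frontier = [(0.0, start, [start])]  # (accumulated cost, node, path so far)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return None  # goal unreachable

rooms = {"entrance": [("hall", 5)], "hall": [("lab", 3), ("library", 4)],
         "lab": [], "library": []}
# plan_path(rooms, "entrance", "lab") -> ["entrance", "hall", "lab"]
```

Graphs extracted from real BIM models would carry geometric edge weights (e.g., corridor lengths) rather than the toy costs above.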

17 pages, 472 KB  
Systematic Review
Embedding Digital Technologies (AI and ICT) into Physical Education: A Systematic Review of Innovations, Pedagogical Impact, and Challenges
by Dragoș Ioan Tohănean, Ana Maria Vulpe, Raluca Mijaica and Dan Iulian Alexe
Appl. Sci. 2025, 15(17), 9826; https://doi.org/10.3390/app15179826 (registering DOI) - 8 Sep 2025
Abstract
This systematic review investigates the integration of artificial intelligence (AI) and information and communication technologies (ICT) in physical education across all educational levels. Physical education is uniquely centered on motor skill development, physical activity engagement, and health promotion—outcomes that require tailored technological approaches. Through the analysis of recent empirical studies, the main areas where digital technologies contribute to pedagogical innovation are highlighted—such as personalized learning, real-time feedback, student motivation, and educational inclusion. The findings show that AI-assisted tools facilitate differentiated instruction and self-regulated learning by adapting to students’ individual performance levels. Technologies such as wearables and augmented reality (AR)/virtual reality (VR) systems increase engagement and support the participation of students with special educational needs. Furthermore, AI contributes to more efficient and objective assessment of motor performance, coordination, and movement quality. However, significant structural and ethical challenges persist, such as unequal access to digital infrastructure, lack of teacher training, and concerns related to personal data protection. Teachers’ perceptions reflect both openness to the educational potential of AI and caution regarding its practical implementation. The review concludes that AI and ICT can substantially transform physical education, provided that coherent policies, clear ethical frameworks, and investments in teachers’ professional development are in place.

(This article belongs to the Special Issue Applications of Data Science and Artificial Intelligence)

22 pages, 19940 KB  
Article
Augmented Reality in Review Processes for Building Authorities: A Case Study in Vienna
by Alexander Gerger, Harald Urban, Konstantin Höbart, Gabriel Pelikan and Christian Schranz
Buildings 2025, 15(17), 3228; https://doi.org/10.3390/buildings15173228 - 8 Sep 2025
Abstract
The digital transformation of the construction industry is still lagging due to its incomplete implementation throughout the entire building lifecycle. One stakeholder in particular has been largely overlooked thus far: public administration. This study explores the potential integration of augmented reality (AR) into the processes of building authorities, with a particular focus on the review part of the permissions process, taking the City of Vienna as an example. As part of the EU-funded BRISE-Vienna project, an AR platform was developed and tested and an AR application was designed to enhance the transparency, stakeholder communication, and efficiency throughout the process. This study compares the proposed AR-based review process with the traditional plan-based approach, assessing both hard and soft factors. To this end, the durations of the individual process steps were measured, with a particular focus on the time spent by the officers (as a hard factor). In addition, qualitative surveys were conducted to gather the subjective impressions of the test participants (as soft factors). The key findings were a reduction in the officers’ workloads and an improvement in spatial understanding. While the overall review time remained similar, the use of AR reduced officers’ workload by over 40%. Additionally, the test participants stated that AR improved their spatial understanding and alleviated the time pressure within the process. This case study demonstrates the potential of AR in the permissions process and could serve as a model for other cities and countries.

22 pages, 4991 KB  
Review
Meta-Optics for Optical Engineering of Next-Generation AR/VR Near-Eye Displays
by Junoh Lee and Sun-Je Kim
Micromachines 2025, 16(9), 1026; https://doi.org/10.3390/mi16091026 - 7 Sep 2025
Abstract
Meta-optics, enabled by metasurfaces consisting of two-dimensional arrays of meta-atoms, offers ultrathin and multi-functional control over the vectorial wavefront of light at subwavelength scales. This unprecedented optical element technology is a promising candidate to overcome key limitations in augmented reality (AR) and virtual reality (VR) near-eye displays, particularly in simultaneously achieving compact, eyeglass-type form factors with a wide field-of-view, a large eyebox, high resolution, high brightness, and reduced optical aberrations. This review highlights key performance bottlenecks of AR/VR displays from the perspective of optical design, with an emphasis on their practical significance for advancing current technologies. We then examine how meta-optical elements are applied to VR and AR systems by introducing and analyzing the major milestone studies. In the case of AR systems in particular, two different categories, free-space and waveguide-based architectures, are introduced. For each category, we summarize studies using metasurfaces as lenses, combiners, or waveguide couplers. While meta-optics enables unprecedented miniaturization and functionality, it also faces several remaining challenges. The authors suggest potential technological directions to address such issues. By surveying recent progress and design strategies, this review provides a comprehensive perspective on the role of meta-optics in advancing the optical engineering of next-generation AR/VR near-eye displays.

(This article belongs to the Special Issue Advances in Nanophotonics: Physics, Materials, and Applications)

35 pages, 1510 KB  
Systematic Review
Augmented Reality in Education Through Collaborative Learning: A Systematic Literature Review
by Georgios Christoforos Kazlaris, Euclid Keramopoulos, Charalampos Bratsas and Georgios Kokkonis
Multimodal Technol. Interact. 2025, 9(9), 94; https://doi.org/10.3390/mti9090094 (registering DOI) - 6 Sep 2025
Abstract
The rapid advancement of technology in our era has brought significant changes to various fields of human activity, including education. As a key pillar of intellectual and social development, education integrates innovative tools to enrich learning experiences. One such tool is Augmented Reality (AR), which enables dynamic interaction between physical and digital environments. This systematic review, following PRISMA guidelines, examines AR’s use in education, with a focus on enhancing collaborative learning across various educational levels. A total of 29 peer-reviewed studies published between 2010 and 2024 were selected based on defined inclusion criteria, retrieved from major databases such as Scopus, Web of Science, IEEE Xplore, and ScienceDirect. The findings suggest that AR can improve student engagement and foster collaboration through interactive, immersive methods. However, the review also identifies methodological gaps in current research, such as inconsistent sample size reporting, limited information on questionnaires, and the absence of standardized evaluation approaches. This review contributes to the field by offering a structured synthesis of current research, highlighting critical gaps, and proposing directions for more rigorous, transparent, and pedagogically grounded studies on the integration of AR in collaborative learning environments.

24 pages, 4050 KB  
Article
Maritime Operational Intelligence: AR-IoT Synergies for Energy Efficiency and Emissions Control
by Christos Spandonidis, Zafiris Tzioridis, Areti Petsa and Nikolaos Charanas
Sustainability 2025, 17(17), 7982; https://doi.org/10.3390/su17177982 - 4 Sep 2025
Abstract
In response to mounting regulatory and environmental pressures, the maritime sector must urgently improve energy efficiency and reduce greenhouse gas emissions. However, conventional operational interfaces often fail to deliver real-time, actionable insights needed for informed decision-making onboard. This work presents an innovative Augmented Reality (AR) interface integrated with an established shipboard data collection system to enhance real-time monitoring and operational decision-making on commercial vessels. The baseline data acquisition infrastructure is currently installed on over 800 vessels across various ship types, providing a robust foundation for this development. To validate the AR interface’s feasibility and performance, a field trial was conducted on a representative dry bulk carrier. Through hands-free AR smart glasses, crew members access real-time overlays of key performance indicators, such as fuel consumption, engine status, emissions levels, and energy load balancing, directly within their field of view. Field evaluations and scenario-based workshops demonstrate significant gains in energy efficiency (up to 28% faster decision-making), predictive maintenance accuracy, and emissions awareness. The system addresses human–machine interaction challenges in high-pressure maritime settings, bridging the gap between complex sensor data and crew responsiveness. By contextualizing IoT data within the physical environment, the AR-IoT platform transforms traditional workflows into proactive, data-driven practices. This study contributes to the emerging paradigm of digitally enabled sustainable operations and offers practical insights for scaling AR-IoT solutions across global fleets. Findings suggest that such convergence of AR and IoT not only enhances vessel performance but also accelerates compliance with decarbonization targets set by the International Maritime Organization (IMO).

22 pages, 9741 KB  
Article
Augminded: Ambient Mirror Display Notifications
by Timo Götzelmann, Pascal Karg and Mareike Müller
Multimodal Technol. Interact. 2025, 9(9), 93; https://doi.org/10.3390/mti9090093 - 4 Sep 2025
Abstract
This paper presents a new approach for providing contextual information in real-world environments. Our approach is consciously designed to be low-threshold; by using mirrors as augmented reality surfaces, no devices such as AR glasses or smartphones have to be worn or held by the user. It enables technical and non-technical objects in the environment to be visually highlighted and thus subtly draw the attention of people passing by. The presented technology enables the provision of information that can be viewed in more detail by the user if required by slowing down their movement. Users can decide whether this is relevant to them or not. A prototype system was implemented and evaluated through a user study. The results show a high level of acceptance and intuitive usability of the system, with participants being able to reliably perceive and process the information displayed. The technology thus offers promising potential for the unobtrusive and context-sensitive provision of information in various application areas. The paper discusses limitations of the system and outlines future research directions to further optimize the technology and extend its applicability.

16 pages, 3781 KB  
Systematic Review
Augmented Reality in Dental Extractions: Narrative Review and an AR-Guided Impacted Mandibular Third-Molar Case
by Gerardo Pellegrino, Carlo Barausse, Subhi Tayeb, Elisabetta Vignudelli, Martina Casaburi, Stefano Stradiotti, Fabrizio Ferretti, Laura Cercenelli, Emanuela Marcelli and Pietro Felice
Appl. Sci. 2025, 15(17), 9723; https://doi.org/10.3390/app15179723 - 4 Sep 2025
Abstract
Background: Augmented-reality (AR) navigation is emerging as a means of turning pre-operative cone-beam CT data into intuitive, in situ guidance for difficult tooth removal, yet the scattered evidence has never been consolidated nor illustrated with a full clinical workflow. Aims: This study aims to narratively synthesise AR applications limited to dental extractions and to illustrate a full AR-guided clinical workflow. Methods: We performed a PRISMA-informed narrative search (PubMed + Cochrane, January 2015–June 2025) focused exclusively on AR applications in dental extractions and found nine eligible studies. Results: These pilot reports—covering impacted third molars, supernumerary incisors, canines, and cyst-associated teeth—all used marker-less registration on natural dental surfaces and achieved mean target-registration errors below 1 mm with headset set-up times under three minutes; the only translational series (six molars) recorded a mean surgical duration of 21 ± 6 min and a System Usability Scale score of 79. To translate these findings into practice, we describe a case of AR-guided mandibular third-molar extraction. A QR-referenced 3D-printed splint, intra-oral scan, and CBCT were fused to create a colour-coded hologram rendered on a Magic Leap 2 headset. The procedure took 19 min and required only a conservative osteotomy and accurate odontotomy that ended without neurosensory disturbance (VAS pain 2/10 at one week). Conclusions: Collectively, the literature synthesis and clinical demonstration suggest that current AR platforms deliver sub-millimetre accuracy, minimal workflow overhead, and high user acceptance in high-risk extractions while highlighting the need for larger, controlled trials to prove tangible patient benefit.

20 pages, 5218 KB  
Article
A Robust Bilinear Framework for Real-Time Speech Separation and Dereverberation in Wearable Augmented Reality
by Alon Nemirovsky, Gal Itzhak and Israel Cohen
Sensors 2025, 25(17), 5484; https://doi.org/10.3390/s25175484 - 3 Sep 2025
Abstract
This paper presents a bilinear framework for real-time speech source separation and dereverberation tailored to wearable augmented reality devices operating in dynamic acoustic environments. Using the Speech Enhancement for Augmented Reality (SPEAR) Challenge dataset, we perform extensive validation with real-world recordings and review key algorithmic parameters, including the forgetting factor and regularization. To enhance robustness against direction-of-arrival (DOA) estimation errors caused by head movements and localization uncertainty, we propose a region-of-interest (ROI) beamformer that replaces conventional point-source steering. Additionally, we introduce a multi-constraint beamforming design capable of simultaneously preserving multiple sources or suppressing known undesired sources. Experimental results demonstrate that ROI-based steering significantly improves robustness to localization errors while maintaining effective noise and reverberation suppression. However, this comes at the cost of increased high-frequency leakage from both desired and undesired sources. The multi-constraint formulation further enhances source separation with a modest trade-off in noise reduction. The proposed integration of ROI and LCMP within the low-complexity frameworks, validated comprehensively on the SPEAR dataset, offers a practical and efficient solution for real-time audio enhancement in wearable augmented reality systems.

(This article belongs to the Special Issue Sensors and Wearables for AR/VR Applications)
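
The multi-constraint beamforming the abstract refers to is, in textbook form, an LCMP (linearly constrained minimum power) design. A minimal sketch under that assumption (this is the standard closed-form solution, not the authors' bilinear implementation; the array geometry and covariance below are invented for illustration):

```python
import numpy as np

def lcmp_weights(R, C, f):
    """LCMP beamformer: w = R^-1 C (C^H R^-1 C)^-1 f.

    Minimizes output power w^H R w subject to C^H w = f, so each constrained
    direction is exactly preserved (f_i = 1) or nulled (f_i = 0)."""
    Rinv_C = np.linalg.solve(R, C)                      # R^-1 C
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)

M = 4                                    # microphones (toy array)
theta = np.array([0.0, np.pi / 6])       # two constrained directions
C = np.exp(1j * np.outer(np.arange(M), np.sin(theta)))  # steering matrix
f = np.array([1.0, 0.0])                 # preserve source 1, null source 2
R = np.eye(M) + 0.1 * np.ones((M, M))    # toy spatial covariance (PD)
w = lcmp_weights(R, C, f)
# The constraints hold exactly: C^H w = [1, 0]
```

Extending `C` and `f` with extra columns/entries is how multiple sources are simultaneously preserved or suppressed, which is the trade-off against noise reduction the abstract describes.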

14 pages, 1266 KB  
Article
Distance Measurement Between a Camera and a Human Subject Using Statistically Determined Interpupillary Distance
by Marinel Costel Temneanu, Codrin Donciu and Elena Serea
AppliedMath 2025, 5(3), 118; https://doi.org/10.3390/appliedmath5030118 - 3 Sep 2025
Abstract
This paper presents a non-intrusive method for estimating the distance between a camera and a human subject using a monocular vision system and statistically derived interpupillary distance (IPD) values. The proposed approach eliminates the need for individual calibration by utilizing average IPD values based on biological sex, enabling accurate, scalable distance estimation for diverse users. The algorithm, implemented in Python 3.12.11 using the MediaPipe Face Mesh framework, extracts pupil coordinates from facial images and calculates IPD in pixels. A sixth-degree polynomial calibration function, derived from controlled experiments using a uniaxial displacement system, maps pixel-based IPD to real-world distances across three intervals (20–80 cm, 80–160 cm, and 160–240 cm). Additionally, a geometric correction is applied to compensate for in-plane facial rotation. Experimental validation with 26 participants (15 males, 11 females) demonstrates the method’s robustness and accuracy, as confirmed by relative error analysis against ground truth measurements obtained with a Bosch GLM120C laser distance meter. Males exhibited lower relative errors across the intervals (3.87%, 4.75%, and 5.53%), while females recorded higher mean relative errors (6.0%, 6.7%, and 7.27%). The results confirm the feasibility of the proposed method for real-time applications in human–computer interaction, augmented reality, and camera-based proximity sensing.
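
Underlying the paper's sixth-degree polynomial calibration is the pinhole-camera relation Z = f · X_real / x_px: a known real-world IPD that spans fewer pixels implies a larger distance. A simplified sketch under that assumption (the average IPD values and focal length here are illustrative placeholders, not the paper's statistically derived figures):

```python
import math

# Commonly cited average interpupillary distances in mm (illustrative
# assumption; the paper derives its own statistical values by biological sex).
AVG_IPD_MM = {"male": 64.7, "female": 62.3}

def estimate_distance_cm(pupil_left, pupil_right, focal_length_px, sex="male"):
    """Similar-triangles estimate Z = f * X_real / x_px, returned in cm.

    The published method instead fits a sixth-degree polynomial per distance
    interval; this pinhole form only sketches the underlying geometry."""
    # Euclidean pixel distance between the pupils is invariant to in-plane
    # facial rotation, the effect the paper corrects for geometrically.
    ipd_px = math.dist(pupil_left, pupil_right)
    return focal_length_px * AVG_IPD_MM[sex] / ipd_px / 10.0  # mm -> cm

# A 600 px focal length seeing a 64.7 mm IPD span 60 px puts the subject
# at 600 * 64.7 / 60 = 647 mm, i.e., 64.7 cm.
```

In practice the pupil coordinates would come from a landmark detector such as MediaPipe Face Mesh, as the abstract describes.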

13 pages, 692 KB  
Entry
Metaverse Tourism: Opportunities, AI-Driven Marketing, and Ethical Challenges in Virtual Travel
by Dimitra Skandali
Encyclopedia 2025, 5(3), 135; https://doi.org/10.3390/encyclopedia5030135 - 2 Sep 2025
Definition
Metaverse tourism refers to the application of immersive digital technologies—such as virtual reality, augmented reality, and blockchain—within tourism experiences. It enables users to explore destinations, participate in cultural experiences, and interact socially within persistent, 3D virtual environments. While it offers new ways of experiencing tourism beyond physical boundaries, it also introduces novel ethical, technological, and social dilemmas. This entry is written as an encyclopedia entry rather than a systematic review or empirical study. It is intended as a conceptual and integrative overview of current knowledge and debates, informed by peer-reviewed research, industry reports, and illustrative case examples.

(This article belongs to the Collection Encyclopedia of Social Sciences)

19 pages, 15830 KB  
Article
LARS: A Light-Augmented Reality System for Collective Robotic Interaction
by Mohsen Raoufi, Pawel Romanczuk and Heiko Hamann
Sensors 2025, 25(17), 5412; https://doi.org/10.3390/s25175412 - 2 Sep 2025
Abstract
Collective robotics systems hold great potential for future education and public engagement; however, only a few are utilized in these contexts. One reason is the lack of accessible tools to convey their complex, embodied interactions. In this work, we introduce the Light-Augmented Reality System (LARS), an open-source, marker-free, cross-platform tool designed to support experimentation, education, and outreach in collective robotics. LARS employs Extended Reality (XR) to project dynamic visual objects into the physical environment. This enables indirect robot–robot communication through stigmergy while preserving the physical and sensing constraints of the real robots, and enhances robot–human interaction by making otherwise hidden information visible. The system is low-cost, easy to deploy, and platform-independent without requiring hardware modifications. By projecting visible information in real time, LARS facilitates reproducible experiments and bridges the gap between abstract collective dynamics and observable behavior. We demonstrate that LARS can serve both as a research tool and as a means to motivate students and the broader public to engage with collective robotics. Its accessibility and flexibility make it an effective platform for illustrating complex multi-robot interactions, promoting hands-on learning, and expanding public understanding of collective, embodied intelligence.

16 pages, 28961 KB  
Article
Augmented Reality Glasses for Order Picking: A User Study Comparing Numeric Code, 2D-Map, and 3D-Map Visualizations
by Dario Gentile, Francesco Musolino, Mine Dastan and Michele Fiorentino
J 2025, 8(3), 32; https://doi.org/10.3390/j8030032 - 1 Sep 2025
Abstract
It has been shown that Augmented Reality improves the efficiency and well-being of order pickers; however, the adoption of AR Headsets in real contexts is hindered by comfort, safety, and battery duration issues. AR Glasses offer a lightweight alternative, yet they are seldom addressed in the current literature, and there is a lack of user studies exploring suitable visualization designs for these devices. Therefore, this research designs three AR visualizations of target position for order picking: Numeric Code, 2D Map, and 3D Map. They take into account the layout of the repository and the constraints of a small, low-resolution monocular display. These visualizations are tested in a within-subject user study with 30 participants employing AR Glasses in a simulated order-picking task. The Numeric Code visualization resulted in lower Task Completion Time (TCT) and error rates and was also rated as the least cognitively demanding and most preferred. This highlights that, for lightweight devices, simpler graphical interfaces tend to perform better. This study provides empirical insights for the design of innovative AR interfaces in logistics, using industry-relevant devices such as AR Glasses and conducting the evaluation in an extensive laboratory setup.

(This article belongs to the Section Computer Science & Mathematics)

22 pages, 3866 KB  
Article
Development of a BIM-Based Metaverse Virtual World for Collaborative Architectural Design
by David Stephen Panya, Taehoon Kim, Soon Min Hong and Seungyeon Choo
Architecture 2025, 5(3), 71; https://doi.org/10.3390/architecture5030071 - 1 Sep 2025
Abstract
The rapid evolution of the metaverse is driving the development of new digital design tools that integrate Computer-Aided Design (CAD) and Building Information Modeling (BIM) technologies. Core technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) are increasingly combined with BIM to enhance collaboration and innovation in design and construction workflows. However, current BIM–VR integration often remains limited to isolated tasks, lacking persistent, multi-user environments that support continuous project collaboration. This study proposes a BIM-based Virtual World (VW) framework that addresses these limitations by creating an immersive, real-time collaborative platform for the Architecture, Engineering, and Construction (AEC) industry. The system enables multi-user access to BIM data through avatars, supports direct interaction with 3D models and associated metadata, and maintains a persistent virtual environment that evolves alongside project development. Key functionalities include interactive design controls, real-time decision-making support, and integrated training capabilities. A prototype was developed using Unreal Engine and supporting technologies to validate the approach. The results demonstrate improved interdisciplinary collaboration, reduced information loss during design iteration, and enhanced stakeholder engagement. This research highlights the potential of BIM-based Virtual Worlds to transform AEC collaboration by fostering an open, scalable ecosystem that bridges immersive environments with data-driven design and construction processes.

(This article belongs to the Special Issue Architecture in the Digital Age)

38 pages, 4536 KB  
Review
Emerging Technologies in Augmented Reality (AR) and Virtual Reality (VR) for Manufacturing Applications: A Comprehensive Review
by Nitol Saha, Victor Gadow and Ramy Harik
J. Manuf. Mater. Process. 2025, 9(9), 297; https://doi.org/10.3390/jmmp9090297 - 1 Sep 2025
Abstract
As manufacturing processes evolve towards greater automation and efficiency, the integration of augmented reality (AR) and virtual reality (VR) technologies has emerged as a transformative approach that offers innovative solutions to various challenges in manufacturing applications. This comprehensive review explores the recent technological advancements and applications of AR and VR within the context of manufacturing. This review also encompasses the utilization of AR and VR technologies across different stages of the manufacturing process, including design, prototyping, assembly, training, maintenance, and quality control. Furthermore, this review highlights the recent developments in hardware and software components that have facilitated the adoption of AR and VR in manufacturing environments. This comprehensive literature review identifies the emerging technologies that are driving AR and VR technology toward technological maturity for implementation in manufacturing applications. Finally, this review discusses the major difficulties in implementing AR and VR technologies in the manufacturing sectors.

(This article belongs to the Special Issue Smart Manufacturing in the Era of Industry 4.0, 2nd Edition)
