Search Results (2,025)

Search Parameters:
Keywords = augmented reality (AR)

32 pages, 684 KB  
Systematic Review
Artificial Intelligence (AI) in Construction Safety: A Systematic Literature Review
by Sharmin Jahan Badhan and Reihaneh Samsami
Buildings 2025, 15(22), 4084; https://doi.org/10.3390/buildings15224084 - 13 Nov 2025
Abstract
The construction industry remains among the most hazardous sectors globally, facing persistent safety challenges despite advancements in occupational health and safety (OHS) measures. The objective of this study is to systematically analyze the use of Artificial Intelligence (AI) in construction safety management and to identify the most effective techniques, data modalities, and validation practices. The method involved a systematic review of 122 peer-reviewed studies published between 2016 and 2025 and retrieved from major academic databases. The selected studies were classified by AI technologies, including Machine Learning (ML), Deep Learning (DL), Computer Vision (CV), Natural Language Processing (NLP), and the Internet of Things (IoT), and by their applications in real-time hazard detection, predictive analytics, and automated compliance monitoring. The results show that DL and CV models, particularly Convolutional Neural Network (CNN) and You Only Look Once (YOLO)-based frameworks, are the most frequently implemented for personal protective equipment recognition and proximity monitoring, while ML approaches such as Support Vector Machines (SVM) and ensemble algorithms perform effectively on structured and sensor-based data. Major challenges identified include data quality, generalizability, interpretability, privacy, and integration with existing workflows. The paper concludes that explainable, scalable, and user-centric AI integrated with Building Information Modeling (BIM), Augmented Reality (AR) or Virtual Reality (VR), and wearable technologies is essential to enhance safety performance and achieve sustainable digital transformation in construction environments.

20 pages, 4147 KB  
Article
An Augmented Reality Mobile App for Recognizing and Visualizing Museum Exhibits
by Madina Ipalakova, Zhiger Bolatov, Yevgeniya Daineko, Dana Tsoy, Damir Khojayev and Ekaterina Reznikova
Computers 2025, 14(11), 492; https://doi.org/10.3390/computers14110492 - 13 Nov 2025
Viewed by 93
Abstract
Augmented reality (AR) offers a novel way to enrich museum visits by deepening engagement and enhancing learning. This study presents the development of a mobile application for the Abylkhan Kasteyev State Museum of Arts (Almaty, Kazakhstan), designed to recognize and visualize exhibits through AR. Using computer vision and machine learning, the application identifies artifacts via a smartphone camera and overlays interactive 3D models in an augmented environment. The system architecture integrates Flutter plugins for AR rendering, YOLOv8 for exhibit recognition, and a cloud database for dynamic content updates. This combination enables an immersive educational experience, allowing visitors to interact with digital reconstructions and multimedia resources linked to the exhibits. Pilot testing in the museum demonstrated recognition accuracy above 97% and received positive feedback on usability and engagement. These results highlight the potential of AR-based mobile applications to increase accessibility to cultural heritage and enhance visitor interaction. Future work will focus on enlarging the exhibit database, refining performance, and incorporating additional interactive features such as multi-user collaboration, remote access, and gamified experiences.

15 pages, 1364 KB  
Article
AT-TSVM: Improving Transmembrane Protein Inter-Helical Residue Contact Prediction Using Active Transfer Transductive Support Vector Machines
by Bander Almalki, Aman Sawhney and Li Liao
Int. J. Mol. Sci. 2025, 26(22), 10972; https://doi.org/10.3390/ijms262210972 - 12 Nov 2025
Viewed by 87
Abstract
Alpha-helical transmembrane proteins are a specific type of membrane protein consisting of helices that span the entire cell membrane. They make up almost a third of all transmembrane (TM) proteins and play a significant role in various cellular activities. The structural prediction of these proteins is crucial to understanding how they behave inside the cell and thus to identifying their functions. Despite their importance, only a small portion of TM proteins have had their structure determined experimentally. Inter-helical residue contact prediction is one of the most successful computational approaches for reducing the TM protein fold search space and generating an acceptable 3D structure. Most current TM protein residue contact predictors use features extracted only from protein sequences. However, these features alone deliver a low-accuracy contact map and, as a result, a poor 3D structure. Although some models explore leveraging features extracted from protein 3D structures to produce a more representative contact model, such an approach remains theoretical: it assumes the structure features are available, whereas in practice they exist only in the training data, not in the test data, whose structure is precisely what needs to be predicted. This presents a new transfer learning paradigm: training examples contain two sets of features, but test examples contain only the less informative set. In this work, we propose a novel approach that trains a model on examples containing both sequence and atomic features and applies it to test data containing only sequence features, while still improving contact prediction over using sequence features alone. Specifically, our method, AT-TSVM (Active Transfer for Transductive Support Vector Machines), augments conventional transductive learning with transfer and active learning to enhance contact prediction accuracy. Results on a benchmark dataset show that our method boosts contact prediction accuracy by an average of 5 to 6% over an inductive classifier and 2.5 to 4% over a transductive classifier.
(This article belongs to the Special Issue Membrane Proteins: Structure, Function, and Drug Discovery)
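The transductive idea in the abstract above can be illustrated in miniature. The sketch below is a simplified transductive self-training loop using scikit-learn's `SVC` on synthetic "sequence feature" data, not the authors' AT-TSVM: an inductive classifier is trained first, then confident pseudo-labels from the unlabeled test set are added and the classifier is retrained.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic "sequence features": two well-separated Gaussian classes
X_train = np.vstack([rng.normal(-1, 1, (40, 5)), rng.normal(1, 1, (40, 5))])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([rng.normal(-1, 1, (20, 5)), rng.normal(1, 1, (20, 5))])

# Step 1: inductive classifier trained on the labeled data only
clf = SVC(probability=True, random_state=0).fit(X_train, y_train)

# Step 2: transductive-style self-training -- append confident test
# predictions as pseudo-labels and retrain on the augmented set
proba = clf.predict_proba(X_test)
confident = proba.max(axis=1) > 0.9
X_aug = np.vstack([X_train, X_test[confident]])
y_aug = np.concatenate([y_train, proba[confident].argmax(axis=1)])
clf2 = SVC(probability=True, random_state=0).fit(X_aug, y_aug)

pred = clf2.predict(X_test)
```

The real method additionally transfers information from atomic features seen only at training time and selects points actively; this sketch shows only the transductive retraining step.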

54 pages, 8629 KB  
Article
E-Commerce Meets Emerging Technologies: An Overview of Research Characteristics, Themes, and Trends
by Andra Sandu, Liviu-Adrian Cotfas, Corina Ioanăș, Irina-Daniela Cișmașu and Camelia Delcea
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 320; https://doi.org/10.3390/jtaer20040320 - 11 Nov 2025
Viewed by 490
Abstract
The rise of e-commerce platforms has revolutionized the way consumers interact with the market. In our digital world, thanks to the evolution of technology, people can easily purchase the products they want, regardless of time and place, directly from their personal devices. This has considerably improved user experience, saving both time and money and avoiding in-store congestion. At the same time, emerging technologies such as machine learning, artificial intelligence, augmented reality, and blockchain have contributed substantially to optimizing e-commerce platforms by enhancing process efficiency, better understanding users’ needs, and offering personalized solutions. Therefore, the present bibliometric investigation aims to provide a comprehensive overview of the research domain: electronic commerce exploration using emerging technologies. Based on a dataset collected from the Web of Science database, the study reveals key details of the field, research characteristics, main themes, and current trends. Within the analysis, the R tool Biblioshiny 4.2.1 has been used to create tables, graphs, and visual representations. The high importance of the domain, together with the significant interest among academics in publishing papers in this area, is validated by the annual growth rate of 44.65%, as well as by cross-validation analyses performed in VOSviewer 1.6.20 and CiteSpace 6.3.R1, along with topic analysis performed through Latent Dirichlet Allocation and BERTopic. The results of this research represent valuable information for the scientific community, authorities, and companies oriented toward e-commerce platforms, since crucial details about market trends, the domain’s impact, and key contributions are presented.
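As a small illustration of the topic-analysis step mentioned above, the sketch below fits Latent Dirichlet Allocation with scikit-learn on a toy corpus. The snippets are invented for illustration; they are not the paper's Web of Science dataset, and this is not its Biblioshiny/BERTopic pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus of abstract-like snippets (hypothetical)
docs = [
    "machine learning recommendation improves e-commerce conversion",
    "blockchain secures e-commerce payment and supply chain",
    "augmented reality product preview increases purchase intent",
    "deep learning models personalize online shopping recommendation",
    "smart contracts on blockchain enable trusted marketplace payment",
    "virtual and augmented reality enrich the online shopping experience",
]

# Bag-of-words counts, then LDA with three latent topics
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
# lda.components_ holds per-topic word weights; argsort gives top terms
```

On a real corpus, the top-weighted terms of each row of `lda.components_` would label the recurring research themes.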

19 pages, 11078 KB  
Article
A Unified Framework for Cross-Domain Space Drone Pose Estimation Integrating Offline Domain Generalization with Online Domain Adaptation
by Yingjian Yu, Zhang Li and Qifeng Yu
Drones 2025, 9(11), 774; https://doi.org/10.3390/drones9110774 - 7 Nov 2025
Viewed by 305
Abstract
In this paper, we present a Unified Framework for cross-domain Space drone Pose Estimation (UF-SPE), addressing the simulation-to-reality gap that limits the deployment of deep learning models in real space missions. The proposed UF-SPE framework integrates offline domain generalization with online unsupervised domain adaptation. During offline training, the model relies exclusively on synthetic images; it employs advanced augmentation techniques and a multi-task architecture equipped with Domain Shifting Uncertainty modules to improve the learning of domain-invariant features. In the online phase, normalization layers are fine-tuned on unlabeled real-world imagery via entropy minimization, allowing the system to adapt to target-domain distributions without manual labels. Experiments on the SPEED+ benchmark demonstrate that UF-SPE achieves competitive accuracy with just 12.9 M parameters, outperforming a comparable lightweight baseline by 37.5% in pose estimation accuracy. The results validate the framework’s efficacy and efficiency for robust cross-domain space drone pose estimation, indicating promise for applications such as on-orbit servicing, debris removal, and autonomous rendezvous.
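The online adaptation step described above, fine-tuning only normalization layers by entropy minimization, resembles Tent-style test-time adaptation. A minimal PyTorch sketch on a toy network follows; the real UF-SPE backbone, layer types, and hyperparameters are assumptions here, not taken from the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for the pose-estimation backbone (assumption: the real
# network is far larger; only the adaptation mechanism is shown)
model = nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16),
                      nn.ReLU(), nn.Linear(16, 4))

# Freeze everything, then re-enable only normalization-layer affine
# parameters, so adaptation touches nothing else
for p in model.parameters():
    p.requires_grad_(False)
norm_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm1d):
        m.requires_grad_(True)
        norm_params += [m.weight, m.bias]

opt = torch.optim.Adam(norm_params, lr=1e-3)

def entropy_loss(logits):
    # Mean Shannon entropy of the softmax predictions over the batch
    p = logits.softmax(dim=1)
    return -(p * p.log().clamp(min=-20)).sum(dim=1).mean()

# One adaptation step on an unlabeled "target domain" batch
x = torch.randn(32, 8)
loss = entropy_loss(model(x))
opt.zero_grad()
loss.backward()
opt.step()
```

Minimizing prediction entropy nudges the normalization statistics and affine parameters toward confident outputs on the target domain without needing any labels.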

31 pages, 1407 KB  
Article
Performance Analysis of Unmanned Aerial Vehicle-Assisted and Federated Learning-Based 6G Cellular Vehicle-to-Everything Communication Networks
by Abhishek Gupta and Xavier Fernando
Drones 2025, 9(11), 771; https://doi.org/10.3390/drones9110771 - 7 Nov 2025
Viewed by 452
Abstract
The paradigm of cellular vehicle-to-everything (C-V2X) communications assisted by unmanned aerial vehicles (UAVs) is poised to revolutionize the future of sixth-generation (6G) intelligent transportation systems, as outlined by the International Mobile Telecommunications (IMT)-2030 vision. This integration of UAV-assisted C-V2X communications is set to enhance mobility and connectivity, creating a smarter and more reliable autonomous transportation landscape. UAV-assisted C-V2X networks enable hyper-reliable, low-latency vehicular communications for 6G applications including augmented reality, immersive reality, virtual reality, real-time holographic mapping support, and futuristic infotainment services. This paper presents a Markov chain model to study a Third-Generation Partnership Project (3GPP)-specified C-V2X network communicating with a flying UAV for task offloading in a Federated Learning (FL) environment. We evaluate the impact of factors such as model update frequency, queue backlog, and UAV energy consumption on different types of communication latency. Additionally, we compare the end-to-end latency in the FL environment against the latency of conventional data offloading. This is achieved by considering cooperative perception messages (CPMs), which are triggered by random events, and basic safety messages (BSMs), which are transmitted periodically. Simulation results demonstrate that optimizing the transmission intervals yields a lower average delay. For both scenarios, the optimal policy aims to optimize UAV energy consumption, minimize the cumulative queuing backlog, and maximize utilization of the UAV’s available battery power. We also find that the queuing delay can be controlled by adjusting the optimal policy and the value function in the relative value iteration (RVI). Moreover, the communication latency in the FL environment is comparable to that in the gross data offloading environment, based on Kullback–Leibler (KL) divergence.
(This article belongs to the Special Issue Advances in UAV Networks Towards 6G)
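The KL-divergence comparison of latency distributions mentioned above can be sketched in a few lines. The binned histograms below are hypothetical, illustrative numbers, not the paper's simulation results.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Kullback-Leibler divergence D(P || Q) for discrete distributions;
    # eps guards against zero bins before normalization
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical binned latency histograms for the FL pipeline vs.
# gross data offloading (illustrative values only)
fl_latency      = [0.10, 0.30, 0.35, 0.15, 0.10]
offload_latency = [0.08, 0.27, 0.37, 0.17, 0.11]

d = kl_divergence(fl_latency, offload_latency)
```

A small `d` indicates the two latency distributions are close, which is the sense in which the abstract calls the FL latency "comparable" to gross offloading.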

21 pages, 1186 KB  
Article
Reinforcement Learning-Driven Prosthetic Hand Actuation in a Virtual Environment Using Unity ML-Agents
by Christian Done, Jaden Palmer, Kayson Oakey, Atulan Gupta, Constantine Thiros, Janet Franklin and Marco P. Schoen
Virtual Worlds 2025, 4(4), 53; https://doi.org/10.3390/virtualworlds4040053 - 6 Nov 2025
Viewed by 186
Abstract
Modern myoelectric prostheses remain difficult to control, particularly during rehabilitation, leading to high abandonment rates in favor of static devices. This highlights the need for advanced controllers that can automate some motions. This study presents an end-to-end framework coupling deep reinforcement learning with augmented reality (AR) for prosthetic actuation. A 14-degree-of-freedom hand was modeled in Blender and deployed in Unity. Two reinforcement learning agents were trained with distinct reward functions for a grasping task: (i) a discrete, Boolean reward with contact penalties and (ii) a continuous distance-based reward between the joints and the target object. Each agent trained for 3 × 10⁷ timesteps at 50 Hz. The Boolean reward function performed poorly by entropy and convergence metrics, while the continuous reward function achieved success. The trained agent using the continuous reward was integrated into a dynamic AR scene, where a user controlled the prosthesis via a myoelectric armband while the grasping motion was actuated automatically. This framework demonstrates potential for assisting patients by automating certain movements to reduce initial control difficulty and improve rehabilitation outcomes.
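The two reward designs contrasted above can be sketched compactly. The abstract does not give the exact shaping used in the Unity ML-Agents setup, so the functions below are hypothetical stand-ins that capture the sparse-vs-dense distinction.

```python
import math

def continuous_reward(joint_positions, target, scale=1.0):
    """Dense reward that grows as the hand's joints approach the target.

    Hypothetical stand-in for a distance-based reward: mean Euclidean
    joint-to-target distance mapped through exp(-scale * d), so the
    reward lies in (0, 1] and equals 1 at zero distance.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    mean_d = sum(dist(j, target) for j in joint_positions) / len(joint_positions)
    return math.exp(-scale * mean_d)

def boolean_reward(grasped, contact_penalty=0.0):
    # Sparse alternative: fixed bonus on a successful grasp, minus any
    # accumulated contact penalties
    return (1.0 if grasped else 0.0) - contact_penalty

near = continuous_reward([(0.1, 0.0, 0.0), (0.0, 0.1, 0.0)], (0.0, 0.0, 0.0))
far  = continuous_reward([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], (0.0, 0.0, 0.0))
```

Dense rewards like `continuous_reward` give the agent a gradient signal on every step, which is consistent with the reported finding that the continuous variant converged while the Boolean one did not.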

13 pages, 366 KB  
Systematic Review
Application of Immersive Virtual Reality in the Training of Future Teachers: Scope and Challenges
by Carlos Arriagada-Hernández, José Pablo Fuenzalida De Ferrari, Lorena Jara-Tomckowiack, Felipe Caamaño-Navarrete and Gerardo Fuentes-Vilugrón
Virtual Worlds 2025, 4(4), 51; https://doi.org/10.3390/virtualworlds4040051 - 3 Nov 2025
Viewed by 524
Abstract
Introduction: The integration of Immersive Virtual Reality (IVR) into teacher education is a significant innovation that can enhance the learning and practical training of future teachers. IVR enables highly interactive, immersive experiences in simulated educational environments where student teachers confront realistic classroom challenges. The objective was to synthesize how IVR is implemented in the training of future teachers and how effective it is, in order to develop recommendations for practice and identify potential barriers to implementation. Method: A systematic review was carried out following the PRISMA model. A total of 1677 articles published between 2021 and 2025 in the Web of Science, Scopus, and SciELO databases were reviewed, with 13 articles selected for analysis. Results: The reviewed articles highlight IVR as a virtual tool that facilitates the training of future teachers. Among its most common applications are the use of virtual and augmented reality for conflict resolution, classroom management, and teacher adaptation. However, its implementation is limited by access to equipment, scenario development, and integration into university institutions. Conclusions: Converging evidence supports the strengths of IVR as an emerging technology in teacher training, offering facilitating elements for the development of pedagogical competencies through the simulation of practical situations in a safe environment. Thus, this review summarizes recommendations for practice and warnings about implementation barriers, identifying the most promising uses and proposing actionable steps for its phased adoption in initial teacher training.

22 pages, 3053 KB  
Article
Are We Ready for Synchronous Conceptual Modeling in Augmented Reality? A Usability Study on Causal Maps with HoloLens 2
by Anish Shrestha and Philippe J. Giabbanelli
Information 2025, 16(11), 952; https://doi.org/10.3390/info16110952 - 3 Nov 2025
Viewed by 424
Abstract
(1) Background: Participatory modeling requires combining individual views to create a shared conceptual model. While remote collaboration tools have enabled synchronous online modeling, they are limited to desktop settings. Augmented reality (AR) offers a new approach by potentially providing the sense of presence found in physical collaboration, which may better support participants in negotiating meaning and building a shared model. (2) Methods: Building on prior work that developed the technology, we performed a usability study with pairs of modelers to examine their ability to perform key conceptual modeling tasks (e.g., merging or deleting concepts) in AR. Our study pays particular attention to the time spent on these tasks and distinguishes how long it takes to perform the action (as enabled by the technology) from how long the participants discussed the action (e.g., to jointly decide whether a new concept should be created). (3) Results: Users completed every task and rated the usability from 3.68 (creating an edge) to 4.37 (finding a node) on a scale from 1 (very difficult) to 5 (very easy). (4) Conclusions: Low familiarity with AR and the high time per task limit adoption for conceptual modeling.
(This article belongs to the Special Issue Extended Reality and Its Applications)

14 pages, 318 KB  
Systematic Review
XR Technologies in Inclusive Education for Neurodivergent Children: A Systematic Review 2020–2024
by Bárbara Valverde Olivares, Loretto Muñoz Araya, José Luis Valín, Marcela Jarpa Azagra, Rocío Hidalgo Escobar, Isabel Cuevas Quezada and Cristóbal Galleguillos Ketterer
Children 2025, 12(11), 1474; https://doi.org/10.3390/children12111474 - 1 Nov 2025
Viewed by 291
Abstract
Background/Objectives: Extended reality (XR) technologies have been increasingly applied in inclusive education settings to assist neurodivergent children. However, the existing evidence remains fragmented across diverse contexts and disciplines. This systematic review synthesizes current research to identify the educational purposes, implementation characteristics, and reported outcomes associated with the use of XR in inclusive educational environments. Methods: A comprehensive literature search was conducted in major academic databases using predefined keyword combinations related to XR, inclusive education, and neurodivergence. Peer-reviewed articles that applied XR tools in educational settings for neurodivergent children were screened against predefined inclusion and exclusion criteria. Data were extracted regarding study design, participant characteristics, XR modality, educational objectives, and outcome indicators. Results: The reviewed studies report heterogeneous applications of XR technologies, including virtual and augmented reality, to support cognitive, social, and behavioral skill development in neurodivergent learners. Most studies employed small sample sizes and quasi-experimental or exploratory designs. Although several studies reported improvements in engagement, communication skills, and task performance, outcome measures varied substantially and methodological rigor was limited in many cases. Conclusions: Current evidence suggests that XR technologies hold potential as complementary tools in inclusive education for neurodivergent children. Nonetheless, the heterogeneity of study designs and the lack of standardized assessment metrics limit the generalizability of the results. More robust empirical investigations are required to establish evidence-based guidelines for the implementation of XR in inclusive educational contexts.

22 pages, 3158 KB  
Article
A Real-Time Immersive Augmented Reality Interface for Large-Scale USD-Based Digital Twins
by Khang Quang Tran, Ernst L. Leiss, Nikolaos V. Tsekos and Jose Daniel Velazco-Garcia
Virtual Worlds 2025, 4(4), 50; https://doi.org/10.3390/virtualworlds4040050 - 1 Nov 2025
Viewed by 456
Abstract
Digital twins are increasingly utilized across all lifecycle stages of physical entities. Augmented reality (AR) offers real-time immersion into three-dimensional (3D) data, providing an immersive experience with dynamic, high-quality, and multi-dimensional digital twins. A robust and customizable data platform is essential for creating scalable 3D digital twins; Universal Scene Description (USD) provides these necessary qualities. Given the potential for integrating immersive AR and 3D digital twins, we developed a software application to bridge the gap between multi-modal AR immersion and USD-based digital twins. Our application provides real-time, multi-user AR immersion into USD-based digital twins, making it suitable for time-critical tasks and workflows. The AR digital twin software is currently being tested and evaluated in an application we are developing to train astronauts. Our work demonstrates the feasibility of integrating immersive AR with dynamic 3D digital twins. AR-enabled digital twins have the potential to be adopted in various real-time, time-critical, multi-user, and multi-modal workflows.

20 pages, 1226 KB  
Article
The Digital Centaur as a Type of Technologically Augmented Human in the AI Era: Personal and Digital Predictors
by Galina U. Soldatova, Svetlana V. Chigarkova and Svetlana N. Ilyukhina
Behav. Sci. 2025, 15(11), 1487; https://doi.org/10.3390/bs15111487 - 31 Oct 2025
Viewed by 444
Abstract
Industry 4.0 is steadily advancing a reality of deepening integration between humans and technology, a phenomenon aptly described by the metaphor of the “technologically augmented human”. This study identifies the digital and personal factors that predict a preference for the “digital centaur” strategy among adolescents and young adults. This strategy is defined as a model of human–AI collaboration designed to enhance personal capabilities. A sample of 1841 participants aged 14–39 completed measures assessing digital centaur preference and identification, emotional intelligence (EI), mindfulness, digital competence, technology attitudes, and AI usage, as well as AI-induced emotions and fears. The results indicate that 27.3% of respondents currently identify as digital centaurs, with an additional 41.3% aspiring to adopt this identity within the next decade. This aspiration was most prevalent among 18- to 23-year-olds. Hierarchical regression showed that interpersonal and intrapersonal EI and mindfulness are personal predictors of the digital centaur preference, while digital competence, technophilia, technopessimism (inversely), and daily internet use emerged as significant digital predictors. Notably, intrapersonal EI and mindfulness became non-significant when technology attitudes were included. Digital centaurs predominantly used AI functionally and reported positive emotions (curiosity, pleasure, trust, gratitude) but expressed concerns about human misuse of AI. These findings position the digital centaur as an adaptive and preadaptive strategy for the technologically augmented human. This has direct implications for education, highlighting the need to foster balanced human–AI collaboration.
(This article belongs to the Section Social Psychology)

26 pages, 4427 KB  
Review
Digital Technology Integration in Risk Management of Human–Robot Collaboration Within Intelligent Construction—A Systematic Review and Future Research Directions
by Xingyuan Ding, Yinshuang Xu, Min Zheng, Weide Kang and Xiaer Xiahou
Systems 2025, 13(11), 974; https://doi.org/10.3390/systems13110974 - 31 Oct 2025
Viewed by 810
Abstract
With the digital transformation of the construction industry toward intelligent construction, advanced digital technologies—including Artificial Intelligence (AI), Digital Twins (DTs), and the Internet of Things (IoT)—increasingly support Human–Robot Collaboration (HRC), offering productivity gains while introducing new safety risks. This study presents a systematic review of digital technology applications and risk management practices in HRC scenarios within intelligent construction environments. Following the PRISMA protocol, this study retrieved 7640 publications from the Web of Science database. After screening, 70 high-quality studies were selected for in-depth analysis. This review identifies four core digital technologies central to current HRC research: multi-modal acquisition technology, AI learning technology, Digital Twins (DTs), and Augmented Reality (AR). Based on the findings, this study constructed a systematic framework for digital technology in HRC, consisting of data acquisition and perception, data transmission and storage, intelligent analysis and decision support, human–machine interaction and collaboration, and intelligent equipment and automation. The study highlights core challenges across risk management stages, including difficulties in multi-modal fusion (risk identification), the lack of quantitative systems (risk assessment), real-time performance issues (risk response), and weak feedback loops in risk monitoring and continuous improvement. Future research directions are also proposed, including trust in HRC, privacy and ethics, and closed-loop optimization. This research provides theoretical insights and practical recommendations for advancing digital safety systems and supporting the safe digital transformation of the construction industry, with important implications for enabling efficient risk management.

17 pages, 3887 KB  
Article
Compact Design of a 50° Field of View Collimating Lens for Lightguide-Based Augmented Reality Glasses
by Wen-Shing Sun, Yi-Lun Su, Ying-Shun Hsu, Chuen-Lin Tien, Nai-Jen Cheng and Ching-Cherng Sun
Micromachines 2025, 16(11), 1234; https://doi.org/10.3390/mi16111234 - 30 Oct 2025
Abstract
Designing a compact collimating lens system for augmented reality (AR) applications presents significant optical challenges. This paper presents a compact, 50-degree field-of-view collimating lens system explicitly designed for lightguide-based AR glasses. The compact collimating lens is designed for a 0.32-inch microdisplay and consists of four plastic aspherical lenses. The optical design results in a collimating lens with an F-number of 2.17 and an entrance pupil diameter of 4 mm. Optical distortion is less than 0.29%, and the modulation transfer function (MTF) is greater than 0.23 at 250 cycles/mm. The overall lens diameter, including the lens barrel, measures 10.16 mm, while the lens length is 11.48 mm. The lens volume is 0.93 cm3, and its mass is 1.08 g. Compared to existing collimator designs, this approach significantly improves the trade-off between field of view, optical quality, and device miniaturization. The proposed design supports integration with 0.32-inch microdisplays, making it a practical and manufacturable solution for next-generation AR eyewear. The design demonstrates considerable potential for reducing the size and weight of AR glasses while optimizing optical performance. Full article
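As a quick plausibility check, the paraxial collimator relation f = h′/tan(θ) ties the quoted microdisplay size, field of view, and F-number together. The sketch below assumes the 50° field and the 0.32-inch image size are both measured diagonally; the paper may define these quantities differently.

```python
import math

# Parameters quoted in the abstract (diagonal interpretation assumed).
fov_deg = 50.0            # full field of view
display_diag_in = 0.32    # microdisplay diagonal, inches
epd_mm = 4.0              # entrance pupil diameter, mm

display_diag_mm = display_diag_in * 25.4    # 8.128 mm
half_image_mm = display_diag_mm / 2.0       # 4.064 mm

# For a collimating lens, image height h' = f * tan(theta_half),
# so the focal length is f = h' / tan(theta_half).
focal_mm = half_image_mm / math.tan(math.radians(fov_deg / 2.0))
f_number = focal_mm / epd_mm

print(f"focal length ~ {focal_mm:.2f} mm")  # ~8.72 mm
print(f"F-number     ~ {f_number:.2f}")     # ~2.18, close to the quoted 2.17
```

The computed F-number of about 2.18 agrees with the quoted 2.17 to within rounding, which supports the diagonal reading of the specifications.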
(This article belongs to the Special Issue Photonic and Optoelectronic Devices and Systems, Third Edition)
14 pages, 482 KB  
Article
Targeting Cognition and Behavior Post-Stroke: Combined Emotional Music Stimulation and Virtual Attention Training in a Quasi-Randomized Study
by Rosaria De Luca, Federica Impellizzeri, Francesco Corallo, Andrea Calderone, Rosalia Calapai, Alessio Mirabile, Lilla Bonanno, Maria Grazia Maggio, Angelo Quartarone, Irene Ciancarelli and Rocco Salvatore Calabrò
Brain Sci. 2025, 15(11), 1168; https://doi.org/10.3390/brainsci15111168 - 29 Oct 2025
Abstract
Background: Emotionally salient music may enhance attention-focused rehabilitation, yet concurrent music plus virtual-reality programs in chronic stroke are largely untested. We assessed whether personalized emotional music stimulation (EMS) layered onto a standardized virtual reality rehabilitation system (VRRS) augments cognitive, affective, physiological, and functional outcomes. Methods: In a quasi-randomized outpatient trial, 20 adults ≥ 6 months post-ischemic stroke were allocated by order of recruitment to VRRS alone (control, n = 10) or VRRS+EMS (experimental, n = 10). Both groups performed 45 min of active VRRS cognitive training (3×/week, 8 weeks), while the EMS group received approximately 60 min sessions including setup and feedback phases. Primary outcomes were cognition and global function; secondary outcomes were intrinsic motivation, depression, anxiety, and heart rate. Non-parametric tests with effect sizes and Δ-scores were used. Results: The experimental group improved across all domains: cognition (median +4.5 points), motivation (median +54 points), depression (median −3.5 points), anxiety (median −4.0 points), heart rate (median −6.35 beats per minute), and disability (median one-grade improvement), each with large effects. The control group showed smaller gains in cognition and motivation and a modest heart-rate reduction, without significant changes in mood or disability. At post-treatment, the music group outperformed controls on cognition, motivation, and disability. Change-score analyses favored the music group for every endpoint. Larger heart-rate reductions correlated with greater improvements in depression (ρ = 0.73, p < 0.001) and anxiety (ρ = 0.58, p = 0.007). Conclusions: Adding personalized emotional music to virtual-reality attention training produced coherent, clinically relevant gains in cognition, mood, motivation, autonomic regulation, and independence compared with virtual reality alone. Full article
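The change-score (Δ) analysis and Spearman correlations described above can be sketched as follows. The data here are synthetic placeholders (not the trial's measurements), and the scipy calls shown are one reasonable way to run the non-parametric tests the abstract names.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post cognition scores for two groups of n = 10,
# mimicking a larger median gain in the experimental (music) group.
pre_exp = rng.normal(20, 3, 10)
post_exp = pre_exp + rng.normal(4.5, 1.5, 10)
pre_ctl = rng.normal(20, 3, 10)
post_ctl = pre_ctl + rng.normal(1.5, 1.5, 10)

# Delta-scores: per-participant change from baseline.
delta_exp = post_exp - pre_exp
delta_ctl = post_ctl - pre_ctl

# Non-parametric between-group comparison of the delta-scores.
u_stat, p_val = stats.mannwhitneyu(delta_exp, delta_ctl,
                                   alternative="two-sided")

# Spearman rank correlation, as used for heart-rate reduction
# vs. mood improvement (illustrative paired values).
hr_drop = rng.normal(6, 2, 10)
mood_gain = 0.5 * hr_drop + rng.normal(0, 1, 10)
rho, p_rho = stats.spearmanr(hr_drop, mood_gain)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```

With only n = 10 per group, rank-based tests like these are a sensible choice, since they make no normality assumption about the score distributions.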
