Review

Evolution and Knowledge Structure of Wearable Technologies for Vulnerable Road User Safety: A CiteSpace-Based Bibliometric Analysis (2000–2025)

1 School of Design Arts, Xiamen University of Technology, Xiamen 361024, China
2 Department of Human-Centered AI, Sangmyung University, Seoul 03016, Republic of Korea
3 Institute for Advanced Intelligence Study, Daejeon 34189, Republic of Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(12), 6945; https://doi.org/10.3390/app15126945
Submission received: 22 May 2025 / Revised: 15 June 2025 / Accepted: 18 June 2025 / Published: 19 June 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

This study presents a systematic bibliometric review of wearable technologies aimed at vulnerable road user (VRU) safety, covering publications from 2000 to 2025. Guided by PRISMA procedures and a PICo-based search strategy, 58 records were extracted and analyzed in CiteSpace, yielding visualizations of collaboration networks, publication trajectories, and intellectual structures. The results indicate a clear evolution from single-purpose, stand-alone devices to integrated ecosystem solutions that address the needs of diverse VRU groups. Six dominant knowledge clusters emerged—street-crossing assistance, obstacle avoidance, human–computer interaction, cyclist safety, blind navigation, and smart glasses. Comparative analysis across pedestrians, cyclists and motorcyclists, and persons with disabilities shows three parallel transitions: single- to multisensory interfaces, reactive to predictive systems, and isolated devices to V2X-enabled ecosystems. Contemporary research emphasizes context-adaptive interfaces, seamless V2X integration, and user-centered design, and future work should focus on lightweight communication protocols, adaptive sensory algorithms, and personalized safety profiles. The review provides a consolidated knowledge map to inform researchers, practitioners, and policy-makers striving for inclusive and proactive road safety solutions.

1. Introduction

In the field of road transportation, VRUs are defined as road users who are “outside the car,” “most at risk in traffic,” and “unprotected by an outside shield,” distinguished from other road users by their lack of protective vehicle structures [1]. In contrast, vehicle occupants benefit from structural safety features such as airbags, seat belts, and reinforced frames that significantly reduce injury severity in collisions. VRUs can be systematically categorized into (1) mode-based categories, including pedestrians and users of two-wheeled vehicles, such as bicycles and motorcycles [2]; and (2) demographic-based subcategories that exhibit heightened vulnerability, particularly children, elderly persons, and people with disabilities [1]. While all road users face some traffic risk, the “vulnerable” designation specifically identifies those with disproportionately higher injury and fatality rates due to their physical exposure in traffic environments. Although VRUs account for a significant portion of global road traffic fatalities, with the World Health Organization (WHO) reporting over 1.19 million annual road traffic deaths worldwide [2], these groups often lack adequate protection in increasingly complex traffic environments. Traditional safety approaches have primarily focused on vehicle-centric solutions [3] or static infrastructure improvements [4]. Although these methods are valuable, they do not fully address the dynamic mobility of VRUs. Moreover, they overlook the unique perceptual challenges VRUs face in complex traffic environments.
Recent advances in wearable computing [5], sensing technologies [6], and artificial intelligence [7] have opened new possibilities for enhancing VRU safety. These technological innovations enable direct enhancement of users’ perception of surrounding dangers through personalized, portable solutions. Wearable devices such as smart helmets, glasses, vests, and augmented reality displays can provide real-time environmental information and hazard warnings to vulnerable users navigating complex traffic scenarios, exemplified by computer vision-based smart glasses [7] and wearable alert systems integrating multiple sensors [8]. Compared to traditional passive protection measures, these technologies represent a paradigm shift in VRU safety, not only actively predicting potential hazards but also providing customized protection while maintaining users’ mobility freedom. Despite significant research progress, there remains a lack of systematic understanding of the field’s knowledge structure, evolutionary patterns, and future directions.
Given this background, our study differs from previous reviews that focused solely on specific VRU groups’ safety [9,10] or particular applications of wearable technologies [11,12] by employing a systematic bibliometric approach to construct a knowledge map of this interdisciplinary field and reveal its evolutionary patterns [13]. Considering the significant social value of these technologies in reducing road fatalities and enhancing mobility experiences for vulnerable groups, we followed the PRISMA 2020 guidelines [14] for systematic reviews and utilized CiteSpace software (version 6.4.R1 Advanced, Drexel University, Philadelphia, PA, USA) as our primary visualization tool [15] to examine publication trends, international collaboration networks, research hotspots, and knowledge evolution trajectories from 2000 to 2025. This multidimensional analysis not only presents how VRU safety wearable technologies have evolved across different user groups, technological approaches, and application scenarios but also identifies key trends and opportunities for future research.
To guide our systematic investigation, we formulated three specific research questions:
RQ1: What are the overall publication trends in wearable technology research focused on vulnerable road user safety from 2000 to 2025?
RQ2: How have research topics and technological focuses in the field of wearable technologies for enhancing vulnerable road user safety evolved in distinct phases over time?
RQ3: What are the future research trends in the field of wearable technologies for enhancing vulnerable road user safety?
The remainder of this paper is structured as follows: Section 2 introduces the systematic literature retrieval strategy, inclusion criteria, and application of the CiteSpace bibliometric analysis tool. Section 3 presents the publication trends of wearable technology research for VRU safety (addressing RQ1), international collaboration networks, and knowledge domain cluster analysis. Section 4 reviews the evolutionary characteristics and application trends of wearable safety technologies for different vulnerable road user groups (pedestrians, two-wheeler users, and visually impaired persons). Section 5 explores the phase-wise evolution of research themes and technological focus (addressing RQ2) as well as future directions and opportunities (addressing RQ3). Finally, Section 6 summarizes the paradigm shift from passive sensing to active prediction systems in this field and the key research implications.

2. Materials and Methods

2.1. Data Sources

Our research follows the PRISMA 2020 guidelines for systematic review reporting [14]. The PRISMA guidelines aim to enhance the quality, clarity, and transparency of systematic reviews [16] and have become a recognized standard in this field. To ensure methodological rigor, we employed an online tool based on the PRISMA 2020 R package to structure our research workflow. This tool generates flow diagrams that comply with the latest PRISMA statement requirements, thereby enhancing the transparency and reproducibility of the research process [17]. As shown in Figure 1, this systematic review follows three core stages: Identification, Screening, and Inclusion. Each stage was executed according to predefined criteria, ensuring the reliability and validity of the review results.
The Web of Science (WoS) Core Collection served as the primary data source for this CiteSpace study, with data collected from specific subsets of Clarivate Analytics’ database, including SCI-EXPANDED, SSCI, A&HCI, CPCI-S, and ESCI. This source provides one of the oldest and most comprehensive citation indices, spanning more than 9000 peer-reviewed journals across the social sciences, engineering, biomedical sciences, and arts and humanities [13]. WoS is typically considered the optimal data source for bibliometric research [18], ensuring scientific validity and representativeness of the research findings.

2.2. Search Strategy

A scientifically rigorous literature search strategy is the cornerstone of any systematic review [19], typically requiring a structured framework to ensure comprehensive and accurate retrieval [20]. In our systematic review, we employed the PICo framework (Population, phenomenon of Interest, and Context) to guide our search strategy development. This framework is widely recognized in qualitative evidence synthesis and interdisciplinary systematic reviews, effectively breaking down complex research questions into searchable core elements [21,22]. In our application, the Population (P) component focuses on VRUs, including pedestrians, cyclists, motorcyclists, scooter riders, and persons with disabilities (e.g., visually or hearing impaired individuals and wheelchair users). The Phenomenon of Interest (I) centers on wearable technology applications, encompassing general terms like “wearable,” specific smart safety gear (helmets, vests, glasses, etc.), and various feedback modalities (haptic/tactile, visual, auditory, and multimodal combinations). The Context (Co) pertains to road traffic safety improvement scenarios, incorporating keywords related to traffic safety, collision avoidance and warnings, accident prevention, hazard detection, and navigation in road environments.
Following the systematic review methodology, a comprehensive literature search was conducted in the WoS database using a meticulously constructed search strategy (see Table 1). The search query combined VRU-related terms with wearable/feedback-related terms using Boolean operators that maximize recall while maintaining focus on our research question. We utilized the NEAR/3 proximity operator to identify records where connected search terms appear within three words of each other, an approach that proved necessary after our preliminary search revealed that using only “wearable*” would miss relevant studies that employed wearable technology without explicitly using this terminology. Our expanded query incorporated specific wearable application technologies and their feedback modalities (haptic, visual, auditory, and multimodal), in line with systematic-review best practices to improve recall and reduce potential bias in the retrieved literature [23].
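To make the structure of such a query concrete, the sketch below assembles a PICo-style WoS query string in Python. The term lists and the NEAR/3 pairings are abbreviated illustrations for exposition only, not the full query documented in Table 1.

```python
# Illustrative sketch: assembling a PICo-style Web of Science query string.
# Term lists here are abbreviated examples, not the actual query in Table 1.

def wos_query(population, interest_pairs, context, proximity="NEAR/3"):
    """Join each PICo component with OR, then AND the components together.
    `proximity` mimics the WoS NEAR/3 operator, requiring paired terms
    to appear within three words of each other."""
    quote = lambda t: f'"{t}"' if " " in t else t
    p = " OR ".join(quote(t) for t in population)
    i = " OR ".join(f"({a} {proximity} {b})" for a, b in interest_pairs)
    c = " OR ".join(quote(t) for t in context)
    return f"TS=(({p}) AND ({i} OR wearable*) AND ({c}))"

query = wos_query(
    population=["pedestrian*", "cyclist*", "visually impaired"],
    interest_pairs=[("smart", "helmet*"), ("haptic", "vest*")],
    context=["traffic safety", "collision avoidance"],
)
print(query)
```

The generated string uses the Topic (TS) field tag, which is the field this review searched; swapping the term lists for those in Table 1 would reproduce a full query of the same shape.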

2.3. Inclusion Criteria

Our research strictly adheres to the PRISMA guidelines (Figure 1). For literature retrieval, we employed the Topic (TS) field rather than limiting the search to Title (TI); the TS field searches titles, abstracts, Keywords Plus, and author keywords, ensuring comprehensive capture of terminology related to VRUs, wearable technology, and traffic safety. We avoided using the ALL field to maintain compatibility with the NEAR/3 proximity operator, thereby optimizing retrieval precision.
The literature search was completed on 4 May 2025, covering publications from 2000 to 2025. After excluding non-English publications, review articles, and conference abstracts, we retrieved 462 records. During the title and abstract screening phase, we excluded 386 publications irrelevant to our research focus, allowing 76 publications to proceed to the full-text assessment stage. Based on predefined inclusion and exclusion criteria, we further excluded 18 non-compliant publications during the full-text evaluation. Ultimately, 58 publications that fully met the research requirements were included in our final analysis.
Papers were included in this review if they met the following criteria:
  • Coverage of all three core elements of the PICo framework: focusing on VRUs, involving wearable technology applications, and targeting traffic safety;
  • Written in English.
Papers were excluded if they had any of the following characteristics:
  • Absence of traffic hazard scenarios (e.g., wearable technology only used for indoor applications);
  • Literature review article.
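The screening arithmetic reported in Section 2.3 can be reproduced as a quick sanity check; the numbers below are taken directly from the PRISMA flow described above.

```python
# Screening counts from Section 2.3 (PRISMA 2020 flow, Figure 1).
retrieved = 462                 # records after database-level filters
excluded_title_abstract = 386   # removed during title/abstract screening
excluded_full_text = 18         # removed during full-text assessment

full_text_assessed = retrieved - excluded_title_abstract
included = full_text_assessed - excluded_full_text

print(full_text_assessed, included)  # 76 58
```

The totals are internally consistent: 462 − 386 = 76 full-text assessments, and 76 − 18 = 58 included publications.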

2.4. Analysis Tool

This study employs CiteSpace version 6.4.R1 Advanced for literature visualization analysis. As a scientific bibliometric tool, CiteSpace effectively identifies and visualizes knowledge structures, development patterns, and evolutionary trends in specific disciplinary domains based on co-citation analysis theory and the Pathfinder network scaling algorithm [15,24,25]. This tool has demonstrated value in wearable technology research, including applications in construction safety [11,26] and smart wearable assistive devices for the visually impaired [12]. The comprehensive analytical workflow implemented in this research is illustrated in Figure 2, which systematically outlines the six-stage process from the initial literature search through bibliometric analysis. This methodological framework integrates multiple specialized tools, including Web of Science Core Collection, advanced text processing software, systematic review frameworks, spreadsheet analysis tools, and bibliometric visualization platforms.
In the knowledge maps generated by CiteSpace, circular nodes represent research entities (such as countries, journals, or keywords), with node size reflecting influence or frequency of occurrence; connections between nodes indicate relationships between entities, with different colors corresponding to publication years. This design intuitively presents the complex associations within academic networks, clearly displaying research hotspot distribution, collaboration network strength, and disciplinary temporal evolution. Through node color gradients revealing temporal development trajectories, boundary characteristics distinguishing mainstream from emerging directions, and connection thickness indicating relationship strength, a comprehensive scientific knowledge landscape of the research field is constructed.

3. Results

This study employs scientific knowledge mapping methods to conduct a multidimensional visualization analysis of literature networks, comprehensively revealing the evolution of knowledge structures in the field of wearable technology for VRU traffic safety. We constructed three different types of knowledge networks, using “Country,” “Cited Journal,” and “Keyword” as nodes, with a time span precisely covering 2000–2025. In our analytical design, we adopted the following strategies: First, we set time slices at one-year precision to ensure capturing subtle trajectories of field development; second, we applied the Pathfinder algorithm combined with the “Pruning the merged network” mode to optimize and simplify the networks, effectively eliminating redundant connections while preserving the most representative knowledge transmission paths; third, we quantitatively evaluated network connection strength through the “Cosine” similarity algorithm and adopted the “g-index” (K = 25) as the node selection criterion to ensure the analysis focused on core elements with significant academic influence. In the subsequent analyses, the “Year” column represents the first appearance year of each entity (country, journal, or keyword) in this research domain, while the “Centrality” value measures betweenness centrality in the co-occurrence network, indicating significant bridging influence between different research clusters. In the visualization presentation, nodes are identified using chronological color gradients, intuitively displaying the evolution of research hotspots and knowledge flow paths and systematically presenting the academic development and global collaboration patterns in this field over 25 years.
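For readers unfamiliar with the “Cosine” link-strength option mentioned above, it is commonly described as Salton’s cosine coefficient, which normalizes a co-occurrence count by the geometric mean of the two terms’ individual frequencies. The sketch below illustrates the formula on invented toy counts; it is not a reimplementation of CiteSpace’s internals.

```python
import math

def salton_cosine(c_ij, c_i, c_j):
    """Cosine coefficient: the co-occurrence count of two terms divided by
    the geometric mean of their individual occurrence counts."""
    return c_ij / math.sqrt(c_i * c_j)

# Toy counts (assumptions for illustration only): term A appears in 16
# records, term B in 12 records, and the two co-occur in 8 records.
w = salton_cosine(8, 16, 12)
print(round(w, 3))  # 0.577
```

Values close to 1 indicate that two keywords almost always appear together, while values near 0 indicate largely independent usage, which is why the coefficient serves as a natural edge weight in a co-occurrence network.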

3.1. Publication Volume Trends (2000–2025)

Based on statistics from the WoS database, this study systematically presents the academic publication dynamics of wearable technology in the field of VRU traffic safety. In response to RQ1, our research reveals that academic publications on wearable technology in VRU traffic safety have experienced three distinct developmental phases (Figure 3): the initial exploratory phase (2000–2012), characterized by sporadic and intermittent research output with an average of less than one publication per year; the emergent phase (2013–2017), showing small-scale but steady growth with an annual output of one to three papers; and the rapid growth phase (2018–2025), with significantly increased research density and annual output stabilizing at five to seven papers.
The significant increase in publications after 2018 marks the field’s entry into an accelerated development stage, attributable to two key factors: first, the increasing maturity and market penetration of smart wearable device technologies; second, a series of policy initiatives and practical guidelines for VRU safety issued by the WHO [27], which effectively raised global awareness and research investment in VRU traffic safety issues. Although research output experienced minor fluctuations after reaching a peak in 2021 (7 publications), it has generally maintained a high level, indicating that this research direction has formed a stable and active academic ecosystem. Notably, the 2025 data had already reached three publications by May, suggesting that the full-year publication volume is likely to continue or even exceed the output scale of previous years. This sustained growth trend in research activities indicates that as wearable technologies continue to deeply integrate with cutting-edge technologies, such as artificial intelligence and the Internet of Things, the field of VRU traffic safety—an area critical to life protection and social welfare—will continue to attract academic attention and innovative investment.

3.2. Geographic Distribution of Research Output

Table 2 presents the global distribution of contributions in the “VRU safety wearable technology” research field, systematically recording each country’s publication volume, centrality index, and year of first participation. The United States ranks first with 16 publications and a high centrality index of 0.32, establishing its leading position in the field since 2000. Despite entering the research field as late as 2016, China (People’s Republic of China, including Taiwan region data, labeled as “PEOPLES R CHINA” in Figure 4) has rapidly risen to become the second-largest contributing country with 11 publications, demonstrating strong research momentum and ability to integrate into the global network. Germany, as the main research force in the European region, has accumulated eight publications since its participation in 2013. South Korea and Austria, although joining the field relatively late in 2017 and 2020, respectively, have already contributed five and four publications, reflecting these countries’ growing research investment in VRU safety wearable technology. India’s contribution is particularly significant; despite having only four publications, its centrality index of 0.08 indicates its research occupies an important strategic position in the knowledge network. The participation of Japan, France, Australia, and Malaysia has enriched the international research landscape in this field. Notably, although the United Kingdom is not listed in the top 10, its three constituent regions (England, Scotland, and Northern Ireland) have collectively contributed three research publications, beginning their participation in this field between 2019 and 2021.
Figure 4 presents a global collaboration network map of VRU safety wearable technology research, precisely capturing the dynamic patterns of transnational academic exchange. This visualization intuitively displays the structural relationships and collaboration intensity of the international research community through multidimensional indicators such as node size, color intensity, and connection thickness. The United States is positioned at the network’s core, with the largest node volume and a vibrant gradient coloration, highlighting its central position and radiating influence as a leader in this field. China, as a significant research force, features a substantial node size with prominent red intensity, while its notable connection with South Korea highlights the substantial collaborative relationship established between the two countries in this field, forming an East Asian regional research alliance. Multiple Eurasian countries form secondary research clusters around the core, with Germany, Austria, India, France, and Japan constituting key pillars of the global knowledge network. UK research appears as three independent geographical units in the network, each developing unique international cooperation pathways. The network periphery consists of the UAE, Australia, and multiple European and South American countries, forming a complete puzzle of the global research ecosystem. The node color spectrum from purple to red maps the chronological sequence of countries joining this field, while the distribution density and thickness of connections precisely quantify the frequency and depth of academic exchanges. The overall network forms a global academic community extending from North America as its hub to the Eurasian continent and Oceania, demonstrating the internationalized development trajectory and diverse collaborative ecology of VRU safety wearable technology research.

3.3. Journal and Conference Distribution

Table 3 presents the 15 most influential journals and conferences in the field of wearable technology research for VRU safety, providing a comprehensive overview of academic communication channels through citation counts, centrality metrics, and first publication years. Lecture Notes in Computer Science leads with 17 citations, demonstrating the foundational role of computer science in this interdisciplinary field since its first related publication in 2014. Following closely is Accident Analysis and Prevention, with 13 citations and a centrality of 0.27, indicating its emergence since 2016 as an important bridge connecting different research subdomains. Notably, centrality metrics reveal that some journals (despite not being outstanding in citation frequency) occupy key positions in the academic network. Human Factors (centrality 0.33), Foundation of Orientation and Mobility (centrality 0.36), and Ergonomics (centrality 0.32), published in 2007, 2000, and 2007, respectively, show high centrality values, suggesting these early publications provided theoretical foundations and methodological support for the entire research field, continuously influencing the direction of subsequent research.
The diversity of publication types reflects the interdisciplinary nature of the field, ranging from computer science (Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE Access) to traffic safety (Transportation Research Part F: Traffic Psychology and Behaviour) and human–computer interaction (International Journal of Human-Computer Studies), demonstrating the comprehensive approach needed to integrate knowledge from multiple domains for addressing VRU safety issues. In terms of temporal distribution, these publications span from 2000 to 2023, with recent additions such as IEEE Access in 2023, indicating the field’s continued attraction of new research interest and resource investment. Particularly noteworthy is the inclusion of multiple IEEE series conferences and journals, highlighting the significant contribution of engineering and technological perspectives in advancing wearable safety device development, while the preprint platform arXiv ranking eighth reflects researchers’ increasing emphasis on open science and rapid knowledge sharing. This systematic analysis of major publication channels not only clearly presents the academic communication network structure of VRU safety wearable technology research but also deeply reveals the complete evolutionary trajectory of this field from early theoretical exploration to current multi-faceted practical applications.

3.4. Keyword Co-Occurrence and Clustering

3.4.1. Keyword Co-Occurrence Network and Evolutionary Pattern

For the keyword selection strategy, we employed a dual-criteria approach considering both frequency and centrality metrics. Initially, we arranged keywords in descending frequency order and selected the top 26 high-frequency terms. While conducting our qualitative assessment, we noted that “connected vehicle communication” demonstrated a centrality value of 0.00 despite its occurrence frequency of 5, suggesting limited direct connections within the established knowledge network. However, we retained this keyword in our analysis, as it represents an emerging technological approach with a potential future impact on VRU safety despite its current peripheral position in the knowledge structure. The selected keywords (Table 4) collectively represent both established research hotspots and evolving conceptual frameworks within the field.
The co-occurrence keyword network reveals the knowledge structure and research hotspots in wearable technology for VRU safety. As shown in Figure 5 and Table 4, the network presents clear knowledge distribution characteristics and thematic associations. Nodes such as “visually impaired,” “assistive technology,” and “wearable technology” occupy important positions in the network, with their larger node volumes reflecting higher occurrence frequencies, indicating these concepts have received widespread attention in this research field. Several interconnected research themes can be observed in the map: visual-impairment-assistance-related topics, including “visually impaired,” “assistive technology,” and “electronic travel aids”; safety-protection-related topics such as “traffic safety,” “pedestrian safety,” and “cyclist safety”; technology-implementation-related topics, including “wearable technology,” “sensor,” and various reality technologies (“virtual reality,” “augmented reality”); as well as functional-mechanism-related topics like “warning alerts,” “collision avoidance,” and “obstacle avoidance.” The connecting lines in the map demonstrate the close associations between concepts, with high centrality nodes such as “performance” (centrality = 0.30) and “collision avoidance” (centrality = 0.25) playing key roles in connecting different research directions. The network structure reflects the evolution of research in this field from initial assistive technologies for visually impaired people to a comprehensive safety technology system for multiple categories of VRUs.
Figure 6 displays the 14 keywords with the strongest citation bursts in the field of wearable technology for VRU safety. This analysis reveals a clear three-stage evolution of research hotspots. The early stage (2002–2015) was dominated by foundational concepts: “wearable technology” and “electronic travel aids” burst simultaneously in 2014, primarily targeting basic navigation assistance for the visually impaired, while “navigation system” established itself as a continuous research direction throughout the entire period. The middle stage (2016–2020) exhibited transitional characteristics: the burst cycle of “wearable technology” ended and “obstacle detection” rapidly gained attention; research content diversified into specific wearable forms such as “smart helmets” and “head-mounted display”; and the research population expanded to various vulnerable users, reflected in the sustained interest in keywords like “age differences,” “blind navigation,” and “pedestrian safety.” The recent stage (2021–2025) demonstrates a system integration trend that prioritizes “user experience,” with “augmented reality” serving as a platform for technology integration; “navigation system,” despite being an early concept from 2002, experienced a significant resurgence with a citation burst beginning in 2023, forming a complementary cluster with the emerging “obstacle avoidance” and “attention guidance” technologies and marking a significant evolution from isolated technological exploration to comprehensive safety solutions.

3.4.2. Knowledge Domain Clustering and Temporal Development

The keyword cluster analysis in this study reveals a Q value of 0.4938, which exceeds the assessment criterion of 0.3, indicating a well-structured clustering; meanwhile, the S value reaches 0.8611, significantly surpassing the threshold of 0.5, confirming the high confidence level of the clustering results. We employed the Log-Likelihood Ratio (LLR) method for cluster labeling [25], which evaluates statistical significance and conducts hypothesis testing based on rigorous threshold verification, ensuring statistical reliability and optimal confidence levels for the extracted cluster labels [26]. Notably, the silhouette scores of the document co-citation network all exceed 0.7 (Table 5), further validating the reliability of the clustering quality. As shown in Figure 7, the cluster analysis ultimately generated six cluster labels, numbered #0 through #5, with smaller-numbered clusters containing more keywords, reflecting the degree of aggregation of related research topics. In terms of label selection, considering that “street crossing” in cluster #0 better represents the core content of this research area than the initially auto-generated “mobility performance” (both having identical statistical indicators), we renamed cluster #0 as “street crossing” to enhance the representativeness and explanatory power of the label. Similarly, for cluster #2, we selected “virtual reality” over “human computer interaction” to maintain hierarchical consistency across cluster labels, as “virtual reality” provides a more specific and contextually relevant representation of the technological applications within this knowledge domain.
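The Q value cited above is Newman’s modularity, which compares the density of within-cluster links against what a random network with the same node degrees would produce. The toy example below applies the standard formula to a hypothetical two-cluster keyword graph; the graph and its partition are illustrative assumptions, not data from this review, and CiteSpace computes Q over its own, much larger network.

```python
def modularity(edges, communities):
    """Newman's modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected, unweighted graph with m edges."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    community_of = {n: ci for ci, nodes in enumerate(communities) for n in nodes}
    adjacent = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    q = 0.0
    for i in degree:
        for j in degree:
            if community_of[i] == community_of[j]:
                a_ij = 1.0 if (i, j) in adjacent else 0.0
                q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Hypothetical graph: two keyword "clusters" (triangles) joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
print(round(q, 4))  # 0.3571, above the 0.3 well-structured criterion
```

Even this tiny two-triangle graph clears the 0.3 threshold; the review’s Q of 0.4938 indicates substantially sharper separation between its six keyword clusters.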
Cluster analysis reveals six core knowledge domains in wearable technology for VRU safety (Table 5 and Figure 7). The red “#0 street crossing” cluster represents closely related concepts around road crossing safety, with its prominent position highlighting this topic’s fundamental role in VRU safety research. The adjacent yellow “#1 obstacle avoidance” cluster integrates various sensory substitution technologies for obstacle detection and avoidance for visually impaired individuals and cyclists, demonstrating assistive technology’s critical value in enhancing safety autonomy for specific groups. The green “#2 virtual reality” cluster focuses on virtual reality applications and immersive technologies for enhancing safety perception among pedestrians, child cyclists, and visually impaired pedestrians. Notably, the light blue “#3 cyclist safety” cluster shows the integration of augmented reality with connected vehicle technologies, indicating cycling safety research is rapidly evolving toward intelligent systems. The dark blue “#4 blind navigation” cluster focuses on indoor and outdoor barrier-free navigation systems providing precise mobility assistance, highlighting technology’s potential to promote independent and safe mobility for visually impaired individuals. The pink “#5 smart glasses” cluster combines smart glasses with IoT and AI technologies, representing cutting-edge innovation in wearable technology for vulnerable pedestrian safety.
The timeline diagram reveals the evolutionary trajectory of wearable technology research in the VRU safety field (Figure 8). In the #0 street crossing cluster, the early combination of high-frequency core nodes “wearable technology” and “visually impaired” (2002–2005) established the foundation for road safety research for the visually impaired. With technological development, the emergence of “servo motor” and “warning alerts” in 2015 reflected a shift towards mechanical assistance and active warning systems, while “severity” and “vehicle impact” in 2020 indicated that the focus had expanded to accident severity analysis, forming a systematic evolution from basic assistance to precise prevention. The #1 obstacle avoidance cluster similarly began with basic research on “navigation system”, integrated “haptic feedback” and “collision avoidance” technologies in 2010, developed into “electronic travel aids” and “smart glasses” applications in 2015, and focused on “additive manufacturing” and “smart helmets” in 2020, demonstrating a transformation from single-function to diversified intelligent solutions. Notably, the multiple dense connections between clusters #0 and #1 reveal the close collaborative development relationship between road safety for the visually impaired and obstacle avoidance research.
The cross-cluster connections in Figure 8 reveal the disciplinary integration among key research hotspots. In the #2 virtual reality cluster, medium-sized nodes linking “virtual reality” with “pedestrians” and “cyclists” pioneered a new paradigm of immersive safety training. The #3 cyclist safety cluster, centered on the prominent “cyclist safety” node, evolved from identifying basic “pedestrian safety” needs to applying “head-mounted display” and “augmented reality” technologies, with the recent “attention guidance” and “multimodal interaction” nodes marking a key transition from passive protection to active perception enhancement. In the #4 blind navigation cluster, the high-frequency nodes “blind navigation” and “object detection” are evolving toward “artificial intelligence” and “active system”, further establishing intelligence as the dominant development direction for VRU safety technology. Overall, the timeline clearly shows the research focus evolving from early basic assistive tools to perception feedback technologies, then to the integration of virtual and augmented reality, and finally toward intelligent personalized systems, reflecting the co-deepening development of VRU safety technology and practical needs, although the timeline labeling in the figure may require more precise calibration against the actual literature distribution.

4. Intelligent Wearable Solutions for VRU Safety: Trends, Challenges, and Opportunities

In this study, we analyzed the full texts of 58 relevant articles, categorizing VRUs into three main groups: pedestrians, two-wheeler users, and visually impaired persons. These groups face significant safety risks in complex and dynamic road environments because they lack physical protection and are less visible to drivers [28,29]. This section systematically analyzes the specialized wearable technology solutions developed for each group based on its distinct characteristics and needs, summarizing how these technologies enhance road safety through improved environmental perception, increased visibility, and better awareness of traffic conditions. The analysis provides theoretical foundations and design references for developing more precise and effective VRU protection systems in future research.

4.1. Wearable Safety Solutions for Pedestrians

Eleven research papers have focused on utilizing wearable technology to enhance pedestrian traffic safety (Table 6), with three targeting elderly pedestrians [30,31,32] and two addressing mine workers [33,34]. Existing wearable devices in these studies can be categorized into haptic feedback devices, such as vibrotactile wristbands [30,31]; auditory feedback devices, including bone conduction headphones [31] and wearable headsets [35]; visual feedback devices, such as AR systems [31,36,37,38,39] and VR headsets [32]; smart protective equipment, including smart glasses [33] and helmets [34]; and wearables with integrated radar sensors [40].
In these studies, Cœugnet et al. conducted the earliest research targeting elderly pedestrians, developing a vibrotactile wristband in 2017 that significantly improved street-crossing safety for older adults [30], particularly reducing collision risks for older women in far lanes and with fast-approaching vehicles. This tactile system effectively addressed cognitive deficits in elderly people’s perception of vehicle gaps. Building on this initial experience, the research of Dommes et al. further compared vibration wristbands, bone conduction headphones, and AR visual feedback for elderly navigation [31], finding all sensory assistance methods superior to traditional paper maps. Among these, combined audiovisual feedback devices proved especially effective for the oldest elderly participants. Stafford et al. used head-mounted displays to conduct perception training for elderly individuals in virtual street-crossing tasks [32]. The research demonstrated that different auditory cues could effectively guide older adults to reallocate attention to specific or non-specific visual information during street-crossing decisions. This finding provided new directions for perceptual training in developing road safety interventions for the elderly.
Beyond research focused on older adults, wearable devices for pedestrian safety in specialized work environments have also received attention. Baek et al. developed a smart glasses system that receives proximity warnings from heavy equipment or vehicles via Bluetooth beacons [33], delivering visual feedback to workers. The system effectively detects equipment at distances of at least 10 m regardless of pedestrian orientation, achieving a 100% alarm success rate across all 40 tests [33] while maintaining workers’ efficiency. Subsequently, Kim et al. from the same research team developed a smart helmet system [34], which also uses Bluetooth beacons to receive vehicle approach information but provides visual warnings through LED light strips. Compared to the smart glasses, the smart helmet not only reduced workers’ cognitive burden and stress while freeing their hands to maintain work efficiency but also yielded lower overall workload ratings, demonstrating a superior user experience.
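Beacon-based proximity warnings of this kind hinge on estimating distance from received signal strength. The sketch below assumes a standard log-distance path-loss model with an illustrative 1 m reference power and path-loss exponent; the calibration used by the cited systems is not reported here, so these parameters are assumptions.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Estimate distance (m) from RSSI via the log-distance path-loss model.
    tx_power_dbm: assumed RSSI measured at 1 m; path_loss_n: environment exponent
    (~2 in open space, higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

def proximity_alert(rssi_dbm, warn_distance_m=10.0):
    """Trigger a warning when the estimated beacon distance is within range."""
    return rssi_to_distance(rssi_dbm) <= warn_distance_m
```

In practice, RSSI is noisy, so deployed systems typically smooth readings over a window before alerting; the 10 m threshold here simply mirrors the detection distance reported above.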
Meanwhile, research into wearable devices and augmented reality technology for pedestrian safety in ordinary road environments has also deepened. Tong et al. developed an AR warning system that provides collision risk information to pedestrians through wearable glasses [36]. The system displays vehicle paths, conflict points, and collision times in real time, guiding attention through elements such as sidebars and arrows, particularly suitable for pedestrians with obstructed views or low situational awareness, providing timely and intuitive warning information. The experiments of Wu et al. revealed the impact of matching distraction with warning modality [37], finding that when distraction and warnings use different sensory channels (visual and auditory), pedestrians exhibit more cautious street-crossing behavior, including reduced walking speed, expanded visual scanning range, and faster responses to vehicles. Furthermore, Tran et al. refined visual prompt research through VR simulation [38], comparing dynamic zebra crossings, green vehicle overlays, and their combinations, revealing that pedestrians feel safer with multiple visual prompts combined, though green vehicle overlays tend to be overlooked in multi-vehicle environments. Building on these theoretical research insights, Clérigo et al. advanced research to practical application stages [39], providing pedestrians with two types of visual feedback through AR headsets: arrow-indicated collision warnings and virtual traffic lights. The experimental results validated that these AR assistance methods significantly improved pedestrians’ sense of safety and reduced cognitive load, further confirming the value of AR technology in pedestrian crossing safety.
Beyond visual feedback, auditory feedback and radar sensing also play important roles in pedestrian safety. The study by Ito et al. of an AirPods Pro-based auditory notification system showed high acceptance of voice notifications during outdoor walking [35], though such systems must intelligently identify appropriate notification timing. This echoes the emphasis of Wu et al. on how sensory channel selection affects pedestrian behavior [37], jointly indicating that notification modality and timing require careful consideration when designing pedestrian safety systems. Wang et al. developed a wearable device based on radar sensors [40] that provides 4–8 s of warning time through fuzzy evaluation and neural network algorithms, achieving 80% warning accuracy across 1000 scenarios and significantly improving the system’s adaptability and safety assurance under varied environmental conditions [40].
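The 4–8 s warning horizon reported for the radar-based device can be related to a simple time-to-collision calculation from range and closing speed. The sketch below is only an illustrative proxy: the cited system’s fuzzy evaluation and neural network stages are not reproduced, and the two-level grading is an assumption layered onto the reported horizon.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Time-to-collision estimate from radar range and closing speed."""
    if closing_speed_mps <= 0:          # target stationary or receding
        return float("inf")
    return range_m / closing_speed_mps

def warning_level(range_m, closing_speed_mps, urgent_s=4.0, advisory_s=8.0):
    """Grade the warning: 'urgent' under 4 s, 'advisory' under 8 s, else 'none'.
    Thresholds mirror the 4-8 s horizon reported for the reviewed system."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc <= urgent_s:
        return "urgent"
    if ttc <= advisory_s:
        return "advisory"
    return "none"
```

For example, a vehicle 100 m away closing at 12.5 m/s gives an 8 s time-to-collision, right at the outer edge of the reported warning window.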
Examining the evolution of pedestrian safety wearable technologies (Table 6 and Figure 9), the research focus has progressed from simple alerts through single sensory channels to intelligent warning systems with multisensory integration, aligning with the evolutionary trend from “pedestrian safety” toward “multimodal interaction” in the “#3 cyclist safety” cluster. These advanced solutions provide second-level warnings before pedestrian decision-making, dynamically allocate attention resources, and significantly reduce cognitive load, reflecting a critical shift from passive protection to active perception enhancement. Current research is advancing in two key directions: first, enhancing long-term wearability and user compliance through lightweight hardware design and personalized interaction mechanisms; second, validating both behavioral and physiological indicators in real-road environments or immersive mixed-reality settings, which resonates with the virtual reality application innovation methods demonstrated in the “#2 virtual reality” cluster, thereby establishing solid engineering foundations and human factors support for large-scale deployment of smart transportation systems.

4.2. Wearable Technologies for Two-Wheeler Users

Research on enhancing traffic safety for two-wheeler users through wearable technology encompasses 16 studies (Table 7). These studies address various types of two-wheeler users, including cyclists [41,42,43,44,45,46,47,48,49,50], scooter riders [51,52,53], and motorcyclists [54,55,56]. Existing research on wearable devices for two-wheeler users can be primarily categorized as smart helmets [41,44,45,46,47,51,54,55,56], smart glasses and augmented reality displays [49,50,53], smart clothing, such as jerseys and vests [43,48,50], wearable sensor systems [42], and specialized footwear [52].
With advancements in connected vehicle technology, research on cyclist safety systems has evolved from single-function to multi-function integration and from unimodal to multimodal interaction approaches. Matviienko et al. discovered that unimodal signals are suitable for directional cues, while multimodal signals are more effective for emergency situations, successfully reducing children’s reaction times and accident rates [41]. von Sawitzky et al. developed a helmet-mounted display system that warns against “dooring accidents” [44,46], proving that audiovisual cue combinations enhance cycling safety, with participants strongly preferring interactions incorporating auditory feedback [47]. Their subsequent research [49] emphasized that effective warning systems should be concise, intuitive, and display only critical hazard information. The systematic evaluation [50] of Ren et al. further validated this direction, finding that arrow visual cues were most efficient in unimodal interfaces, while visual-haptic combined feedback significantly improved user experience while reducing cognitive load.
To achieve these objectives, researchers have explored technical solutions for improving bicycle safety through two key dimensions: perception and interaction. In perception technology, Bieshaar et al. [42] and Bonilla et al. [43] focused on rider behavior recognition and physiological monitoring, respectively—the former achieving high-precision detection of starting movements through smart devices, while the latter developed an integrated solution combining physiological monitoring with light signaling systems. For interaction methods, the haptic feedback helmet [45] of Vo et al. demonstrated that head-based tactile feedback efficiently communicates directional and proximity information, while the smart vest of Mejia et al. [48] showcased the value of adaptive visual feedback in low-visibility conditions. These studies collectively indicate that cycling safety systems must flexibly select information modalities based on specific contexts—maintaining simplicity in ordinary situations while appropriately increasing multimodal redundancy in high-risk scenarios—while carefully avoiding information overload to ensure balanced attention allocation for cyclists. Overall, cycling safety research demonstrates a progression from purely technical validation toward user experience optimization, with research priorities gradually shifting from performance metrics to usability and user acceptance in real-world cycling environments, reflecting the field’s transition into a more mature application phase.
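The context-dependent modality choice described above can be caricatured as a small decision rule. The sketch below is a hypothetical heuristic distilled from the reviewed findings; the channel names, noise threshold, and hazard levels are illustrative assumptions rather than parameters from any cited system.

```python
def select_modalities(hazard_level, ambient_noise_db=60.0):
    """Pick feedback channels for a cyclist warning, following the reviewed
    heuristic: keep ordinary cues unimodal, add redundancy in emergencies,
    and drop unreliable channels (illustrative thresholds only)."""
    if hazard_level == "emergency":
        channels = ["visual", "auditory", "haptic"]   # multimodal redundancy
    else:
        channels = ["visual"]                          # simple directional cue
    if ambient_noise_db > 85.0 and "auditory" in channels:
        channels.remove("auditory")                    # masked by loud traffic
    return channels
```

The point of such a rule is exactly the balance noted above: redundancy is reserved for high-risk moments so that routine cues do not overload the rider’s attention.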
With the proliferation of electric micro-mobility, electric scooter safety technology has emerged as a research focus, with solutions incorporating features from both bicycle and motorcycle safety systems. Hung et al. designed a helmet system specifically for elderly riders that incorporates ultrasonic sensing technology [51], precisely detecting surrounding obstacles and providing visual warnings to prompt appropriate speed reduction in hazardous situations. Gupta et al. developed innovative pressure-sensitive insoles [52] that monitor rider balance in real time and provide intelligent path assistance, particularly beneficial for riders with varying proficiency levels navigating complex terrain. Matviienko et al. compared different warning mechanisms through an augmented reality environment [53], with the results indicating that combined auditory and AR visual warnings provide faster hazard reaction times and enhanced safety experiences, establishing a direction for future electric scooter safety technology development. The research progression reveals a gradual shift from simple sensing to multidimensional interactive warnings in electric scooter safety technology, with future advancements promising more intuitive and efficient safety systems through deep integration of sensing technologies and augmented reality.
Similar safety enhancement principles have been applied to motorcycle applications, though with greater emphasis on rider state monitoring and proactive hazard detection. Mohd Rasli et al. developed a system using pressure sensors to verify proper helmet wearing as a prerequisite for engine ignition [54] while incorporating speed warning functionality. Muthiah et al. advanced helmet intelligence by ingeniously utilizing three-axis accelerometers to sense head movements for controlling headlight direction [55], while incorporating drowsiness detection that automatically triggers alerts when riders show signs of fatigue. The system of Chang et al. further integrated license plate recognition technology, using infrared transceivers and image processing to identify approaching large vehicles in real time with 75% daytime and 70% nighttime recognition accuracy [56], providing timely voice warnings to help riders address blind spot hazards. Research developments indicate that motorcycle safety technology is rapidly evolving toward multisensor fusion and intelligent warning systems, with the focus expanding from purely passive safety constraints to comprehensive safety systems that include rider physiological state monitoring and active environmental hazard identification.
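Head-movement sensing of the kind used in the helmet of Muthiah et al. typically derives pitch and roll from the gravity components of a three-axis accelerometer at rest. The sketch below shows that generic computation; the axis convention (x forward, y left, z up) is an assumption, and the cited system’s actual algorithm is not described in detail in the source.

```python
from math import atan2, degrees, sqrt

def head_tilt_deg(ax, ay, az):
    """Pitch and roll (degrees) from a 3-axis accelerometer reading in g.
    Assumes the helmet is quasi-static so gravity dominates the signal."""
    pitch = degrees(atan2(-ax, sqrt(ay * ay + az * az)))  # nod forward/back
    roll = degrees(atan2(ay, az))                          # lean left/right
    return pitch, roll
```

A steady reading of (0, 0, 1) g gives zero tilt, while a forward pitch shifts gravity onto the x axis; a headlight controller could then steer proportionally to the roll angle.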
Examining the evolution of wearable safety technologies for two-wheeler users (Table 7 and Figure 10), the research focus has progressed from basic alerts with single functionalities to intelligent systems integrating multiple sensing and feedback capabilities, perfectly aligning with the rapid evolution of cycling safety research toward intelligent systems, as presented in the “#3 cyclist safety” cluster. These innovative solutions flexibly select information modalities according to specific scenarios—maintaining simplicity and intuitiveness in ordinary situations while appropriately increasing feedback redundancy in high-risk contexts—effectively balancing safety assurance with cognitive load. Current research is advancing in two key directions: first, shifting from purely technical validation toward greater emphasis on user experience optimization, focusing on usability and acceptance in practical application environments, reflecting the integration of technology from basic “cyclist safety” research toward “head-mounted display” and “connected vehicle communication”; second, through the integration of sensing technologies with intelligent algorithms, achieving a transformation from passive safety constraints to active hazard identification and warning, echoing the emerging research priorities of “attention guidance” and “multimodal interaction,” providing riders with more intuitive and efficient safety protection systems.

4.3. Wearable Assistive Devices for Visually Impaired Users

Visual impairment presents numerous challenges in daily activities, particularly in crossing streets, independent navigation, and obstacle avoidance. With the advancement of wearable technology, researchers have developed various assistive devices to enhance mobility safety and independence for visually impaired individuals. The literature review identifies 31 studies (Table 8 and Table 9), which can be categorized into smart glasses and head-mounted systems [6,29,57,58,59,60], tactile feedback and wearable vibration systems [8,61,62,63,64,65,66,67], chest/neck-mounted systems [5,68,69,70,71], multimodal sensing systems [72,73,74,75,76,77,78], and wearable computers with voice interfaces [79,80,81,82,83].
In the early development of assistive technologies for the visually impaired, wearable computer systems played a pioneering role. Ross et al. evaluated three interfaces—virtual sound beacons, digitized speech, and tactile tapping—in street crossing scenarios [61], finding that the combination of tactile cues and voice output most effectively helped visually impaired pedestrians cross streets safely. In their subsequent research, they optimized this system [79], confirming that shoulder tactile feedback performed best in body orientation mode, significantly reducing users’ deviation when crossing streets and establishing the theoretical foundation for multimodal interaction. Meanwhile, Helal et al. developed a navigation system capable of calculating optimal routes based on user preferences [80], providing real-time guidance through voice prompts, significantly enhancing the independent mobility of visually impaired individuals in complex campus environments. To address the challenge of recognizing navigation instructions in noisy environments, Wilson et al. innovatively employed non-speech sound beacons to replace traditional voice commands [72], effectively overcoming the limitations of voice instructions in noisy environments and improving system adaptability across various environmental conditions. However, regarding wearable technology applications, Tapu et al. conducted a comprehensive comparison of 12 different wearable technology solutions [70], indicating that no single system could fully meet all needs, and future development should focus on complementing rather than replacing the traditional white cane. Although these early systems pioneered the exploration of various feedback modes, they were limited by the processing capabilities and sensor technologies of the time, facing challenges such as large device size and short battery life in practical applications. 
These limitations prompted researchers to seek more lightweight solutions focused on specific needs, such as chest/neck-mounted systems for upper body protection.
Compared to computer-assisted systems, neck and chest-mounted devices offered an innovative approach, particularly addressing the issue of upper body obstacles that traditional white canes struggle to detect. Jameson et al. developed a head collision warning system using chest-mounted miniature ultrasonic sensors [68], capable of precisely detecting obstacles within a 1.5-m range, providing visually impaired individuals with sufficient reaction time. Based on a similar concept, Villamizar et al. designed a necklace-like ultrasonic system [69] that provided environmental awareness assistance through tactile feedback, effectively reducing upper body collisions during walking and significantly improving the safety of independent travel for visually impaired individuals. With advances in computer vision technology, the chest-mounted camera systems by Abobeah et al. [71] and Li et al. [75] not only detected obstacles but also identified traffic signals and pedestrian crossings, guiding users to safely cross intersections through voice instructions. Furthermore, the chest-mounted binocular camera system [5] of Asiedu Asante et al. further improved the accuracy and intelligence of obstacle detection, automatically prioritizing obstacles based on distance and movement status and providing users with more precise warning information. These designs focus on addressing the blind spots of white canes while not interfering with users’ ability to operate the white cane, showing a technological evolution from simple ultrasonic sensing to the integration of computer vision and intelligent processing. While these systems effectively addressed upper body protection issues, they still primarily relied on voice prompts for environmental information transmission, which had limitations in noisy environments, prompting researchers to explore alternative perception channels such as tactile feedback.
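Ultrasonic rangers like those in the chest- and neck-mounted systems above infer distance from an echo’s round-trip time. A minimal sketch, assuming the standard speed-of-sound conversion and using the 1.5 m alert range reported for the system of Jameson et al.:

```python
SPEED_OF_SOUND_MPS = 343.0  # dry air at ~20 degrees C

def echo_distance_m(echo_time_s):
    """Obstacle distance from an ultrasonic echo round-trip time."""
    return SPEED_OF_SOUND_MPS * echo_time_s / 2.0

def upper_body_alert(echo_time_s, threshold_m=1.5):
    """Warn when an obstacle falls inside the upper-body protection range."""
    return echo_distance_m(echo_time_s) <= threshold_m
```

An 8 ms echo corresponds to roughly 1.37 m and would trigger the alert, whereas a 10 ms echo (about 1.72 m) would not; real devices also filter out spurious echoes before alerting.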
In the development of sensory feedback technologies, tactile feedback has become an essential component of navigation systems for the visually impaired due to its intuitiveness and reliability in noisy environments. Adame et al. developed a belt system integrating mobile phone vibration motors into wearable devices [62], conveying direction and distance information through vibrations of varying positions and intensities, which was simple to use and required almost no special training. Similarly, based on waist-worn concepts, the smart belt of Hung et al. utilized different frequencies from vibration motors on the left and right sides [67] to effectively distinguish between guidance path directions and obstacle positions. In terms of hand-based tactile feedback, the wrist-worn ultrasonic sensor system of Petsiuk et al. [63] enabled users to successfully complete complex navigation tasks after brief training, while the smart glove systems by Bhattacharya et al. [65] and Shen et al. [66] transmitted more detailed spatial information by stimulating different fingers, with tests showing users could identify obstacle positions with high accuracy and successfully avoid collisions. The dual wristband system by Pundlik et al. provided three-directional collision warnings [64], not only significantly reducing the number of collisions but also improving the rate of correct navigation decisions when directional information was available. Regarding feedback mode selection, a comparative study by Ren et al. [8] found that although visual feedback (LED light strips) had certain advantages in decision-making efficiency, most visually impaired users still preferred the comfort and intuitiveness of tactile feedback (vibration vests). These systems are particularly suitable for noisy environments and scenarios requiring auditory perception of environmental sounds, providing visually impaired individuals with a supplementary perception channel beyond hearing. 
Research trends indicate that this field is moving toward more refined vibration encoding, expressing richer spatial relationship information through varied vibration patterns. Despite tactile feedback’s excellent performance in noisy environments, its limited information transmission capacity makes it difficult to express complex scene details, a limitation that has driven the development of more comprehensive head-mounted systems.
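The richer vibration encodings discussed above ultimately map spatial quantities onto motor commands. The toy sketch below illustrates one such encoding for a two-motor belt; the motor layout, 3 m range, and intensity mapping are illustrative assumptions, not taken from any cited device.

```python
def vibration_command(bearing_deg, distance_m, max_range_m=3.0):
    """Map an obstacle's bearing and distance to a (motor, intensity) pair
    for a hypothetical two-motor belt: motor chosen by bearing sign,
    intensity (0.0-1.0) rising as the obstacle gets closer."""
    motor = "left" if bearing_deg < 0 else "right"
    proximity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return motor, round(proximity, 2)
```

Finer-grained encodings of the kind the trend points toward would add more motors and modulate pulse patterns as well as intensity, trading simplicity for spatial resolution.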
Beyond tactile feedback systems, smart glasses and head-mounted systems have gradually become mainstream in assistive research for the visually impaired due to their convenient wear and natural field-of-view advantages, demonstrating a development trend from single feedback to multimodal integration. Mohamed Kassim et al. designed an electronic glasses system that provided obstacle warnings through single-ear headphones or wrist vibrators [57], enabling users to correctly identify obstacle positions in a short time and effectively enhancing environmental perception capabilities. In technical implementation, the glasses system of Kang et al. innovatively applied perspective projection geometry to measure grid shape changes [81], providing directional auditory feedback through Bluetooth headphones and achieving significant progress in the accuracy and processing speed of obstacle detection. Addressing the critical scenario of traffic safety, Cheng et al. developed specialized systems for crosswalk navigation [82] and intersection navigation [83], using head-mounted cameras and bone conduction headphones to identify crosswalks and traffic light status in complex environments, improving the crossing safety of visually impaired pedestrians. Son et al. conducted two in-depth studies based on this foundation [58,59], further enhancing the accuracy and fluency of visually impaired pedestrians in crosswalk navigation by optimizing traffic light recognition algorithms and navigation instruction systems. Recent research shows a trend toward multisensory integration, with Scalvini et al. combining 3D spatialized sound technology with helmet-mounted cameras [60], creating a more intuitive obstacle perception experience, and the smart glasses and smartphone combination system of Gao et al. integrated advanced computer vision and multisensor fusion technology, providing dual voice and tactile feedback and achieving a 100% obstacle avoidance rate in complex indoor and outdoor environment tests [6]. With the advantage of simulating natural visual perception processes, these systems are particularly suitable for complex street and traffic scenarios. Research trends indicate the field is moving toward multisensory integration and full-scenario adaptability, providing more intuitive and precise environmental understanding capabilities through the integration of advanced sensors and intelligent algorithms. However, even the most advanced single-type systems struggle to meet the needs of visually impaired users in all scenarios, a recognition that has prompted researchers to further explore multimodal systems that deeply integrate the advantages of different technologies.
From the perspective of overall technological evolution, multimodal systems embody the convergent development direction of assistive technologies for the visually impaired by integrating different types of sensors and feedback methods. Kumar et al. developed a system combining head-mounted cameras, ultrasonic sensors, and voice output [73], capable of providing comprehensive obstacle information and position navigation under various environmental conditions. The auditory display framework of Khan et al. achieved efficient obstacle recognition and scene understanding in outdoor environments through computer vision and multisensor data fusion [74]. Addressing the key challenge of crossing scenarios, Chang et al. designed an assistive system integrating smart glasses, waist-mounted devices, and smart canes [29], forming a complete information collection and feedback loop capable of real-time crosswalk recognition and guiding users to maintain the correct path. With the enhancement of mobile computing capabilities, Meliones et al. used smartphones as the core processing unit [76], working in conjunction with wearable cameras and servo motor-controlled ultrasonic sensors to achieve real-time obstacle detection and precise voice instruction generation. The AI-SenseVision system by Joshi et al. [78] provided more intelligent environmental perception capabilities through advanced object detection algorithms and ultrasonic sensing collaboration. In terms of sensor optimization, research by Flórez Berdasco et al. provided important design guidance for multimodal systems, finding that high-gain antennas performed better in medium-distance obstacle detection, while low-gain antennas were more effective in short-distance imaging applications [77], which is a finding with significant reference value for integrated systems that need to simultaneously satisfy different usage scenarios. 
By intelligently integrating multiple perception channels and feedback methods, these systems are particularly suitable for complex and variable urban environments and high-risk traffic scenarios. Research trends indicate that the field is evolving from simple technology superposition toward deep, intelligent integration, with the popularization of mobile computing platforms, such as smartphones, providing a solid foundation for practical applications.
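Multimodal systems like these commonly rank detected obstacles by distance and movement before deciding what to announce, as noted for the chest-mounted binocular system above. The sketch below is an illustrative inverse-time-to-collision scoring, not any cited system’s actual prioritization logic.

```python
def prioritize_obstacles(obstacles):
    """Rank detected obstacles so closer, approaching ones come first.
    Each obstacle is a (label, distance_m, closing_speed_mps) tuple;
    the score is a crude inverse-TTC proxy (illustrative only)."""
    def risk(ob):
        _, dist, closing = ob
        approach = max(0.0, closing)      # receding obstacles score zero
        return approach / max(dist, 0.1)  # guard against division by zero
    return sorted(obstacles, key=risk, reverse=True)
```

With inputs such as a static wall at 5 m, a car approaching at 10 m/s from 10 m, and a bicycle approaching at 1 m/s from 2 m, the car ranks first, which matches the intuition that fast-approaching hazards deserve the earliest announcement.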
Examining the evolution of wearable assistive technologies for visually impaired individuals (Table 8 and Table 9, and Figure 11), the research focus has progressed from early basic obstacle detection and simple path navigation to intelligent systems capable of processing complex street scenarios and precisely identifying traffic signals, highly consistent with the systematic evolution from basic research on “wearable technology” and “visually impaired” to “warning alerts” and “vehicle impact” in the “#0 street crossing” cluster. These technical solutions flexibly select perception modalities according to usage scenarios and environmental conditions—prioritizing tactile feedback in noisy environments while integrating auditory and multimodal feedback when complex information needs to be conveyed—effectively balancing information transmission with cognitive load. Current research is advancing in three key directions: first, shifting from technology-driven approaches toward deeper user-need orientation, designing differentiated solutions for totally blind, low-vision, and light-perception capable visually impaired groups, reflecting an evolution from basic assistance to precise prevention; second, through miniaturization design and intelligent algorithm optimization, improving device wearability and long-term use experience and reducing physical and cognitive burden on users; third, integrating wearable assistive devices with smart city infrastructure to enhance synergies among environmental perception, information processing, and user feedback, echoing the technological development path from “object detection” toward “artificial intelligence” and “active system” in the “#4 blind navigation” cluster and providing more comprehensive technical support for independent and safe mobility of visually impaired individuals.

5. Discussion

This study employed CiteSpace to conduct a bibliometric analysis of wearable technology research for VRU safety. To address RQ2 and RQ3, we have integrated our previous analyses of specific VRUs (pedestrians, two-wheeler users, and visually impaired pedestrians) to provide a more comprehensive understanding of the temporal evolution of research themes and technological focus areas for VRUs as a whole. This integrated perspective enables us to more effectively identify emerging patterns and potential future research directions, thereby contributing to the advancement of knowledge in this critical field.

5.1. Evolution of Research Themes and Technological Focus in VRU Wearable Safety Solutions

Our systematic review reveals distinct evolutionary patterns in VRU wearable safety technology research over the past 25 years (RQ2), characterized by significant shifts in both research focus and technological approaches. These shifts represent not merely incremental technical advancements but a fundamental transformation from technology-driven to human-centered research paradigms, marking the field’s maturation toward more holistic and contextually aware safety solutions.

5.1.1. Research Expansion from Specific User Groups to Diverse VRU Categories

Early research on VRU safety wearable technology primarily focused on the basic mobility assistance needs of visually impaired individuals, as reflected in the formation of the “#0 street crossing” cluster. This strategic choice not only addressed the most urgent safety needs but also fully utilized the technological conditions available at that time [61,79,80]. With technological advancements and the release of the WHO road safety guidelines [27], the research vision began to strategically expand to encompass a broader range of VRU groups, including motorcyclists [54], scooter users [51], pedestrians [30], and cyclists [41]. This evolution reflects a fundamental shift in road safety research philosophy—from specialized solutions targeting specific populations toward the development of inclusive traffic safety systems, demonstrating an enhanced global awareness of VRU safety challenges. Nevertheless, this apparent progress toward inclusivity reveals research gaps, particularly the limited representation of wheelchair users and hearing-impaired individuals in the identified clusters. This uneven research coverage highlights the ongoing challenges in achieving truly comprehensive VRU safety solutions. Future research should prioritize these under-represented VRU groups to ensure equitable safety technology development.

5.1.2. The Paradigm Shift from Passive Sensing to Active Prediction

The functional paradigm of VRU safety wearable technology has undergone a fundamental transformation from passive sensing to active prediction, an evolution clearly demonstrated in the progression from the “#1 obstacle avoidance” to the “#3 cyclist safety” clusters. Research has passed through three distinct phases. The early passive detection phase was represented by the basic obstacle recognition systems of Ross et al. [61] and Helal et al. [80]. In the intermediate technical optimization phase, systems such as those of Jameson et al. [68] and Villamizar et al. [69] improved sensing precision but remained limited to reactive responses. The critical turning point (2017–2020) came when research began integrating contextual awareness and predictive capabilities: Matviienko et al. [41] achieved differentiated perception of dangerous scenarios, while the 4–8 s warning window of Wang et al. [40] marked a key breakthrough from reactive response to proactive prediction. This paradigm shift has expanded the safety protection model from post-event response to pre-event prevention, redefining the role of wearable technology in VRU safety. Recent systematic reviews in related domains consistently validate this technological evolution pattern, demonstrating parallel transformations from basic reactive safety equipment to intelligent predictive systems across pedestrian safety IoT frameworks [84], cyclist safety technologies [85], and visually impaired navigation devices [86].
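A warning window such as the 4–8 s interval reported by Wang et al. [40] presupposes some estimate of time to collision. The sketch below is our own constant-velocity simplification for illustration, not the cited system’s algorithm; the function names and warning labels are assumptions.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Constant-velocity time-to-collision estimate; inf if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps


def warning_level(ttc_s: float, window: tuple = (4.0, 8.0)) -> str:
    """Map a TTC estimate onto a proactive warning window (4-8 s per the
    text). The three labels are illustrative, not the cited system's."""
    low, high = window
    if ttc_s < low:
        return "urgent"    # inside the window's lower bound: react now
    if ttc_s <= high:
        return "advance"   # predictive zone: warn before danger is imminent
    return "none"
```

For example, a vehicle 60 m away closing at 10 m/s yields a 6 s TTC, which falls inside the advance-warning zone, whereas 20 m at the same speed is already urgent. The distinction between the two outcomes is precisely the reactive-to-predictive shift discussed above.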

5.1.3. Evolution from Standalone Devices to Integrated Ecosystem Approaches

The application pattern of VRU safety wearable technology has evolved from stand-alone units to interconnected ecosystems, a trend fully reflected in the “#3 cyclist safety” cluster. This evolution stems from the pursuit of comprehensive environmental perception: even the most advanced single device has its sensing range and accuracy constrained by physical conditions. Early research, such as the systems by Ross et al. [61] and Helal et al. [80], mainly operated as stand-alone units, with perception and feedback limited to the device’s own capabilities. As technology developed, research gradually shifted toward device interconnection and environmental information integration. Von Sawitzky et al. [44] first introduced V2X communication into cycling safety, enabling wearable devices to receive information beyond the user’s perception range. Bluetooth beacon systems by Baek et al. [33] and Kim et al. [34] further confirmed the safety value of wearable devices collaborating with environmental infrastructure. This collaborative evolution reached new heights in multi-component integrated systems. Chang et al. [29] constructed a complete information collection and feedback loop, while the IoT architecture of Kumar et al. [73] and the smartphone-centered solution of Meliones et al. [76] represented a shift toward more open, interoperable systems. This technological evolution path from device-centered to ecosystem integration clearly demonstrates the maturity and systematization of VRU safety wearable technology research methods, reflecting the field’s deepening understanding of safety challenges in complex traffic environments and the diversification of response strategies.

5.2. Emerging Trends and Future Directions in Wearable Technology for VRU Safety

Our systematic review and scientific knowledge mapping reveal the future development trends of VRU safety wearable technology research (RQ3), characterized by a dual transformation from single-device intelligence to ecosystem collaboration and from static feedback to adaptive interaction. This evolution is reflected in the temporal changes of keyword co-occurrence networks, marking a paradigm shift from exploring technical feasibility to system integration and user experience optimization. We identify two core trends that will dominate future research: the deep integration of wearable technology with intelligent transportation infrastructure and the development of multisensory interaction with context-adaptive feedback.

5.2.1. Wearable Technology Integration with Connected Vehicles and Smart Infrastructure

The collaborative fusion of wearable devices with connected vehicles and smart infrastructure will emerge as the foremost development trend in VRU safety technology, a trajectory already evident in the evolution of the “#3 cyclist safety” and “#5 smart glasses” clusters. Future research will investigate how to position wearable technology as a central access point, deeply integrating with smart city infrastructure and V2X technology to construct a user-centered, multi-layered safety protection network. Existing studies such as those by Chang et al. [29] and von Sawitzky et al. [47] have preliminarily verified the feasibility of this integration, demonstrating how wearable devices can overcome their inherent perceptual limitations through information sharing. In the future, with the proliferation of 5G/6G communication technologies and enhanced edge computing capabilities, wearable devices will transform from current isolated sensing systems into critical nexus points connecting users and environments within intelligent transportation ecosystems, enabling pre-emptive hazard prediction and collaborative avoidance.
For future ecosystem integration research, we recommend focusing on lightweight vehicle-to-pedestrian communication protocols specifically optimized for the power and computational constraints of wearable devices [87,88], which would allow users to receive danger warnings from vehicles beyond their line of sight and fundamentally expand the safety warning range. We also recommend researching real-time information exchange standards between wearable devices and traffic infrastructure to enhance user perception in high-risk areas. These research efforts will accelerate the transformation of wearable technology from independent protective devices into organic components of intelligent transportation systems, providing more comprehensive and proactive safety protection for VRUs.
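To make the “lightweight” constraint concrete, a V2P alert might be packed into a few bytes so that a low-power wearable radio can receive and decode it cheaply. The fixed-point payload below is a hypothetical sketch, not a standardized V2X message format; all field sizes and units are our own assumptions.

```python
import struct

# Hypothetical compact V2P alert: message type (1 B), severity (1 B),
# bearing in centidegrees (2 B), distance in decimeters (2 B),
# time-to-collision in tenths of a second (1 B) -- 7 bytes total.
# Network byte order ("!") avoids platform-dependent padding.
V2P_FORMAT = "!BBHHB"

def encode_alert(msg_type: int, severity: int,
                 bearing_deg: float, distance_m: float, ttc_s: float) -> bytes:
    """Pack an alert into a 7-byte payload using fixed-point fields."""
    return struct.pack(V2P_FORMAT,
                       msg_type,
                       severity,
                       int(bearing_deg * 100) % 36000,      # wrap to [0, 360)
                       min(int(distance_m * 10), 0xFFFF),   # clamp to field width
                       min(int(ttc_s * 10), 0xFF))

def decode_alert(payload: bytes) -> dict:
    """Unpack the payload back into engineering units."""
    t, sev, brg, dist, ttc = struct.unpack(V2P_FORMAT, payload)
    return {"type": t, "severity": sev,
            "bearing_deg": brg / 100, "distance_m": dist / 10, "ttc_s": ttc / 10}
```

At 7 bytes per alert, even a low-duty-cycle Bluetooth or 802.11p link could broadcast warnings at tens of hertz; the design choice illustrated here is trading floating-point precision for payload size, which is what a wearable-oriented protocol would optimize.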

5.2.2. Developing Adaptive Multisensory Interfaces for Enhanced User Experience

Multisensory channel integration and context-adaptive interaction will reshape the user experience of VRU safety technology, a trend that has clearly emerged in the evolution from the “#2 virtual reality” to the “#3 cyclist safety” clusters. The literature analysis reveals that future designs will transcend single warning modes, shifting toward dynamic adaptive systems capable of sensing environmental conditions, user states, and risk levels. The research of Wu et al. [37] on cross-sensory channel warnings and distraction-matching effects, together with the evaluation of visual-tactile fusion systems by Ren et al. [50], has confirmed the significant value of multichannel information integration in reducing cognitive load.
With advances in artificial intelligence technology, wearable safety systems will evolve from simple feedback modes to intelligent interaction platforms, achieving the precise transmission of danger-related information and efficient allocation of attention resources. For this user-centered design field, we recommend focusing on developing cognitive enhancement algorithms that intelligently adjust sensory channel combinations according to context and researching information encoding strategies adapted to the perceptual characteristics of different user groups. Such research will promote the transformation of wearable safety technology from generic warning devices to personalized cognitive assistance systems, creating safety experiences that meet the unique needs of various VRU groups and significantly improving the acceptability and effectiveness of safety technology.
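The proposed combination of risk-dependent channel selection with per-user perceptual profiles can be sketched as a small dispatcher. The risk labels, profile fields, and fallback rule below are illustrative assumptions, not an established taxonomy from the reviewed literature.

```python
# Channel sets escalate with risk: higher risk recruits more senses.
RISK_CHANNELS = {
    "low":    ["visual"],
    "medium": ["visual", "auditory"],
    "high":   ["visual", "auditory", "tactile"],
}

def adapt_warning(risk: str, profile: dict) -> list[str]:
    """Start from the risk-appropriate channel set, then drop channels the
    user cannot perceive (profile maps channel name -> usable?), keeping
    tactile as a guaranteed fallback so no warning is silently lost."""
    channels = [c for c in RISK_CHANNELS[risk]
                if profile.get(c, True)]   # missing entry = channel usable
    return channels or ["tactile"]
```

For a hearing-impaired cyclist (`{"auditory": False}`), a high-risk event would fire visual and tactile channels only; the same event for an unimpaired user adds audio. This is the kind of per-group information-encoding strategy the paragraph above calls for.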

5.3. Limitations and Recommendations

Although our systematic review successfully analyzed research hotspot evolution and future trends, several limitations must be acknowledged. Primarily, our exclusive reliance on the WoS database restricts the comprehensiveness of our findings. Future studies would benefit from expanding data collection to include Scopus, Google Scholar, and the relevant gray literature, so that these additional publications complement our results while broadening analytical perspectives, particularly in specialized technical domains. However, future researchers should carefully adapt search strategies when employing multi-database approaches, as syntax requirements vary across platforms. Additionally, our sole dependence on CiteSpace (version 6.4.R1 Advanced) introduces methodological constraints, as the findings are inherently shaped by this tool’s specific algorithms and functionalities. More robust analyses could emerge from incorporating alternative bibliometric tools or newer versions in subsequent research, potentially yielding more nuanced insights into the field’s development trajectory and emerging research directions. Furthermore, future researchers could consider adopting shorter time windows (e.g., 10-year periods) to conduct more focused temporal analyses, which may reveal more granular patterns in research evolution and provide complementary insights to our comprehensive 25-year overview.
Additionally, our analysis reveals that current research predominantly focuses on three main VRU categories: pedestrians, two-wheeler users (including cyclists, motorcyclists, and scooter users), and visually impaired pedestrians. While these groups represent the most extensively studied populations in the existing literature, with some studies including age-diverse participants, the broader VRU community encompasses additional vulnerable groups that remain underexplored, including wheelchair users and individuals with hearing impairments or other sensory disabilities. Future research should expand the scope of VRU safety studies to investigate the specific safety technology needs of these under-represented groups, developing inclusive safety solutions that address the unique challenges faced by all VRU categories. Furthermore, subsequent studies should investigate the integration of advanced hardware technologies and their impact on user adoption and develop standardized experimental protocols and reporting guidelines to enhance reproducibility across different types of wearable safety technology studies, as these represent valuable research directions for future comprehensive reviews. This expanded focus would contribute to a more comprehensive understanding of VRU safety requirements and enable the development of universally accessible safety technologies.

6. Conclusions

This systematic review comprehensively explores the research landscape and development prospects of wearable technology for VRU safety, systematically examining its research progress, knowledge structure, research hotspots, and future trends. Through a structured retrieval strategy based on the PICo framework and a screening methodology following the PRISMA process, we ultimately included 58 relevant publications for in-depth analysis. Using the CiteSpace visualization tool, we conducted systematic analyses of publication trends, international research collaboration networks, core journal distribution, keyword co-occurrence and emergence patterns, knowledge cluster structures, and temporal evolution pathways.
The intellectual structure of wearable technology for VRU safety is organized around six distinct knowledge clusters that collectively illustrate the field’s development, demonstrating a significant paradigm shift from passive sensing to active prediction systems serving increasingly diverse VRU populations. The most notable transformation has been the evolution from isolated devices providing simple obstacle warnings to integrated ecosystems capable of predicting potential accidents through multimodal feedback channels. Another significant contribution of this review is the proposition of two specific directions for future research, guiding researchers dedicated to related disciplines.

Author Contributions

Conceptualization, G.R., Z.H. and G.W.; methodology, G.R. and Z.H.; software, G.R. and Z.H.; validation, T.H. and J.H.L.; formal analysis, G.R. and Z.H.; investigation, G.R. and Z.H.; resources, G.W.; data curation, G.R. and Z.H.; writing—original draft preparation, G.R. and Z.H.; writing—review and editing, G.R., J.H.L. and G.W.; visualization, Z.H.; supervision, G.W.; project administration, G.R. and G.W.; funding acquisition, G.W. and J.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Korea Institute of Police Technology (KIPoT; Police Lab 2.0 program) grant funded by MSIT (RS-2023-00281194); the research grant (2024-0035) funded by HAII Corporation; the Fujian Province Social Science Foundation Project (No. FJ2025MGCA042); the 2024 Fujian Province Lifelong Education Quality Improvement Project (No. ZS24005); the Fujian Province Social Science Youth Foundation Project (No. FJ2025C139); the Xiamen University of Technology High-level Talent Research Project (No. YSK24016R); the Education and Teaching Research Project of Xiamen University of Technology (No. JYCG202448); and the Virtual Reality System for Printing Materials and Technologies (2023CXY0425).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are contained within the manuscript. Raw data are available from the corresponding author upon request.

Acknowledgments

We appreciate the support by the funding agencies listed in the Funding section.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AR: Augmented Reality
HMD: Head-Mounted Display
IoT: Internet of Things
V2X: Vehicle-to-Everything
VR: Virtual Reality
VRU: Vulnerable Road User
WHO: World Health Organization
WoS: Web of Science

References

  1. Khayesi, M. Vulnerable Road Users or Vulnerable Transport Planning? Front. Sustain. Cities 2020, 2, 25. [Google Scholar] [CrossRef]
  2. World Health Organization. Global Status Report on Road Safety 2023; World Health Organization: Geneva, Switzerland, 2023; p. ix. 81p. [Google Scholar]
  3. Reyes-Muñoz, A.; Guerrero-Ibáñez, J. Vulnerable Road Users and Connected Autonomous Vehicles Interaction: A Survey. Sensors 2022, 22, 4614. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, X.; Angulo, A.; Tavakoli, A.; Robartes, E.; Chen, T.D.; Heydarian, A. Rethinking Infrastructure Design: Evaluating Pedestrians and VRUs’ Psychophysiological and Behavioral Responses to Different Roadway Designs. Sci. Rep. 2023, 13, 4278. [Google Scholar] [CrossRef]
  5. Asiedu Asante, B.K.; Imamura, H. Towards Robust Obstacle Avoidance for the Visually Impaired Person Using Stereo Cameras. Technologies 2023, 11, 168. [Google Scholar] [CrossRef]
  6. Gao, Y.; Wu, D.; Song, J.; Zhang, X.; Hou, B.; Liu, H.; Liao, J.; Zhou, L. A Wearable Obstacle Avoidance Device for Visually Impaired Individuals with Cross-Modal Learning. Nat. Commun. 2025, 16, 2857. [Google Scholar] [CrossRef]
  7. Contreras-Castillo, J.; Zeadally, S.; Guerrero-Ibañez, J.; Santana-Mancilla, P.C.; Katib, I. Enabling Safe Co-Existence of Connected/Autonomous Cars and Road Users Using Machine Learning and Deep Learning Algorithms. Trans. Emerg. Telecommun. Technol. 2025, 36, e70103. [Google Scholar] [CrossRef]
  8. Ren, G.; Huang, Z.; Lin, W.; Huang, T.; Wang, G.; Lee, J.H. Enhancing Street-Crossing Safety for Visually Impaired Pedestrians with Haptic and Visual Feedback. Appl. Sci. 2025, 15, 3942. [Google Scholar] [CrossRef]
  9. Lian, Y.; Liu, D.E.; Ji, W.Z. Survey and Analysis of the Current Status of Research in the Field of Outdoor Navigation for the Blind. Disabil. Rehabil. Assist. Technol. 2024, 19, 1657–1675. [Google Scholar] [CrossRef]
  10. Casanova, E.; Guffanti, D.; Hidalgo, L. Technological Advancements in Human Navigation for the Visually Impaired: A Systematic Review. Sensors 2025, 25, 2213. [Google Scholar] [CrossRef]
  11. Chen, H.; Mao, Y.; Xu, Y.; Wang, R. The Impact of Wearable Devices on the Construction Safety of Building Workers: A Systematic Review. Sustainability 2023, 15, 11165. [Google Scholar] [CrossRef]
  12. Zhang, X.; Huang, X.; Ding, Y.; Long, L.; Li, W.; Xu, X. Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review. Sensors 2024, 24, 7986. [Google Scholar] [CrossRef] [PubMed]
  13. Tang, Y.; Zhang, N.; Liu, S. A Bibliometric Analysis of Publications on the Ethical Considerations of Sex Robots (2003–2022). Humanit. Soc. Sci. Commun. 2025, 12, 188. [Google Scholar] [CrossRef]
  14. Page, M.J.; Moher, D.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. PRISMA 2020 Explanation and Elaboration: Updated Guidance and Exemplars for Reporting Systematic Reviews. BMJ 2021, 372, n160. [Google Scholar] [CrossRef]
  15. Chen, C. CiteSpace II: Detecting and Visualizing Emerging Trends and Transient Patterns in Scientific Literature. J. Am. Soc. Inf. Sci. Technol. 2006, 57, 359–377. [Google Scholar] [CrossRef]
  16. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.A.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. PLoS Med. 2009, 6, e1000100. [Google Scholar] [CrossRef]
  17. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R Package and Shiny App for Producing PRISMA 2020-Compliant Flow Diagrams, with Interactivity for Optimised Digital Transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef] [PubMed]
  18. van Leeuwen, T. The Application of Bibliometric Analyses in the Evaluation of Social Science Research. Who Benefits from It, and Why It Is Still Feasible. Scientometrics 2006, 66, 133–154. [Google Scholar] [CrossRef]
  19. Cooper, C.; Booth, A.; Varley-Campbell, J.; Britten, N.; Garside, R. Defining the Process to Literature Searching in Systematic Reviews: A Literature Review of Guidance and Supporting Studies. BMC Med. Res. Methodol. 2018, 18, 85. [Google Scholar] [CrossRef]
  20. Sayers, A. Tips and Tricks in Performing a Systematic Review. Br. J. Gen. Pract. 2008, 58, 136. [Google Scholar] [CrossRef]
  21. Stern, C.; Jordan, Z.; McArthur, A. Developing the Review Question and Inclusion Criteria. AJN Am. J. Nurs. 2014, 114, 53–56. [Google Scholar] [CrossRef]
  22. Zhang, B. Research Progress and Intellectual Structure of Design for Digital Equity (DDE): A Bibliometric Analysis Based on Citespace. Humanit. Soc. Sci. Commun. 2024, 11, 1019. [Google Scholar] [CrossRef]
  23. Lagisz, M.; Yang, Y.; Young, S.; Nakagawa, S. A Practical Guide to Evaluating Sensitivity of Literature Search Strings for Systematic Reviews Using Relative Recall. Res. Synth. Methods 2025, 16, 1–14. [Google Scholar] [CrossRef]
  24. Chen, C.; Song, I.Y.; Yuan, X.; Zhang, J. The Thematic and Citation Landscape of Data and Knowledge Engineering (1985–2007). Data Knowl. Eng. 2008, 67, 234–259. [Google Scholar] [CrossRef]
  25. Gong, X.; Yee, C.L.; Lee, S.Y.; Cao, E.Y.; Saif, A.N.M. Knowledge Mapping of Impulsive Buying Behavior Research: A Visual Analysis Using CiteSpace. Humanit. Soc. Sci. Commun. 2024, 11, 967. [Google Scholar] [CrossRef]
  26. Gao, R.; Mu, B.; Lyu, S.; Wang, H.; Yi, C. Review of the Application of Wearable Devices in Construction Safety: A Bibliometric Analysis from 2005 to 2021. Buildings 2022, 12, 344. [Google Scholar] [CrossRef]
  27. World Health Organization. Powered Two- and Three-Wheeler Safety: A Road Safety Manual for Decision-Makers and Practitioners; World Health Organization: Geneva, Switzerland, 2017. [Google Scholar]
  28. Yannis, G.; Nikolaou, D.; Laiou, A.; Stürmer, Y.A.; Buttler, I.; Jankowska-Karpa, D. Vulnerable Road Users: Cross-Cultural Perspectives on Performance and Attitudes. IATSS Res. 2020, 44, 220–229. [Google Scholar] [CrossRef]
  29. Chang, W.J.; Chen, L.B.; Sie, C.Y.; Yang, C.H. An Artificial Intelligence Edge Computing-Based Assistive System for Visually Impaired Pedestrian Safety at Zebra Crossings. IEEE Trans. Consum. Electron. 2021, 67, 3–11. [Google Scholar] [CrossRef]
  30. Cœugnet, S.; Dommes, A.; Panëels, S.; Chevalier, A.; Vienne, F.; Dang, N.T.; Anastassova, M. A Vibrotactile Wristband to Help Older Pedestrians Make Safer Street-Crossing Decisions. Accid. Anal. Prev. 2017, 109, 1–9. [Google Scholar] [CrossRef]
  31. Dommes, A. Helping Older Pedestrians Navigate in the City: Comparisons of Visual, Auditory and Haptic Guidance Instructions in a Virtual Environment. Behav. Inf. Technol. 2018, 38, 150–171. [Google Scholar]
  32. Stafford, J.; Rodger, M. Educating Older Adults’ Attention towards and Away from Gap-Specifying Information in a Virtual Road-Crossing Task. Ecol. Psychol. 2021, 33, 31–56. [Google Scholar] [CrossRef]
  33. Baek, J.; Choi, Y. Smart Glasses-Based Personnel Proximity Warning System for Improving Pedestrian Safety in Construction and Mining Sites. Int. J. Environ. Res. Public Health 2020, 17, 1422. [Google Scholar] [CrossRef] [PubMed]
  34. Kim, Y.; Baek, J.; Choi, Y. Smart Helmet-Based Personnel Proximity Warning System for Improving Underground Mine Safety. Appl. Sci. 2021, 11, 4342. [Google Scholar] [CrossRef]
  35. Ito, M.; Tsubouchi, K.; Nishio, N.; Shimosaka, M.; Taya, A.; Sezaki, K.; Nishiyama, Y. Investigating Acceptable Voice-Based Notification Timings through Earable Devices: A Preliminary Field Study. In Proceedings of the Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp’24, Melbourne, VIC, Australia, 5–9 October 2024; pp. 30–34. [Google Scholar] [CrossRef]
  36. Tong, Y.; Jia, B.; Bao, S. An Augmented Warning System for Pedestrians: User Interface Design and Algorithm Development. Appl. Sci. 2021, 11, 7197. [Google Scholar] [CrossRef]
  37. Wu, R.; Chen, H.T. The Effect of Visual and Auditory Modality Mismatching between Distraction and Warning on Pedestrian Street Crossing Behavior. In Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Sydney, NSW, Australia, 16–20 October 2023; pp. 1045–1054. [Google Scholar] [CrossRef]
  38. Tran, T.T.M.; Parker, C.; Hoggenmüller, M.; Hespanhol, L.; Tomitsch, M. Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces. Multimodal Technol. Interact. 2023, 7, 21. [Google Scholar] [CrossRef]
  39. Clérigo, A.; Schrapel, M.; Teixeira, P.; Rito, P.; Sargento, S.; Vinel, A. SafeARCross: Augmented Reality Collision Warnings and Virtual Traffic Lights for Pedestrian Safety. In Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI’24, Stanford, CA, USA, 22–25 September 2024; pp. 63–73. [Google Scholar] [CrossRef]
  40. Wang, Z.; Wan, Q.; Qin, Y.; Fan, S.; Xiao, Z. Intelligent Algorithm in a Smart Wearable Device for Predicting and Alerting in the Danger of Vehicle Collision. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 3841–3852. [Google Scholar] [CrossRef]
  41. Matviienko, A.; Ananthanarayan, S.; Borojeni, S.S.; Feld, Y.; Heuten, W.; Boll, S. Augmenting Bicycles and Helmets with Multimodal Warnings for Children. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI’18, Barcelona, Spain, 3–6 September 2018; pp. 1–13. [Google Scholar] [CrossRef]
  42. Bieshaar, M.; Depping, M.; Schneegans, J.; Sick, B. Starting Movement Detection of Cyclists Using Smart Devices. arXiv 2018, arXiv:1808.04449. [Google Scholar]
  43. Bonilla, M.; Córdova, P.; Jiménez, D.; Robayo, A. Monitoring System of Physiological Signals with Emergencies Management for the Road Safety of Cyclists. In Proceedings of the 2019 Sixth International Conference on Edemocracy & Egovernment (ICEDEG), Quito, Ecuador, 24–26 April 2019; pp. 274–279. [Google Scholar] [CrossRef]
  44. von Sawitzky, T.; Grauschopf, T.; Riener, A. No Need to Slow down! A Head-up Display Based Warning System for Cyclists for Safe Passage of Parked Vehicles. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI’20, Virtual, 21–22 September 2020; pp. 1–3. [Google Scholar] [CrossRef]
  45. Vo, D.B.; Saari, J.; Brewster, S. TactiHelm: Tactile Feedback in a Cycling Helmet for Collision Avoidance. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA’21, Yokohama, Japan, 8–13 May 2021; pp. 1–5. [Google Scholar] [CrossRef]
  46. von Sawitzky, T.; Grauschopf, T.; Riener, A. “Attention! A Door Could Open.”—Introducing Awareness Messages for Cyclists to Safely Evade Potential Hazards. Multimodal Technol. Interact. 2022, 6, 3. [Google Scholar] [CrossRef]
  47. von Sawitzky, T.; Grauschopf, T.; Riener, A. Hazard Notifications for Cyclists: Comparison of Awareness Message Modalities in a Mixed Reality Study. In Proceedings of the 27th International Conference on Intelligent User Interfaces, IUI’22, Helsinki, Finland, 22–25 March 2022; pp. 310–322. [Google Scholar] [CrossRef]
  48. Mejia, D.; Gomez, S.; Martinez, F. A Low-Cost Wearable Autonomous System for the Protection of Bicycle Users. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 2023, 14, 960–967. [Google Scholar] [CrossRef]
  49. von Sawitzky, T.; Wintersberger, P.; Grauschopf, T.; Riener, A. How to Indicate Multiple Potential Hazards to Cyclists on Smart Glasses? Findings from a Focus Group Discussion with Research Experts and UX Students. In Proceedings of the Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI’24 Adjunct, Stanford, CA, USA, 22–25 September 2024; pp. 178–183. [Google Scholar] [CrossRef]
  50. Ren, G.; Huang, Z.; Lin, W.; Miao, N.; Huang, T.; Wang, G.; Lee, J.H. Multimodal Guidance for Enhancing Cyclist Road Awareness. Electronics 2025, 14, 1363. [Google Scholar] [CrossRef]
  51. Hung, Y.H.; Hsu, H.C.; Huang, Y.F. Design and Evaluation of an Innovative Hazard Warning Helmet for Elder Scooter Riders. In Universal Access in Human-Computer Interaction. Users and Context Diversity; Antona, M., Stephanidis, C., Eds.; Springer: Cham, Switzerland, 2016; pp. 367–374. [Google Scholar] [CrossRef]
  52. Gupta, D.; Xu, W.; Yu, X.; Huang, M.C. Campus Safety and the Internet of Wearable Things: Assessing Student Safety Conditions on Campus While Riding a Smart Scooter. In Proceedings of the 2021 IEEE 17th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Athens, Greece, 27–30 July 2021; pp. 1–4. [Google Scholar] [CrossRef]
  53. Matviienko, A.; Müller, F.; Schön, D.; Fayard, R.; Abaspur, S.; Li, Y.; Mühlhäuser, M. E-ScootAR: Exploring Unimodal Warnings for E-Scooter Riders in Augmented Reality. In Proceedings of the Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA’22, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar] [CrossRef]
  54. Mohd Rasli, M.K.A.; Madzhi, N.K.; Johari, J. Smart Helmet with Sensors for Accident Prevention. In Proceedings of the 2013 International Conference on Electrical, Electronics and System Engineering (ICEESE), Kuala Lumpur, Malaysia, 4–5 December 2013; pp. 21–26. [Google Scholar] [CrossRef]
  55. Muthiah, M.; Aswin Natesh, V.; Sathiendran, R.K. Smart Helmets for Automatic Control of Headlamps. In Proceedings of the 2015 International Conference on Smart Sensors and Systems (IC-SSS), Bangalore, India, 21–23 December 2015; pp. 1–4. [Google Scholar] [CrossRef]
  56. Chang, W.J.; Chen, L.B. Design and Implementation of an Intelligent Motorcycle Helmet for Large Vehicle Approach Intimation. IEEE Sens. J. 2019, 19, 3882–3892. [Google Scholar] [CrossRef]
  57. Mohamed Kassim, A.; Yasuno, T.; Suzuki, H.; Shahrieel Mohd Aras, M.; Izzuan Jaafar, H.; Azni Jafar, F.; Subramonian, S. Conceptual Design and Implementation of Electronic Spectacle Based Obstacle Detection for Visually Impaired Persons. J. Adv. Mech. Des. Syst. Manuf. 2016, 10, JAMDSM0094. [Google Scholar] [CrossRef]
  58. Son, H.; Krishnagiri, D.; Jeganathan, V.S.; Weiland, J. Crosswalk Guidance System for the Blind. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 3327–3330. [Google Scholar] [CrossRef]
  59. Son, H.; Weiland, J. Wearable System to Guide Crosswalk Navigation for People with Visual Impairment. Front. Electron. 2022, 2, 790081. [Google Scholar] [CrossRef]
  60. Scalvini, F.; Bordeau, C.; Ambard, M.; Migniot, C.; Dubois, J. Outdoor Navigation Assistive System Based on Robust and Real-Time Visual–Auditory Substitution Approach. Sensors 2024, 24, 166. [Google Scholar] [CrossRef] [PubMed]
  61. Ross, D.; Blasch, B. Evaluation of Orientation Interfaces for Wearable Computers. In Proceedings of the Digest of Papers. Fourth International Symposium on Wearable Computers, Atlanta, GA, USA, 16–17 October 2000; pp. 51–58. [Google Scholar] [CrossRef]
  62. Adame, M.R.; Yu, J.; Moller, K.; Seemann, E. A Wearable Navigation Aid for Blind People Using a Vibrotactile Information Transfer System. In Proceedings of the 2013 ICME International Conference on Complex Medical Engineering, Beijing, China, 25–28 May 2013; pp. 13–18. [Google Scholar] [CrossRef]
  63. Petsiuk, A.L.; Pearce, J.M. Low-Cost Open Source Ultrasound-Sensing Based Navigational Support for the Visually Impaired. Sensors 2019, 19, 3783. [Google Scholar] [CrossRef] [PubMed]
  64. Pundlik, S.; Tomasi, M.; Moharrer, M.; Bowers, A.R.; Luo, G. Preliminary Evaluation of a Wearable Camera-Based Collision Warning Device for Blind Individuals. Optom. Vis. Sci. 2018, 95, 747–756. [Google Scholar] [CrossRef] [PubMed]
  65. Bhattacharya, A.; Asari, V.K. Wearable Walking Aid System to Assist Visually Impaired Persons to Navigate Sidewalks. In Proceedings of the 2021 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 12–14 October 2021; pp. 1–7. [Google Scholar] [CrossRef]
  66. Shen, J.; Chen, Y.; Sawada, H. A Wearable Assistive Device for Blind Pedestrians Using Real-Time Object Detection and Tactile Presentation. Sensors 2022, 22, 4537. [Google Scholar] [CrossRef]
  67. Hung, C.H.; Chi, C.N.; Huang, S.Y.; Hsu, C.H.; Lu, Y.A. Vision Belt: Intelligent Assistive Device for Collision Avoidance by Using Computer Vision Technology and Ultrasonic Sensors. In Proceedings of the 2024 International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), Taichung, Taiwan, 9–11 July 2024; pp. 775–776. [Google Scholar] [CrossRef]
  68. Jameson, B.; Manduchi, R. Watch Your Head: A Wearable Collision Warning System for the Blind. In Proceedings of the 2010 IEEE Sensors, Waikoloa, HI, USA, 1–4 November 2010; pp. 1922–1927. [Google Scholar] [CrossRef]
  69. Villamizar, L.H.; Gualdron, M.; Gonzalez, F.; Aceros, J.; Rizzo-Sierra, C.V. A Necklace Sonar with Adjustable Scope Range for Assisting the Visually Impaired. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 1450–1453. [Google Scholar] [CrossRef]
  70. Tapu, R.; Mocanu, B.; Tapu, E. A Survey on Wearable Devices Used to Assist the Visual Impaired User Navigation in Outdoor Environments. In Proceedings of the 2014 11th International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 14–15 November 2014; pp. 1–4. [Google Scholar] [CrossRef]
  71. Abobeah, R.; Hussein, M.; Abdelwahab, M.; Shoukry, A. Wearable RGB Camera-Based Navigation System for the Visually Impaired. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Funchal, Madeira, Portugal, 27–29 January 2018; pp. 555–562. [Google Scholar] [CrossRef]
  72. Wilson, J.; Walker, B.N.; Lindsay, J.; Cambias, C.; Dellaert, F. SWAN: System for Wearable Audio Navigation. In Proceedings of the 2007 11th IEEE International Symposium on Wearable Computers, Boston, MA, USA, 11–13 October 2007; pp. 91–98. [Google Scholar] [CrossRef]
  73. Kumar, N.A.; Haris Thangal, Y.; Sunitha Beevi, K. IoT Enabled Navigation System for Blind. In Proceedings of the 2019 IEEE R10 Humanitarian Technology Conference (R10-HTC)(47129), Depok, West Java, Indonesia, 12–14 November 2019; pp. 186–189. [Google Scholar] [CrossRef]
  74. Khan, W.; Hussain, A.; Khan, B.; Nawaz, R.; Baker, T. Novel Framework for Outdoor Mobility Assistance and Auditory Display for Visually Impaired People. In Proceedings of the 2019 12th International Conference on Developments in Esystems Engineering (DESE), Kazan, Russia, 7–10 October 2019; pp. 984–989. [Google Scholar] [CrossRef]
  75. Li, X.; Cui, H.; Rizzo, J.R.; Wong, E.; Fang, Y. Cross-Safe: A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired. In Advances in Computer Vision; Arai, K., Kapoor, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 132–146. [Google Scholar] [CrossRef]
  76. Meliones, A.; Filios, C.; Llorente, J. Reliable Ultrasonic Obstacle Recognition for Outdoor Blind Navigation. Technologies 2022, 10, 54. [Google Scholar] [CrossRef]
  77. Berdasco, A.F.; Laviada, J.; de Cos Gómez, M.E.; Las-Heras, F. Performance Evaluation of Millimeter-Wave Wearable Antennas for Electronic Travel Aid. IEEE Trans. Instrum. Meas. 2023, 72, 4507510. [Google Scholar] [CrossRef]
  78. Joshi, R.C.; Singh, N.; Sharma, A.K.; Burget, R.; Dutta, M.K. AI-SenseVision: A Low-Cost Artificial-Intelligence-Based Robust and Real-Time Assistance for Visually Impaired People. IEEE Trans. Hum.-Mach. Syst. 2024, 54, 325–336. [Google Scholar] [CrossRef]
  79. Ross, D.A.; Blasch, B.B. Development of a Wearable Computer Orientation System. Pers. Ubiquitous Comput. 2002, 6, 49–63. [Google Scholar] [CrossRef]
  80. Helal, A.; Moore, S.; Ramachandran, B. Drishti: An Integrated Navigation System for Visually Impaired and Disabled. In Proceedings of the Fifth International Symposium on Wearable Computers, Zurich, Switzerland, 8–9 October 2001; pp. 149–156. [Google Scholar] [CrossRef]
  81. Kang, M.C.; Chae, S.H.; Sun, J.Y.; Lee, S.H.; Ko, S.J. An Enhanced Obstacle Avoidance Method for the Visually Impaired Using Deformable Grid. IEEE Trans. Consum. Electron. 2017, 63, 169–177. [Google Scholar] [CrossRef]
  82. Cheng, R.; Wang, K.; Yang, K.; Long, N.; Hu, W.; Chen, H.; Bai, J.; Liu, D. Crosswalk Navigation for People with Visual Impairments on a Wearable Device. J. Electron. Imaging 2017, 26, 53025. [Google Scholar] [CrossRef]
  83. Cheng, R.; Wang, K.; Lin, S. Intersection Navigation for People with Visual Impairment. In Computers Helping People with Special Needs; Miesenberger, K., Kouroupetroglou, G., Eds.; Springer: Cham, Switzerland, 2018; pp. 78–85. [Google Scholar] [CrossRef]
  84. Hasan, R.; Hasan, R. Pedestrian Safety Using the Internet of Things and Sensors: Issues, Challenges, and Open Problems. Future Gener. Comput. Syst. 2022, 134, 187–203. [Google Scholar] [CrossRef]
  85. Gómez, S. Technological Innovations in Improving the Safety of Cyclists in Urban Environments: A Comprehensive Analysis. Int. J. Comput. Appl. Technol. Res. 2024, 13, 13–18. [Google Scholar] [CrossRef]
  86. Santos, A.D.P.D.; Suzuki, A.H.G.; Medola, F.O.; Vaezipour, A. A Systematic Review of Wearable Devices for Orientation and Mobility of Adults with Visual Impairment and Blindness. IEEE Access 2021, 9, 162306–162324. [Google Scholar] [CrossRef]
  87. Rabieh, K.; Samir, R.; Azer, M.A. Empowering Pedestrian Safety: Unveiling a Lightweight Scheme for Improved Vehicle-Pedestrian Safety. Information 2024, 15, 160. [Google Scholar] [CrossRef]
  88. Zhang, X.; Li, J.; Zhou, J.; Zhang, S.; Wang, J.; Yuan, Y.; Liu, J.; Li, J. Vehicle-to-Everything Communication in Intelligent Connected Vehicles: A Survey and Taxonomy. Automot. Innov. 2025, 8, 13–45. [Google Scholar] [CrossRef]
Figure 1. PRISMA flowchart for the VRU systematic review.
Figure 2. Comprehensive analytical workflow and tool integration process for wearable technology in VRU safety research.
Figure 3. Trends in scholarly publications on wearable technology for VRU traffic safety (2000–2025).
Figure 4. International collaboration network on wearable technologies for VRU safety.
Figure 5. Keyword co-occurrence network for wearable technology in VRU safety.
Figure 6. Top 14 keywords with the strongest burst patterns for wearable technology in VRU safety.
Figure 7. Cluster view of knowledge domains for wearable technology in VRU safety.
Figure 8. Timeline plot of the six keyword clusters for VRU wearable safety technology.
Figure 9. Pedestrian safety wearable technology framework.
Figure 10. Two-wheeler user safety wearable technology framework.
Figure 11. Visually impaired pedestrian safety wearable technology framework.
Table 1. Search strategy for wearable devices for VRU safety.

| PICo Element | Search Keywords |
|---|---|
| Population (P) | TS = (“vulnerable road user*” OR pedestrian* OR cyclist* OR bicyclist* OR motorcyclist* OR scooter* OR (“visually impaired” OR blind*) OR (“hearing impaired” OR deaf*) OR (“mobility impair*” OR “wheelchair user*”)) |
| Phenomenon of Interest (I) | TS = (wearable* OR ((haptic* OR tactile* OR vibrotactile* OR vibration*) NEAR/3 (feedback OR cue* OR alert* OR warning*)) OR ((visual* OR light* OR LED* OR “ambient light”) NEAR/3 (feedback OR cue* OR alert* OR warning*)) OR ((audio* OR auditory* OR sound*) NEAR/3 (feedback OR cue* OR alert* OR warning*)) OR ((multimodal* OR multisensor* OR multisensory) NEAR/3 (feedback OR cue* OR alert* OR warning*)) OR ((smart* OR safe*) NEAR/3 (helmet* OR glasses OR cap* OR vest* OR jacket* OR watch* OR wristband* OR glove* OR shoe* OR insole* OR belt*))) |
| Context (Co) | TS = (“road safety” OR “traffic safety” OR “intersection safety” OR (collision avoid* OR collision warn* OR crash avoid* OR crash prevent* OR “accident prevention”) OR (road hazard* OR traffic hazard*) OR (“street navigation” OR “road navigation” OR “urban navigation” OR “outdoor navigation” OR “traffic navigation” OR “route navigation” OR “bike navigation” OR “bicycle navigation” OR “pedestrian navigation” OR “crosswalk navigation” OR “intersection navigation” OR “street crossing”)) |
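For readers who want to adapt the strategy, the final search string simply ANDs the three TS clauses of Table 1 together. A minimal Python sketch of that composition (the `ts_block` helper and the abbreviated keyword strings are illustrative stand-ins, not the authors' actual tooling or the full expressions):

```python
# Sketch: combine PICo keyword blocks into a single Web of Science-style
# topic-search (TS) query of the form TS=(P) AND TS=(I) AND TS=(Co).
# The block contents below are shortened stand-ins for Table 1.

def ts_block(expression: str) -> str:
    """Wrap a Boolean keyword expression as a topic-search (TS) clause."""
    return f"TS=({expression})"

population = '"vulnerable road user*" OR pedestrian* OR cyclist*'
interest = 'wearable* OR (haptic* NEAR/3 (feedback OR alert*))'
context = '"road safety" OR "traffic safety" OR collision avoid*'

# The final search string ANDs the three PICo elements together.
query = " AND ".join(ts_block(b) for b in (population, interest, context))
print(query)
```

Keeping each PICo element in its own clause makes it easy to tighten or relax one dimension (e.g., the population) without touching the others.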
Table 2. Global research output and centrality in VRU safety wearable technology by country or region.

| No. | Publications | Centrality | Year | Country or Region |
|---|---|---|---|---|
| 1 | 16 | 0.32 | 2000 | USA |
| 2 | 11 | 0.08 | 2016 | CHINA |
| 3 | 8 | 0.00 | 2013 | GERMANY |
| 4 | 5 | 0.00 | 2017 | SOUTH KOREA |
| 5 | 4 | 0.00 | 2020 | AUSTRIA |
| 6 | 4 | 0.00 | 2016 | JAPAN |
| 7 | 4 | 0.08 | 2015 | INDIA |
| 8 | 3 | 0.00 | 2017 | FRANCE |
| 9 | 2 | 0.00 | 2023 | AUSTRALIA |
| 10 | 2 | 0.00 | 2013 | MALAYSIA |
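The Centrality column in Tables 2 and 3 is CiteSpace's betweenness centrality: the extent to which a node lies on shortest paths between other nodes in the network. A brute-force sketch of the metric on a toy collaboration graph (node names and edges are invented for illustration; CiteSpace additionally normalizes its values):

```python
# Sketch: betweenness centrality, the "Centrality" metric reported in
# Tables 2 and 3, computed by brute-force enumeration of all shortest
# paths on a small toy graph (adjacency lists are illustrative only).
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """All shortest paths from s to t via breadth-first search."""
    queue, paths, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS yields paths in nondecreasing length order
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nbr in graph[node]:
            if nbr not in path:  # avoid revisiting nodes
                queue.append(path + [nbr])
    return paths

def betweenness(graph):
    """Sum over node pairs of the fraction of shortest paths through v."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v not in (s, t):
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# Toy network: B bridges two groups, so it earns the highest score.
g = {"A": ["B"], "C": ["B"], "B": ["A", "C", "D"], "D": ["B", "E"], "E": ["D"]}
print(betweenness(g))
```

This is why the USA's 0.32 in Table 2 signals a brokerage role in the collaboration network, while countries with centrality 0.00 sit on the periphery despite nonzero output.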
Table 3. Top 15 journals and conferences in VRU safety wearable technology research.

| No. | Count | Centrality | Year | Publication Venue |
|---|---|---|---|---|
| 1 | 17 | 0.13 | 2014 | LECT NOTES COMPUT SC |
| 2 | 13 | 0.27 | 2016 | ACCIDENT ANAL PREV |
| 3 | 12 | 0.04 | 2017 | SENSORS-BASEL |
| 4 | 9 | 0.14 | 2020 | MOBILEHCI 2018 |
| 5 | 8 | 0.33 | 2007 | HUM FACTORS |
| 6 | 8 | 0.07 | 2023 | IEEE ACCESS |
| 7 | 8 | 0.02 | 2018 | PROC CVPR IEEE |
| 8 | 7 | 0.00 | 2020 | ARXIV |
| 9 | 6 | 0.16 | 2020 | APPL SCI-BASEL |
| 10 | 5 | 0.26 | 2016 | IEEE T SYST MAN CY C |
| 11 | 5 | 0.00 | 2016 | TRANSPORT RES F-TRAF |
| 12 | 5 | 0.00 | 2017 | IEEE INT VEH SYM |
| 13 | 4 | 0.36 | 2000 | FDN ORIENTATION MOBI |
| 14 | 4 | 0.32 | 2007 | ERGONOMICS |
| 15 | 4 | 0.02 | 2019 | INT J HUM-COMPUT ST |
Table 4. Top 26 keywords with the highest frequency and centrality in the field of wearable technology for VRU safety.

| Number | Count | Centrality | Year | Keywords |
|---|---|---|---|---|
| 1 | 24 | 0.06 | 2013 | assistive technology |
| 2 | 20 | 0.13 | 2002 | visually impaired |
| 3 | 14 | 0.11 | 2002 | wearable technology |
| 4 | 11 | 0.03 | 2002 | navigation system |
| 5 | 11 | 0.07 | 2019 | sensor |
| 6 | 8 | 0.02 | 2019 | user experience |
| 7 | 8 | 0.08 | 2016 | warning alerts |
| 8 | 8 | 0.02 | 2017 | pedestrian safety |
| 9 | 8 | 0.06 | 2016 | traffic safety |
| 10 | 7 | 0.25 | 2013 | collision avoidance |
| 11 | 7 | 0.30 | 2016 | performance |
| 12 | 7 | 0.11 | 2017 | obstacle avoidance |
| 13 | 6 | 0.19 | 2018 | vulnerable road users |
| 14 | 6 | 0.16 | 2018 | object detection |
| 15 | 6 | 0.16 | 2018 | accident prevention |
| 16 | 6 | 0.14 | 2016 | age differences |
| 17 | 6 | 0.21 | 2014 | electronic travel aids |
| 18 | 6 | 0.02 | 2013 | haptic feedback |
| 19 | 5 | 0.18 | 2017 | perception |
| 20 | 5 | 0.04 | 2016 | obstacle detection |
| 21 | 5 | 0.13 | 2018 | cyclist safety |
| 22 | 5 | 0.03 | 2017 | virtual reality |
| 23 | 5 | 0.02 | 2021 | augmented reality |
| 24 | 5 | 0.00 | 2020 | connected vehicle communication |
| 25 | 4 | 0.14 | 2016 | environment |
| 26 | 4 | 0.17 | 2017 | blind navigation |
Table 5. Keyword cluster analysis results (LLR labeling method).

| Cluster ID | Size | Silhouette | Mean (Year) | Label (LLR) |
|---|---|---|---|---|
| 0 | 24 | 0.895 | 2017 | mobility performance (2.61, 0.5); street crossing (2.61, 0.5); simulation (2.61, 0.5); cycling (2.61, 0.5); precise alarm (2.61, 0.5) |
| 1 | 24 | 0.866 | 2017 | obstacle avoidance (8.96, 0.005); sensory substitution (4.44, 0.05); cyclist safety (3.25, 0.1); obstacle detection (2.97, 0.1); visually impaired (2.88, 0.1) |
| 2 | 16 | 0.74 | 2019 | human computer interaction (hci) (7.77, 0.01); virtual reality (7.77, 0.01); human-centered computing (7.77, 0.01); pedestrians (4.25, 0.05); child cyclists (3.86, 0.05) |
| 3 | 14 | 0.796 | 2021 | cyclist safety (18.63, 0.0001); augmented reality (5.55, 0.05); head-up display prototype (4.54, 0.05); vehicle2x communication (4.54, 0.05); mixed reality simulation (4.54, 0.05) |
| 4 | 12 | 0.974 | 2019 | blind navigation (5.46, 0.05); indoor/outdoor navigation (5.46, 0.05); deviation detection (5.46, 0.05); temporal alignment (5.46, 0.05); mono camera (5.46, 0.05) |
| 5 | 6 | 0.956 | 2020 | smart glasses (10.54, 0.005); legged locomotion (5.21, 0.05); walking cane (5.21, 0.05); zebra crossing (5.21, 0.05); artificial intelligence of the internet of things (aiot) (5.21, 0.05) |
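The first number attached to each label in Table 5 is a log-likelihood ratio (Dunning's G² statistic), which scores how much more concentrated a term is inside a cluster than in the rest of the corpus. A minimal sketch of the statistic (the counts in the example are invented for illustration, not drawn from the actual dataset):

```python
# Sketch: Dunning's log-likelihood ratio, the "LLR" behind the cluster
# labels in Table 5. Scores a 2x2 contingency of term counts: k1 of n1
# tokens inside the cluster vs. k2 of n2 tokens in the rest of the corpus.
from math import log

def _ll(k: int, n: int, p: float) -> float:
    """Binomial log-likelihood with the 0*log(0) = 0 convention."""
    def xlogy(x, y):
        return x * log(y) if x > 0 else 0.0
    return xlogy(k, p) + xlogy(n - k, 1 - p)

def llr(k1: int, n1: int, k2: int, n2: int) -> float:
    """G^2 = 2 * (log-likelihood of separate rates - pooled rate)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2 * (_ll(k1, n1, p1) + _ll(k2, n2, p2)
                - _ll(k1, n1, p) - _ll(k2, n2, p))

# A cluster-specific term scores far higher than an evenly spread one.
print(llr(18, 100, 2, 900))   # concentrated in the cluster: large G^2
print(llr(10, 100, 90, 900))  # same rate inside and outside: ~0
```

High G² with a small p-value (the second number in each label) is why terms like "cyclist safety (18.63, 0.0001)" are chosen as cluster labels over merely frequent corpus-wide terms.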
Table 6. Summary of wearable devices for pedestrians.

| Reference | Year | Wearable Technology | Experimental Validation Method |
|---|---|---|---|
| Cœugnet et al. [30] | 2017 | Vibrotactile wristband for safer street-crossing decisions through tactile warnings | Recruited 57 participants of different age groups to perform street-crossing tests in a virtual traffic environment |
| Dommes et al. [31] | 2018 | Vibrotactile wristband, bone conduction earphones, and augmented reality visual feedback for navigation assistance | Comparative navigation experiment with 58 participants of different age groups in a virtual urban environment |
| Baek et al. [33] | 2020 | Smart glasses receiving signals from Bluetooth beacons on heavy equipment or vehicles for proximity warning | Field experiments in mining sites with detection distance measurements across eight different angles, plus subjective workload assessment with 10 participants |
| Wang et al. [40] | 2020 | Smart wearable device with radar sensors and intelligent algorithm for predicting and alerting vehicle collision risks | Simulation experiment across 1000 scenarios with different environmental conditions using BP neural network for accurate risk assessment |
| Tong et al. [36] | 2021 | Augmented reality glasses for visual warning of oncoming vehicles to pedestrians with obstructed views | Field experiments with simulated vehicle-pedestrian conflict scenarios, evaluating real-time projection algorithms and interface design |
| Stafford et al. [32] | 2021 | Cross-modal auditory cues with virtual reality headset for enhancing perception of gap affordances | Virtual road-crossing experiment with 39 older adults divided into three training groups, testing attunement to specifying vs. non-specifying information |
| Kim et al. [34] | 2021 | Smart helmet with LED warning system for alerting pedestrians of approaching mining vehicles | Field experiments in underground mine measuring BLE signal detection distance at different transmission powers and angles, plus NASA-TLX workload assessment with 10 participants |
| Wu et al. [37] | 2023 | Augmented reality headset with visual and auditory warning systems for pedestrian street crossing safety | Virtual reality experiment with 24 participants in a 2 × 2 within-subjects design comparing modality matching between distraction and warning stimuli |
| Tran et al. [38] | 2023 | Simulated AR headset for pedestrian navigation and autonomous vehicle interaction | VR-based simulation studies with 18 and 24 participants, respectively, evaluating different AR interface designs for urban navigation and street crossing scenarios |
| Ito et al. [35] | 2024 | Wearable devices (AirPods Pro) for delivering voice-based notifications to pedestrians | Field study with eight participants in urban environments collecting sensor data and acceptance feedback through the SoNotify smartphone application |
| Clérigo et al. [39] | 2024 | AR headset (Microsoft HoloLens 2) with collision warnings and virtual traffic light displays for street crossing | Field experiment with 20 participants in a real-world vehicle crossing scenario, measuring perceived safety, workload, and system usability |
Table 7. Summary of wearable devices for two-wheeler users.

| Reference | Year | Wearable Technology | Experimental Validation Method |
|---|---|---|---|
| Mohd Rasli et al. [54] | 2013 | Smart helmet with FSR sensor for motorcycle ignition safety and LED speed warning system | Lab testing of FSR response and RF transmission; evaluated helmet detection and speed monitoring using BLDC fan with voltage measurements |
| Muthiah et al. [55] | 2015 | Smart helmet with accelerometer for headlamp direction control, sleep detection, and mandatory buckling | Prototype testing with defined accelerometer ranges for head movements; validated servo motor headlamp control |
| Hung et al. [51] | 2016 | Hazard warning helmet with ultrasonic sensors for elderly scooter riders during obstacle passing | Observational study with five elderly participants across 50 double-parked vehicle scenarios comparing conventional and smart helmets |
| Matviienko et al. [41] | 2018 | Multimodal warning system for child cyclists with visual, auditory, and vibrotactile feedback on helmet and bicycle | Two controlled experiments with 24 children using bicycle simulator to test different feedback modalities across common collision scenarios |
| Bieshaar et al. [42] | 2018 | Smartphone-based system for cyclists using inertial sensors at various body locations to detect starting movements | Real-world evaluation with 49 participants and 84 starting movements; compared detection across wearing locations using machine learning |
| Bonilla et al. [43] | 2019 | Smart jersey with physiological sensors and arm gyroscope for cyclists’ health monitoring and automatic turn signaling | Field testing of alert transmission times between bicycle station, MQTT server, and mobile application with simulated alerts |
| Chang et al. [56] | 2019 | Intelligent motorcycle helmet with infrared transceivers, camera, and audio system for large vehicle detection | Field testing with 10 motorcyclists collecting 600 images of approaching trucks/buses; tested day and night detection modes |
| von Sawitzky et al. [44] | 2020 | Helmet-integrated head-up display (HUD) with V2X communication warning cyclists of opening car doors | Mixed reality study in CAVE simulator comparing baseline with visual and visual-auditory warnings |
| Vo et al. [45] | 2021 | Cycling helmet with four directional tactile actuators for collision avoidance feedback | Field study with 10 cyclists on vehicle-free paths; measured accuracy in identifying direction and proximity of simulated vehicles through head tactile cues |
| Gupta et al. [52] | 2021 | Smart scooter system with pressure-sensing insoles for balance monitoring, pothole detection, and path tracking | Field testing with six riders on varied terrains; evaluated balance algorithm accuracy and pothole detection performance |
| von Sawitzky et al. [46] | 2022 | Helmet-mounted HUD with awareness messages for cyclists about opening car doors | Mixed reality simulator study with 24 participants comparing three notification types to baseline |
| von Sawitzky et al. [47] | 2022 | Helmet-mounted HUD with multimodal awareness messages for cyclists avoiding door accidents | Mixed reality simulator study with 24 participants comparing three notification types; evaluated usability, intuitiveness, user experience, and attitudes |
| Matviienko et al. [53] | 2022 | AR glasses with visual, auditory, and vibrotactile feedback modes to warn of approaching vehicles at intersections | Outdoor experiment with 13 participants using real e-scooters in an AR simulation; compared three unimodal warnings to baseline (no warnings) |
| Mejia et al. [48] | 2023 | Smart safety vest for cyclists with ESP32, sensors, and LED panels for environment-based alerts | Lab testing of illuminance sensor accuracy, LED visibility at various distances/angles, GPS tracking, and data logging to web server for route evaluation |
| von Sawitzky et al. [49] | 2024 | Smart glasses for cyclists with AR overlays, screen-fixed info, and ambient light displays for traffic hazards | Focus groups with three cycling HCI researchers and four UX design students; developed visual notification concepts and identified challenges in AR registration, distraction, and hazard prioritization |
| Ren et al. [50] | 2025 | Multimodal guidance system with AR displays and haptic vest (40 tactile units) for cyclist traffic awareness | Two studies with 12 participants comparing visual feedback types and evaluating unimodal vs. multimodal feedback |
Table 8. Summary of orientation interfaces for visually impaired pedestrians (Part 1: 2000–2019).

| Reference | Year | Interface Technology | Experimental Validation Method |
|---|---|---|---|
| Ross et al. [61] | 2000 | Wearable computer with three interfaces: virtual sound beacon, digitized speech, and tactile tapping system | Field testing with 15 visually impaired subjects (aged 62–80) at three intersections; compared pre/post-baseline performance with each interface |
| Helal et al. [80] | 2001 | Wearable computer with voice I/O, differential GPS, electronic compass, and GIS database | Field testing with visually impaired users on campus; evaluated contextual information delivery, route optimization, and voice guidance through walkways |
| Ross et al. [79] | 2002 | Refined wearable orientation system with improved head/body referenced compass modes | Extended evaluation with visually impaired older adults focusing on interface preferences and improvement suggestions; measured performance ratios compared to baseline |
| Wilson et al. [72] | 2007 | Wearable computer with non-speech audio, bone conduction headphones, and multisensor fusion | Lab and field testing with visually impaired and sighted users; evaluated audio beacon effectiveness and waypoint capture radius |
| Jameson et al. [68] | 2010 | Miniaturized chest-worn ultrasonic device with dual transducers for bilateration, activity detection, and minimalist tactile/audio feedback | Laboratory testing with wooden targets at various distances and angles; evaluated range accuracy using correlation-based detection and bandpass sampling |
| Villamizar et al. [69] | 2013 | Necklace-mounted sonar system with ultrasonic transducer, adjustable detection range, and tactile feedback via vibration | Testing with 10 visually impaired subjects in both artificial indoor environment with simulated obstacles and live outdoor settings |
| Adame et al. [62] | 2013 | Vibrotactile belt with multiple motors for information transfer and obstacle avoidance | Testing with five subjects for vibration perception evaluation; a total of 49 subjects in game simulation for reaction time and collision avoidance |
| Tapu et al. [70] | 2014 | Smartphone-based navigation assistant using computer vision with chest-mounted harness | Comprehensive survey and comparative analysis of 12 wearable systems for visually impaired navigation |
| Mohamed Kassim et al. [57] | 2016 | Electronic spectacle with four ultrasonic sensors and dual-mode audio and vibrotactile feedback | Performance evaluation with 20 participants to measure response time to vibration warnings; blind spot evaluation of sensor coverage |
| Kang et al. [81] | 2017 | Monocular camera-based system with deformable grid algorithm for obstacle detection and avoidance | Objective performance testing using RGB-D camera to establish ground truth; compared with five conventional methods across three scenarios |
| Cheng et al. [82] | 2017 | Wearable system with bone-conducting earphones and camera using adaptive extraction and consistency analysis algorithm for crosswalk detection | Field testing with five blindfolded subjects; compared algorithm performance with bipolarity-based methods |
| Pundlik et al. [64] | 2018 | Wearable collision warning device with shoulder-mounted camera and vibrotactile wristbands | A total of 29 blindfolded and 8 blind subjects testing obstacle navigation, warning accuracy, and navigational decision-making |
| Cheng et al. [83] | 2018 | RGB-D camera wearable system for intersection navigation with three integrated detection modules | Performance testing against previous algorithms; field trials at six intersections with a blindfolded subject |
| Abobeah et al. [71] | 2018 | Wearable RGB camera-based navigation system with temporal alignment technique and pose estimation for path following | Testing across six indoor and six outdoor paths using chest-mounted camera and headphones |
| Petsiuk et al. [63] | 2019 | Low-cost open-source ultrasound-sensing wrist-worn navigational device with vibrotactile feedback | Testing with five blindfolded subjects across nine practical navigation scenarios, including indoor/outdoor obstacle detection and avoidance |
Table 9. Summary of orientation interfaces for visually impaired pedestrians (Part 2: 2019–2025).

| Reference | Year | Interface Technology | Experimental Validation Method |
|---|---|---|---|
| Kumar et al. [73] | 2019 | IoT-enabled wearable navigation system with Raspberry Pi, camera, ultrasonic sensors, GPS, and GSM modules | Lab testing of neural network object identification and ultrasonic distance ranging; demonstrated GSM emergency notification capabilities |
| Khan et al. [74] | 2019 | Novel framework for outdoor mobility assistance using multisensor data fusion, advanced computer vision, and auditory display | Proposed framework with experimental design for data collection from visually impaired users; planned evaluations using cross-validation and statistical measures |
| Son et al. [58] | 2020 | ODG R7 smart glasses with custom algorithms for crosswalk guidance using computer vision and verbal cues | Testing with three visually impaired subjects in both indoor (simulated) and outdoor crosswalk scenarios |
| Li et al. [75] | 2020 | Nvidia Jetson TX2 portable GPU and ZED camera, designed for integration with bone-conduction headsets for audio feedback | Testing with custom dataset of 3693 urban intersection images |
| Chang et al. [29] | 2021 | AI edge computing wearable with smart sunglasses, waist device, intelligent cane, and BT earphones | Testing with 1150 zebra crossing images; compared deep learning models, with SSD_Inception_v2 showing best results |
| Bhattacharya et al. [65] | 2021 | Smartphone vision system with Raspberry Pi haptic feedback glove using finger vibration actuators | Testing with 15 blindfolded subjects in three tasks; evaluated real-time navigation with pedestrians, traffic lights, and stop signs |
| Son & Weiland [59] | 2022 | Wearable RGB-D camera system with Jetson Xavier and bone conduction headphones using maps and SLAM | Testing with three visually impaired subjects at crosswalks; evaluated localization, signal detection, guidance, and walking speed maintenance |
| Shen et al. [66] | 2022 | Chest-mounted camera system with tactile gloves using compressed YOLOv3 and Neural Compute Stick 2 | Testing with three blindfolded subjects across 45 trials in three obstacle scenarios (stationary/multiple/moving) |
| Meliones et al. [76] | 2022 | Micro-servo-motor ultrasonic obstacle detection system with median and Kalman filtering for smartphone-based outdoor blind navigation | Testing with prototype device in various urban environments; analyzed detection performance with different obstacle types, including fixed, moving, and special-case obstacles |
| Berdasco et al. [77] | 2023 | Millimeter-wave wearable antennas with synthetic aperture radar for electronic travel aid at medium and short distances | Comparative testing of eight different wearable antenna designs using both FMCW and SFCW radar technologies |
| Asiedu Asante et al. [5] | 2023 | Chest-mounted stereo camera with grid-based obstacle selection and audio feedback | Field testing in cluttered indoor scenes; evaluated ZED2 camera and Jetson Nano system with lightweight YOLOv5 model |
| Scalvini et al. [60] | 2024 | Helmet-mounted RGB-D camera with 3D spatialized audio system for blind navigation using visual-auditory substitution | Field testing with blindfolded participants in three different urban environments |
| Joshi et al. [78] | 2024 | Handheld AI device with ultrasonic sensors, camera, and audio feedback | Testing with 47 participants in real-life scenarios; evaluated compact device with YOLOv3-based detection using DarkNet-53 backbone |
| Hung et al. [67] | 2024 | Belt-mounted system with camera vision and ultrasonic sensors with vibration feedback | Proof-of-concept prototype demonstration on wearable belt using Linux-embedded computer with Python-based image processing |
| Ren et al. [8] | 2025 | Dual-modality feedback system with bHaptics Tactosy vest and head-mounted LED arrays for street-crossing guidance | Comparative testing with 32 blindfolded participants across two scenarios (traffic signal-controlled and vehicle-based crossings) |
| Gao et al. [6] | 2025 | Glasses-based wearable obstacle avoidance device with multimodal sensing and cross-modal learning | Field testing with 12 visually impaired participants across multiple indoor and outdoor environments over seven months |
Share and Cite

Ren, G.; Huang, Z.; Huang, T.; Wang, G.; Lee, J.H. Evolution and Knowledge Structure of Wearable Technologies for Vulnerable Road User Safety: A CiteSpace-Based Bibliometric Analysis (2000–2025). Appl. Sci. 2025, 15, 6945. https://doi.org/10.3390/app15126945