Article

Enhancing Street-Crossing Safety for Visually Impaired Pedestrians with Haptic and Visual Feedback

1 School of Design Arts, Xiamen University of Technology, Xiamen 361024, China
2 Department of Human-Centered AI, Sangmyung University, Seoul 03016, Republic of Korea
3 Institute for Advanced Intelligence Study, Daejeon 34189, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3942; https://doi.org/10.3390/app15073942
Submission received: 12 February 2025 / Revised: 30 March 2025 / Accepted: 31 March 2025 / Published: 3 April 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Safe street crossing poses significant challenges for visually impaired pedestrians, who must rely on non-visual cues to assess crossing safety. Conventional assistive technologies often fail to provide real-time, actionable information about oncoming traffic, making independent navigation difficult, particularly in uncontrolled or vehicle-based crossing scenarios. To address these challenges, we designed and evaluated two assistive systems utilizing haptic and visual feedback, tailored for traffic signal-controlled intersections and vehicle-based crossings. The results indicate that visual feedback significantly improved decision efficiency at signalized intersections, enabling users to make faster decisions, regardless of their confidence levels. However, in vehicle-based crossings, where real-time hazard assessment is crucial, haptic feedback proved more effective, enhancing decision efficiency by enabling quicker and more intuitive judgments about approaching vehicles. Moreover, users generally preferred haptic feedback in both scenarios, citing its comfort and intuitiveness. These findings highlight the distinct challenges posed by different street-crossing environments and confirm the value of multimodal feedback systems in supporting visually impaired pedestrians. Our study provides important design insights for developing effective assistive technologies that enhance pedestrian safety and independence across varied urban settings.

1. Introduction

At least 2.2 billion people worldwide live with some form of vision impairment [1], facing numerous challenges in daily life, with mobility safety being particularly critical [2,3]. For the general population, crossing the street is a simple task, but for individuals with visual impairments, it is fraught with risks and uncertainties [4,5]. Research shows that safe street-crossing decisions primarily depend on two key factors [6]: the time required for a pedestrian to cross the road and the time available before the next vehicle arrives. However, empirical evidence reveals concerning real-world conditions: drivers yield to pedestrians with white canes at a rate of only 37% [7], forcing visually impaired individuals to exercise extra caution in evaluating crossing opportunities. Moreover, the complexity of many intersections and the interference of street noise [8] further exacerbate the difficulties and safety risks faced by visually impaired pedestrians. Developing effective street-crossing assistive systems is therefore critical to improving the mobility safety of individuals with visual impairments.
With advancements in the Internet of Things (IoT) and vehicle-to-everything (V2X) technologies, smart infrastructure can now collect and communicate real-time traffic information, such as traffic light status and approaching vehicles. Wearable and handheld assistive devices for visually impaired pedestrians can leverage these data to provide timely feedback that enhances users’ awareness of crossing conditions. Existing assistive technologies, such as smartphone navigation [9,10,11,12], smart canes [13,14,15], and intelligent guide robots [16,17,18], have made significant progress in daily navigation. These technologies, developed through both laboratory research and field studies, have demonstrated considerable success in general mobility assistance and route guidance. However, they still face challenges in conveying information effectively in street-crossing scenarios, particularly in communicating critical information such as the remaining time on pedestrian traffic lights [5,19,20] and safe crossing gaps [21]. Traditional auditory feedback faces notable limitations in street environments: environmental noise can mask critical auditory cues [22], and it can overload the auditory channel, impairing the ability of visually impaired individuals to perceive surrounding environmental sounds [23].
As promising alternatives, haptic and visual feedback provide visually impaired individuals with a clear and distinguishable means of receiving cues, even in noisy environments. Existing research has shown that haptic feedback can effectively avoid interference from environmental noise, enabling users to perceive vibration signals quickly and accurately [24,25,26]. At the same time, approximately 90% of legally blind individuals retain some degree of light perception [2,3,27], allowing them to detect the position, motion, and brightness changes of light sources [28]. This opens up significant opportunities for the development of visual cue systems based on residual light perception [27]. The widespread presence of this residual capability represents an underutilized channel for assistive technology development, particularly in navigational contexts requiring rapid decision-making.
Despite these known advantages of both feedback modalities, a significant research gap exists in understanding their comparative effectiveness across different street-crossing contexts. While individual studies have explored haptic systems [29,30] and light-based cues [27] separately, few have systematically compared their performance or investigated how varying parameters within each modality affect decision-making. Research has shown different effectiveness patterns depending on the task and environment [27], highlighting the need for context-specific evaluations. To address this gap, we explored feedback effectiveness in two distinct scenarios: signal-controlled intersections where pedestrians follow traffic signals and vehicle crossing scenarios where they navigate based on approaching vehicles. Two assistive systems were designed and evaluated to determine the optimal feedback mode and parameters for each context. Our findings reveal distinct advantages in different scenarios, summarized as follows:
  • In the traffic light crossing scenario, visual feedback significantly improved users’ decision-making efficiency and facilitated faster decisions, regardless of their confidence levels.
  • In the vehicle crossing scenario, haptic feedback significantly enhanced decision efficiency, with dynamic haptic feedback outperforming dynamic visual feedback.
  • In both scenarios, the high-frequency visual feedback encouraged users to make more cautious decisions, while haptic feedback was widely preferred by users due to its comfortable experience.
Based on the findings above, we validated the application value of haptic and visual feedback in assisting visually impaired individuals with crossing decisions, providing critical insights for designing decision-support systems in traffic light-controlled and vehicle crossing scenarios. The study indicates that users can select their preferred feedback modality in traffic light scenarios based on personal preferences, while haptic feedback demonstrates significant advantages in vehicle crossing scenarios. These findings offer effective design strategies for improving the safety and convenience of daily mobility for visually impaired individuals.
The organization of this paper is as follows: Section 2 reviews related studies, focusing on haptic and visual feedback technologies for assisting visually impaired individuals. Section 3 details Experiment 1, which examined the effectiveness of haptic and visual feedback in a traffic light crossing scenario. Section 4 presents Experiment 2, exploring their utility in a vehicle crossing scenario. Section 5 discusses the results, analyzing the applicability and user preferences for different feedback modalities. Finally, Section 6 concludes the study and suggests future research directions.

2. Related Works

This section reviews previous research on supporting visually impaired people in complex tasks such as street crossing, obstacle avoidance, and navigation. We systematically examine the challenges in assisted mobility and existing solutions, investigate haptic feedback applications for critical information delivery, and discuss advances in visual feedback utilizing light perception. Both modalities aim to provide clear guidance for safe street-crossing decisions.

2.1. Challenges in Mobility for Visually Impaired Individuals

According to data from the World Health Organization, at least 2.2 billion people globally live with some form of vision impairment [1]. Among them, approximately 217 million people suffer from moderate to severe vision impairment, also referred to as “low vision”, and this number continues to grow [31]. Compared to individuals with low vision, those who are blind lack sufficient functional vision to support their daily activities [32,33,34]. However, it is noteworthy that a considerable proportion of legally blind individuals still retain some degree of usable vision, particularly light perception ability [27]. For example, in the United States, approximately 90% of blind individuals retain some level of light perception [2,3]. Although this degree of light perception is insufficient to perform complex visual tasks, it is often adequate for perceiving location, movement, and brightness variations in light sources [28]. Despite the retained light perception in some blind individuals, mobility and other daily activities that depend on vision remain significantly restricted for people with vision impairments across all age groups [30].

2.1.1. Street-Crossing Barriers for Visually Impaired Pedestrians

Crossing streets, a routine task for sighted individuals, constitutes a significant challenge for visually impaired pedestrians [4,5]. They not only need to accurately identify and interpret the remaining time displayed on pedestrian traffic signals [5,19,20] but also must assess the availability of sufficiently large traffic gaps or rely on drivers to yield in order to cross safely [21]. This safety assessment primarily depends on two critical factors [6]: the time required for the pedestrian to cross the street and the time gap available before the next vehicle arrives. Studies have shown that, even when a blind pedestrian uses a white cane, drivers yield only 37% of the time [7], forcing visually impaired pedestrians to rely more heavily on their own judgment of traffic gaps. However, complex intersection designs and environmental street noise severely impact this judgment process [8]. For instance, at roundabouts and other segments without fixed crossing times, visually impaired pedestrians find it challenging to reliably assess drivers’ intentions to yield; in some cases, they even require intervention from others to avoid collisions [35]. These factors collectively result in longer decision-making times for visually impaired pedestrians, significantly affecting both their travel efficiency and safety.
To address the challenges faced by visually impaired individuals during mobility, researchers have developed a variety of assistive technological solutions. These solutions include smartphone-based navigation applications [9,10,11,12], smart canes [13,14,15], and intelligent guide robots [16,17,18]. These innovations provide essential functions such as obstacle detection, the interpretation of traffic patterns, and directional guidance. Despite their significant advancements in daily navigation, these technologies still encounter challenges in crossing scenarios, particularly in accurately conveying critical information such as the remaining time on pedestrian traffic signals [5,19,20] and safe traffic gaps [21].

2.1.2. Assistive Technology Solutions for Safe Crossing

To address these specific challenges, researchers have proposed various assistive systems to help visually impaired individuals cross streets safely. Among smartphone-based solutions, Ghilardi et al. developed a pedestrian traffic signal detection system capable of effectively recognizing various shapes and types of traffic signals [36], thereby providing crossing assistance to visually impaired users. Similarly, Montanha et al.’s system not only provides voice prompts for traffic signal and vehicle information but can also automatically adjust signal timing based on user needs, significantly aiding visually impaired individuals in crossing roads safely [37]. In the domain of wearable devices, Tian et al. designed a head-mounted device that delivers real-time audio feedback to convey information about crosswalk locations and traffic signal states. Experimental results have demonstrated the system’s effectiveness in providing safe crossing guidance for visually impaired individuals [4]. Li et al. introduced a wearable system capable of detecting the status of nearby traffic signals in real time and guiding users safely through intersections via voice alerts [38]. Additionally, Chen et al. developed an intelligent guide-dog harness system that identifies surrounding moving obstacles and traffic signal states, improving the safety of visually impaired pedestrians in complex traffic environments through voice reminders [39].
Subsequent research highlights the benefits of tactile notification methods, as audio feedback alone can be compromised by ambient noise and the cognitive demands of processing multiple auditory cues simultaneously [22,23]. Tactile cues, on the other hand, operate independently of hearing and demonstrate greater accuracy in complex traffic scenarios [25]. Comparative studies indicate that tactile feedback frequently surpasses auditory signals in terms of user preference and task efficiency [26,40]. Many visually impaired pedestrians also express a preference for tactile guidance, as it allows them to simultaneously monitor environmental audio cues [30]. Additionally, recent research has compared tactile and visual feedback for individuals with residual light perception, finding that visual cues provide superior performance in such cases [27].

2.2. Haptic Feedback for Visually Impaired Individuals

The human skin is densely populated with tactile receptors and nerve endings, which enable the effective capture of external stimuli and the transmission of signals to the central nervous system, forming a complete tactile experience [41]. Due to the immediacy and intuitiveness of tactile perception, vibration stimuli play a central role in tactile communication systems [42,43,44]. In practical applications, tactile feedback technology has been widely integrated into various wearable devices to provide users with diversified functional support, such as localization and navigation, motion control, and obstacle perception [45]. Research indicates that this technology can effectively guide users by simulating natural tactile cues, such as a “tap on the shoulder” [46]. For visually impaired individuals in particular, tactile feedback has become an important alternative sensory modality [24,47].

2.2.1. Haptic Perception Mechanisms and Assistive Applications

To fully leverage the advantages of tactile feedback in enhancing the safety of visually impaired individuals during mobility, researchers have designed and developed a variety of assistive devices. These devices can be broadly categorized into two groups: portable devices, including enhanced white canes [48,49] and handheld devices [50]; and wearable devices, which encompass belt-based systems [51,52,53,54], vest-based systems [55,56,57,58,59], head-mounted systems [60,61], earphone-based systems [62], footwear-based systems [63], and necklace-style systems [64]. In the field of portable navigation devices, Hertel et al. proposed a haptically enhanced white cane that provides visually impaired users with obstacle distance information via vibration feedback on the handle [49]. Experimental results demonstrated that users could recognize changes in obstacle positions with 90% accuracy [49]. To improve navigation precision, Liu et al. designed a handheld tactile navigation device featuring a rotatable tactile pointer to provide real-time directional feedback. Experiments showed that users were able to maintain their position within 30 cm of the central path for 92.6% of the navigation time [65]. Additionally, Leporini et al. developed a virtual white cane system that uses tactile feedback to simulate the sensation of a traditional cane touching obstacles [66]. Testing revealed that this system effectively helped blind individuals identify obstacles and door locations in indoor environments [66].
In the domain of wearable navigation devices, Van Erp et al. found that a vibrotactile belt could provide five distinguishable tactile cues in a laboratory environment [67]. However, in real walking scenarios with noise interference, the number of effectively recognizable cues significantly decreased to fewer than two [67]. To explore more effective tactile feedback solutions, Xu et al. proposed a wearable backpack-based navigation system that uses vibrations on the shoulder straps and waist to assist visually impaired individuals in recognizing and adjusting their direction, effectively supporting forward movement [29]. Lee et al. further developed a wrist-worn tactile display that innovatively combined static and dynamic tactile modes to deliver navigation information [68]. Experiments demonstrated that visually impaired users could accurately perceive the tactile feedback with a high recognition rate of 94.78% [68].

2.2.2. Haptic Feedback via Wearable Devices for Street-Crossing

Compared to portable devices, wearable devices have gained broader applications due to their unique advantages. Tactile feedback via wearable devices does not occupy the user’s hands, allowing visually impaired individuals to continue using a white cane for environmental exploration. Additionally, such devices can easily be integrated into everyday wearable items, providing users with a more natural and intuitive tactile feedback experience [29]. Despite significant advancements in obstacle detection and directional navigation, research into the application of tactile feedback for specific scenarios, such as street crossing for visually impaired individuals, remains insufficient. Notably, Adebiyi et al. found that, in scenarios like street crossing, where maintaining environmental auditory awareness is critical, visually impaired individuals subjectively preferred tactile feedback over voice prompts. This preference allows them to focus on perceiving surrounding sounds without interference [30]. In the exploration of assistive solutions for visually impaired individuals crossing streets, other forms of information cues are also worth investigating. For instance, Yang C et al. found that in navigation tasks for visually impaired users, visual feedback demonstrated higher efficiency, smoothness, and accuracy compared to tactile feedback [27]. Users also generally perceived visual feedback as more intuitive and user-friendly [27]. This finding opens up new directions for designing street-crossing assistance systems for visually impaired individuals.

2.3. Visual Feedback for Low-Vision and Blind Users

In the field of visual feedback for assisting visually impaired individuals, researchers primarily focus on providing informational support for those with low vision. These systems leverage the residual visual abilities of low-vision individuals and have achieved significant results in various application scenarios, such as obstacle recognition through visual enhancement technologies [28,69,70,71,72], path navigation guidance [73], gesture interaction [74], and stair-climbing assistance [75]. To enhance the usability of these systems, researchers have developed a range of visual optimization methods [76], such as image magnification, contrast adjustment, and edge enhancement, which are presented to users via head-mounted display devices or other visual interfaces.

2.3.1. Enhanced Visual Representation for Visually Impaired Assistance

Specific implementations include displaying enhanced obstacle images through head-mounted devices [69,71] or converting environmental depth information into high-contrast colors [77] and multi-level brightness variations [28,70]. Experimental evaluations demonstrate that these visual feedback methods have achieved positive results in information transmission. For instance, Huang et al. designed a sign-reading system that enhances the visual salience of indoor text (e.g., room numbers), enabling visually impaired users to accurately perceive and recognize this information [74]. In stair navigation research, researchers projected visually highlighted cues directly onto stair surfaces, and visually impaired participants not only perceived these visual feedback cues but also reported a stronger sense of safety [75]. Similarly, Zhao et al. developed a navigation system that used high-contrast visual markers to indicate directions, with validation results showing that visually impaired users could effectively perceive and interpret these navigation cues [73]. Katemake et al. explored the feasibility of LED light-based assistance for visually impaired individuals by installing LED strips along the edges of obstacles to enhance edge visibility. Their findings indicated that this approach helped visually impaired users better perceive and avoid obstacles [78]. These studies collectively demonstrate the feasibility of visual feedback as a medium for information transmission, showing that visually impaired individuals can accurately perceive and understand optimized visual cues.

2.3.2. Harnessing Light Perception for Visual Assistance

Although many visual assistive technologies require users to have functional residual vision, which makes it difficult for legally blind individuals to use these devices, studies have shown that approximately 90% of legally blind individuals still retain light perception abilities [2,3]. While such residual visual function is insufficient for performing fine visual tasks, it is sufficient for visually impaired individuals to perceive changes in light sources in their environment [28]. This basic light perception ability holds significant value in the daily activities of visually impaired individuals. For example, a study by Ross et al. found that visually impaired individuals can use environmental light sources (such as light at the end of a corridor) to assist with orientation and pathfinding [79]. Based on this finding, Yang C et al. developed a wearable navigation device using LED light strips. Experimental results demonstrated that this system significantly improved navigation efficiency (with the average completion time reduced by 28.5%) and accuracy (with an average deviation of less than 4°) [27]. In real-world environment testing, the collision rate of users employing this visual feedback device (0.25 collisions per 100 m) was significantly lower than the rate observed when using tactile feedback systems (1.58 collisions per 100 m) [27]. These findings provide important evidence for exploring new pathways in information acquisition for visually impaired individuals, indicating that well-designed light signal feedback systems can effectively leverage their light perception capability to provide environmental awareness support. Assistive technology applications for visually impaired individuals are summarized in Table 1.
Building upon the aforementioned research, this study aims to explore the comparative effectiveness of both haptic and visual feedback in street-crossing scenarios for visually impaired individuals. Specifically, we propose to design a light-based visual cue system [27] and a vibration-based tactile feedback device [29,67] to systematically evaluate the effectiveness of these two feedback modalities in conveying traffic signal states and surrounding vehicle activity information. By systematically comparing the distinctive characteristics of light and tactile perception in visually impaired individuals, we aim to determine which feedback method provides optimal support in complex traffic environments. The findings are expected to provide reliable theoretical foundations and practical guidance for developing safer and more intuitive assistive systems for visually impaired individuals.

3. Study 1: Haptic and Visual Feedback for Visually Impaired Pedestrians Crossing at Traffic Signals

This study investigates how well haptic and visual feedback support visually impaired pedestrians with residual light perception in crossing streets at traffic signal intersections. The feedback modes convey timing information to help participants make crossing decisions, allowing them to interpret the traffic light status through the feedback’s urgency levels. We systematically examine the factors influencing both feedback approaches in terms of interaction design, user evaluation, and experimental data analysis.

3.1. Experimental Design

Our study utilized the Pico 4 headset for system development and designed two independent feedback channels for comparison: haptic feedback was delivered through a bHaptics Tactosy vest equipped with 20 haptic units, while visual feedback was provided via two LED light strips, each containing six lighting units. Through a detailed system implementation process and experimental setup, we conducted an in-depth evaluation of the effectiveness of these two feedback methods under different parameter configurations in assisting visually impaired individuals to determine safe crossing timing at traffic lights.

3.1.1. Haptic and Visual Feedback Design in Assistive Systems for Visually Impaired Pedestrians

Haptic feedback serves as an important alternative mode for visually impaired individuals to perceive external information [24,47]. Research on vibration-based feedback [42,43,44] has demonstrated varying tactile sensitivity across different body regions [80]. While studies have explored haptic feedback on various body locations, including the waist [81], arm [82], and leg [83], the chest and abdomen regions have shown particularly promising results. Kim et al. discovered that these regions exhibit exceptional tactile recognition capabilities, with accuracy rates reaching 95% [84]. Park et al. further demonstrated that, by controlling the vibration frequency and contact area parameters, chest and abdomen-based feedback can effectively convey different levels of urgency perception [85]. Studies have also revealed that approximately 90% of legally blind individuals retain basic light perception abilities [2,3], allowing them to detect changes in light characteristics [27,28]. Yang C et al. confirmed that head-mounted LED arrays could effectively convey information through controlled light patterns [27].
Based on these findings, this study implements two parallel feedback schemes, as shown in Figure 1: a chest and abdomen vibration system that conveys urgency information through controlled frequency and contact area parameters, and a head-mounted LED array system that communicates the same information through adjustments in light frequency and illumination area. This dual-mode design aims to accommodate diverse user preferences and needs while ensuring consistent urgency perception across both feedback channels.

3.1.2. Experimental Settings

Study 1 used the Pico 4 as the experimental platform. This device is equipped with two 2.56-inch fast LCD displays and a proprietary high-precision four-camera environmental tracking system combined with infrared optical positioning, enhancing tracking accuracy through optical sensors. The experimental application was developed using Unity 2021.3.27f1c2 along with the PICO Unity Integration SDK, version 2.5.0. The vibration haptic devices were operated via an application built in Unity 2019.4.40f1c1 using the bHaptics haptic plugin; this haptic control application was deployed on a Windows 11 desktop computer. The LED light strips were controlled via a Halo board, with commands published over the MQTT (Message Queuing Telemetry Transport) protocol to synchronize the vibration haptic devices and the LED light strips. An MQTT server was hosted on Tencent Cloud using EMQX V4.0.4 running on CentOS 7. Both the haptic and visual feedback control applications subscribed to the relevant pattern topics and triggered the corresponding feedback modes upon receiving the data.
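To make the synchronization mechanism concrete, the sketch below shows the general publish/subscribe flow in Python using the paho-mqtt client (1.x callback API). The broker address, topic name, and message fields are illustrative assumptions, not the actual schema used in the study.

    import json
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"   # hypothetical; the study hosted EMQX on Tencent Cloud
    TOPIC = "crossing/feedback"     # hypothetical topic name

    def publish_pattern(modality, frequency_hz, size):
        """Experiment controller: broadcast one feedback condition per trial."""
        client = mqtt.Client()
        client.connect(BROKER, 1883, 60)
        payload = json.dumps({"modality": modality, "freq": frequency_hz, "size": size})
        client.publish(TOPIC, payload, qos=1)  # qos=1 so both devices receive it
        client.disconnect()

    def on_message(client, userdata, msg):
        """Feedback app (haptic vest or LED controller): trigger the received pattern."""
        pattern = json.loads(msg.payload)
        print(f"trigger {pattern['modality']} at {pattern['freq']} Hz, size {pattern['size']}")

    def run_feedback_listener():
        client = mqtt.Client()
        client.on_message = on_message
        client.connect(BROKER, 1883, 60)
        client.subscribe(TOPIC, qos=1)
        client.loop_forever()  # block and dispatch incoming pattern messages

Because both the haptic and visual applications subscribe to the same topic, a single published message drives both devices, which is what keeps the vibration motors and LED strips in step.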

3.1.3. Independent Variables

Study 1 explored the impact of different feedback modes on street-crossing decision-making. The experiment included three independent variables, focusing on the effects of haptic and visual feedback. The specific independent variables are as follows:
  • Feedback modality (two levels): haptic feedback and visual feedback.
  • Feedback frequency (four levels): 0.5 Hz, 1.0 Hz, 2.0 Hz, and 4.0 Hz, as shown in Figure 2.
  • Feedback size (three levels): small, medium, and large, as shown in Figure 3.

3.1.4. Experimental Design

Both feedback modes employed a periodic stimulation pattern (Figure 2), with haptic and visual feedback using the same frequency pattern. Each frequency cycle was evenly divided between stimulation (1) and pause (0) phases, continuing until the participant made a decision. Each mode featured 12 parameter combinations (4 frequencies × 3 sizes), totaling 24 experimental conditions. The experiment proceeded in two stages. In the pre-experiment phase, participants familiarized themselves with each feedback mode’s characteristics (24 trials). In the formal experiment phase (24 trials), we randomized the order of haptic and visual modes to minimize sequence effects. Within each mode, parameter combinations were presented in a fixed order (size increasing first, then frequency) to aid participants’ recall in subsequent questionnaires and interviews. This design facilitated more accurate evaluations of perceived urgency and decision confidence under each condition. Each participant thus completed a total of 48 trials (24 haptic; 24 visual), with 24 in the pre-experiment phase and 24 in the formal experiment phase.
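The timing of this stimulation pattern can be expressed compactly. The sketch below assumes the 50% duty cycle described above and drives a generic emit function, a hypothetical stand-in for the vest or LED call, until the participant responds.

    import time

    def run_pattern(frequency_hz, emit, decision_made):
        """Drive a 50%-duty-cycle on/off pattern until decision_made() returns True."""
        half_cycle = 0.5 / frequency_hz  # e.g., 0.125 s on / 0.125 s off at 4.0 Hz
        while not decision_made():
            emit(1)                      # stimulation phase
            time.sleep(half_cycle)
            emit(0)                      # pause phase
            time.sleep(half_cycle)

At 0.5 Hz this yields 1 s of stimulation followed by 1 s of pause per cycle, while at 4.0 Hz each phase lasts only 0.125 s, which is how the same pattern structure conveys four distinct urgency levels.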

3.1.5. Participants and Procedure

We recruited 16 participants (9 males, 7 females) from a university campus, with an average age of 23.7 years (SD = 1.06). Among the participants, 11 had prior experience with haptic or visual assistive devices, while 5 had no previous exposure to such technologies. During the experiment, the participants closed their eyes to simulate visual impairment [6]. Hassan et al. demonstrated that sighted individuals with closed eyes and visually impaired pedestrians exhibit similar levels of consistency in their street-crossing decisions when relying on non-visual senses [6]. While this simulation cannot perfectly replicate the experience of those with long-term visual impairments, it provides a valuable preliminary approach to studying the challenges faced by individuals with visual impairments and evaluating potential assistive technologies in controlled settings.
Using a Pico controller, participants pushed the joystick forward to indicate “cross” or pulled it backward to indicate “wait” and then pressed the trigger to proceed to the next trial. After completion, the participants filled out two questionnaires. The first measured urgency using a five-point scale [86]: 0 (not perceived), 1 (insignificant), 2 (low priority), 3 (high priority), and 4 (urgent). This choice aligns with research showing that users can distinguish at least four levels of urgency in vibration cues [86,87]. The second captured decision confidence, from 0% to 100%, following methods in prior human–machine interaction studies [88]. Together, these metrics offered a comprehensive view of how different feedback modes impacted user decision-making.

3.2. Results

In Study 1, we collected data on user decisions, decision time, and behavioral metrics, including head rotation angles and movement distances, to evaluate performance. As the data were not normally distributed, we employed generalized estimating equations (GEEs) [89,90,91] to analyze our within-subjects experimental design with multiple factors (feedback modality, frequency, and size). GEE models can handle non-normally distributed dependent variables while accounting for the correlated nature of repeated-measures data [90,91]. For binary user decisions (0 or 1), we specified a binomial distribution with a probit link function. For decision time and behavioral metrics, we used a gamma distribution with a log link function. We also assessed users’ subjective perceptions through questionnaires. For urgency perception (which contained zero values), we applied a negative binomial distribution with an identity link function, while, for decision confidence (range: 0–100; no zero values), we used a gamma distribution with a log link function. All models employed an exchangeable correlation structure to account for within-subject correlations and included feedback modality, frequency, and size as main effects, along with their interactions. This analysis provided a thorough understanding of how different feedback modalities influence user decision-making and behavior.
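As a concrete illustration of this model specification, the following is a minimal sketch using the GEE implementation in Python’s statsmodels; the data file and column names are hypothetical, and this is not the authors’ actual analysis code.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("study1_trials.csv")  # hypothetical: one row per trial

    # Binary crossing decision: binomial family with a probit link
    cross_model = smf.gee(
        "decision ~ C(modality) * C(frequency) * C(size)",
        groups="participant", data=df,
        family=sm.families.Binomial(link=sm.families.links.Probit()),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()

    # Decision time: strictly positive and right-skewed, so gamma family with a log link
    time_model = smf.gee(
        "decision_time ~ C(modality) * C(frequency) * C(size)",
        groups="participant", data=df,
        family=sm.families.Gamma(link=sm.families.links.Log()),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()

    print(cross_model.summary())
    print(time_model.summary())

The urgency and confidence models follow the same template, with the negative binomial and gamma families substituted as described above.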

3.2.1. Cross Rate, Decision Time, and User Behavior Data in Study 1

Cross rate.
We recorded participants’ crossing decisions under different feedback modes and parameter-combination conditions; “1” represented the decision to cross, and “0” represented the decision to wait. Main effects were found for frequency (χ²(3) = 14.17, p < 0.01) and size (χ²(2) = 12.15, p < 0.01). Additionally, we observed significant interaction effects between feedback and frequency (χ²(3) = 9.44, p < 0.05) and between feedback and size (χ²(2) = 12.35, p < 0.01). The mean cross rate across feedback modality, frequency, and size is illustrated in Figure 4a. Post hoc Bonferroni pairwise comparisons showed that the cross rate at 0.5 Hz (0.72) was significantly higher than at 2.0 Hz (0.27) (p < 0.05) and 4.0 Hz (0.13) (p < 0.01), and that at 1.0 Hz (0.55) was significantly higher than at 2.0 Hz (0.27) (p < 0.01) and 4.0 Hz (0.13) (p < 0.01). The cross rate for the small size (0.54) was significantly higher than for the large size (0.26) (p < 0.01), and that for the medium size (0.40) was significantly higher than for the large size (0.26) (p < 0.01).
Decision time.
We recorded participants’ decision time under different feedback modes and parameter-combination conditions, defined as the time interval from receiving the feedback signal to making the final crossing decision. Main effects were found for feedback (χ²(1) = 6.71, p < 0.05), frequency (χ²(3) = 40.47, p < 0.001), and size (χ²(2) = 27.35, p < 0.001). Additionally, we observed significant interaction effects between feedback and frequency (χ²(3) = 11.27, p < 0.05) and between feedback and size (χ²(2) = 9.70, p < 0.01). The mean decision time across feedback modality, frequency, and size is illustrated in Figure 4b. Post hoc Bonferroni pairwise comparisons showed that the decision time for haptic feedback (4.46 s) was significantly longer than for visual feedback (3.66 s) (p < 0.05). Decision time at 4.0 Hz (3.49 s) was significantly shorter than at 0.5 Hz (5.05 s) (p < 0.01) and 1.0 Hz (4.11 s) (p < 0.05); at 2.0 Hz (3.67 s), it was significantly shorter than at 0.5 Hz (5.05 s) (p < 0.01) and 1.0 Hz (4.11 s) (p < 0.01); and at 1.0 Hz (4.11 s), it was significantly shorter than at 0.5 Hz (5.05 s) (p < 0.01). Decision time for the large size (3.61 s) was significantly shorter than for the small size (4.36 s) (p < 0.05) and the medium size (4.18 s) (p < 0.01).
Head rotation angle.
We recorded the total head rotation angle during the decision time. A main effect was found for feedback (χ²(1) = 16.64, p < 0.001). The mean head rotation angle across feedback modalities, frequencies, and sizes is illustrated in Figure 5a. Post hoc Bonferroni pairwise comparisons showed that the head rotation angle for haptic feedback (8.04 degrees) was significantly larger than for visual feedback (5.27 degrees) (p < 0.01).
Head movement distance.
We recorded the total head movement distance during the decision time. Main effects were found for feedback (χ²(1) = 27.78, p < 0.001), frequency (χ²(3) = 24.55, p < 0.001), and size (χ²(2) = 11.06, p < 0.01). Additionally, we observed a significant interaction effect between feedback and size (χ²(2) = 7.48, p < 0.05). The mean total head movement distance across feedback modality, frequency, and size is illustrated in Figure 5b. Post hoc Bonferroni pairwise comparisons showed that the total head movement distance for haptic feedback (3.81 cm) was significantly longer than for visual feedback (1.23 cm) (p < 0.01).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, a main effect was found for feedback (χ²(1) = 11.09, p < 0.001); post hoc Bonferroni pairwise comparisons showed that the distance for haptic feedback (4.66 cm) was significantly longer than for visual feedback (2.99 cm) (p < 0.05). For the Y dimension, main effects were found for feedback (χ²(1) = 15.86, p < 0.001) and frequency (χ²(3) = 17.90, p < 0.001); the distance for haptic feedback (2.30 cm) was significantly longer than for visual feedback (1.18 cm) (p < 0.01), and the distance at 0.5 Hz (2.34 cm) was significantly longer than at 1.0 Hz (1.53 cm) (p < 0.05) and 4.0 Hz (1.32 cm) (p < 0.05). For the Z dimension, a main effect was found for feedback (χ²(1) = 16.05, p < 0.001); the distance for haptic feedback (4.40 cm) was significantly longer than for visual feedback (3.13 cm) (p < 0.01).

3.2.2. User Perception of Feedback Patterns

After the test, the participants completed a questionnaire measuring both urgency and confidence. GEEs revealed significant effects of frequency (χ²(3) = 248.49, p < 0.01) and size (χ²(2) = 87.48, p < 0.01) on urgency (Figure 6). Post hoc Bonferroni pairwise comparisons showed that urgency at 4.0 Hz (3.61) was significantly higher than at 2.0 Hz (3.04) (p < 0.01), 1.0 Hz (2.41) (p < 0.01), and 0.5 Hz (1.84) (p < 0.01); at 2.0 Hz (3.07), it was significantly higher than at 1.0 Hz (2.41) (p < 0.01) and 0.5 Hz (1.84) (p < 0.01); and at 1.0 Hz (2.41), it was significantly higher than at 0.5 Hz (1.84) (p < 0.01). Urgency for the large size (3.34) was significantly higher than for the medium size (2.77) (p < 0.01) and the small size (2.07) (p < 0.01); that for the medium size (2.77) was significantly higher than for the small size (2.07) (p < 0.01).
For confidence ratings, GEEs revealed significant effects of frequency (χ²(3) = 35.38, p < 0.001) and size (χ²(2) = 10.77, p < 0.01) on confidence (Figure 7). Post hoc Bonferroni pairwise comparisons showed that confidence at 4.0 Hz (92.87%) was significantly higher than at 2.0 Hz (85.73%) (p < 0.01), 1.0 Hz (79.77%) (p < 0.01), and 0.5 Hz (79.64%) (p < 0.01); at 2.0 Hz (85.73%), it was significantly higher than at 1.0 Hz (79.77%) (p < 0.05). Confidence for the large size (87.77%) was significantly higher than for the medium size (81.43%) (p < 0.05).
Interview results show that over 62% of respondents (10 preferring haptic feedback versus 6 preferring visual feedback) explicitly expressed a preference for the haptic feedback system, stating that its dynamic vibration frequency changes conveyed the urgency of situations more intuitively. This immediate physical perception helped them better understand and assess the level of danger in the current context. As one participant noted, “When the vibration frequency is very high, I can clearly feel that ‘danger is approaching’, which gives me more confidence in judging the level of urgency and making quick decisions”. However, some participants pointed out that an excessive vibration amplitude or overly high frequencies could cause discomfort or fatigue, indicating the need to strike a balance between alertness and user comfort. In contrast, participants reported that visual feedback conveyed a stronger sense of urgency than haptic feedback, especially the intense urgency evoked by high-frequency flashing, which made them more cautious during decision-making. As the flashing frequency increased, participants’ confidence in their decisions also improved. They generally believed that flashing frequency was the key factor in conveying urgency: faster flashes effectively induced a sense of time pressure, while lower frequencies caused some users to shift their attention to perceiving changes in size. However, the combination of high-frequency flashing and large size also had limitations, as it could lead to perceptual fatigue and confusion in judgment. As one user observed, “When the light flashes intensely, it’s hard to tell whether the change comes from the frequency or the size. I feel overwhelmed by the flashing and have to make an immediate decision”.
The user feedback highlights a trade-off between the intuitive nature of haptic feedback and the stronger urgency signals from visual feedback, suggesting that an optimal system should balance perceptual effectiveness with user comfort to avoid cognitive overload during critical decision-making moments. A few participants further suggested incorporating directional motion cues, such as left-to-right vibration sequences or light flows, to better perceive the movement of oncoming vehicles. They noted that, while frequency effectively conveyed urgency, integrating directional information could enhance their ability to assess approaching hazards in more dynamic environments. These insights highlight the importance of frequency as a core parameter for both haptic and visual feedback, while size bolsters users’ confidence and situational awareness. Building on these findings, we extended our investigation beyond controlled traffic light crossings to more complex, vehicle-based crossing scenarios. Study 2 explores how integrating directional cues alongside frequency and size can further support visually impaired pedestrians in making safer crossing decisions when navigating roads with moving vehicles.

4. Study 2: Haptic and Visual Feedback for Assisting Visually Impaired Pedestrians in Vehicle-Based Crossing Decisions

Unlike traffic light-controlled intersections, where pedestrian signals regulate vehicle movement and create predictable stop-and-go patterns, vehicle-based crossings present greater uncertainty for visually impaired pedestrians. In scenarios such as mid-block crossings or uncontrolled intersections, the absence of explicit stop signals increases cognitive load, requiring pedestrians to judge the speed, direction, and intent of approaching vehicles in real time. These conditions demand more dynamic situational awareness, making it essential to explore how additional sensory cues—such as directional motion feedback—can support safer crossing decisions.
Extending the experimental setup and feedback modalities from Study 1, Study 2 focused on the vehicle crossing scenario. While retaining the same frequency settings, we refined the size variable and incorporated the feedback state and direction as additional cue dimensions to enhance visually impaired users’ ability to judge safe crossing times. We assessed these two modes through interaction design, user evaluation, and experimental data to clarify how haptic or visual feedback might facilitate crossing decisions.

4.1. Experimental Design

Prior work indicates that haptic information transmission can be optimized through parameter adjustments including speed, direction, intensity, and positioning [92]. Dynamic haptic signals have shown multiple advantages over static feedback: they effectively guide attention and convey hazard information while improving processing speed and accuracy [93,94], provide more natural and intuitive interactions [95,96], and enable the robust recognition of continuous motion and direction [97]. For visual feedback, research confirms that many visually impaired individuals retain sufficient light perception capabilities to discern positional and brightness changes [27,28].
Building on these findings, Study 2 maintained the same feedback implementation as Study 1 (as shown in Figure 8) while introducing additional parameters to examine both static and dynamic patterns, as well as directional cues indicating the direction of vehicle approach. This enhanced design allows for a systematic comparison of how different signal patterns aid safe crossing decisions across both feedback channels.
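One plausible realization of such a directional cue, sketched below under illustrative assumptions, is to sweep the active region across the row of feedback units (vibration motors or LED segments) in the direction of vehicle travel. Here, set_unit is a hypothetical device call, and the one-sweep-per-cycle timing is an assumption rather than the study’s exact parameterization.

    import time

    def sweep(direction, n_units, frequency_hz, set_unit):
        """Activate units one at a time, left-to-right or right-to-left."""
        order = range(n_units) if direction == "left_to_right" else range(n_units - 1, -1, -1)
        dwell = 1.0 / (frequency_hz * n_units)  # one full sweep per stimulation cycle
        for i in order:
            set_unit(i, on=True)   # vibrate this motor / light this LED segment
            time.sleep(dwell)
            set_unit(i, on=False)

Repeating the sweep at the trial’s feedback frequency preserves the urgency coding from Study 1 while adding the direction dimension.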

4.1.1. Independent Variables

In Study 2, based on the evaluation results from Study 1, we adopted and adjusted the settings to optimize the user experience. In the experiment, haptic feedback was compared with visual feedback. The independent variables included the following:
  • Feedback modality (two levels): haptic feedback and visual feedback.
  • Feedback direction (two levels): left and right.
  • Feedback state (two levels): dynamic and static, as shown in Figure 9.
  • Feedback frequency (four levels): 0.5 Hz, 1.0 Hz, 2.0 Hz, and 4.0 Hz, as shown in Figure 2.
  • Feedback size (two levels): small and medium.

4.1.2. Experimental Design

Study 2 employed both haptic and visual feedback modes, maintaining the same periodic stimulation pattern as in Study 1 (see Figure 2), with each mode containing 32 parameter combinations (2 directions × 2 states × 4 frequencies × 2 sizes). The experiment was divided into two stages: a pre-experiment phase and a formal experiment phase. Each phase consisted of 64 trials (32 haptic feedback; 32 visual feedback), resulting in a total of 128 trials per participant. The pre-experiment phase aimed to familiarize participants with the basic characteristics of the feedback modes. In the formal experiment phase, the presentation order of the variables feedback mode, state, and direction was randomized to avoid sequence effects. Meanwhile, within each feedback mode, the parameter combinations were presented in a fixed order (with the size gradually increasing first, followed by frequency gradually increasing). This design helped participants more accurately recall and evaluate their perceived urgency and decision confidence under each condition during subsequent questionnaire assessments and interviews.
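For reference, the condition grid per modality can be enumerated directly; the sketch below is only a bookkeeping illustration of the factor crossing described above.

    import itertools

    directions = ["left", "right"]
    states = ["dynamic", "static"]
    frequencies = [0.5, 1.0, 2.0, 4.0]  # Hz
    sizes = ["small", "medium"]

    conditions = list(itertools.product(directions, states, frequencies, sizes))
    assert len(conditions) == 32  # per modality; 64 trials per phase across both modes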

4.1.3. Participants and Procedure

Study 2 recruited 16 participants (9 males, 7 females) from a university campus, with an average age of 23.7 years (SD = 1.13). Among the participants, nine had prior experience with haptic or visual assistive devices, while seven had no previous exposure to such technologies. As in Study 1, the participants simulated a visually impaired state by closing their eyes and made crossing decisions based on the feedback received. After the experiment, the participants completed the same two evaluation questionnaires concerning urgency and decision confidence as in Study 1.

4.2. Results

In Study 2, we collected the same types of data as in Study 1, including user decisions, decision time, behavioral metrics, and subjective perceptions. We employed the same GEE method for analysis, using consistent distribution and link function specifications for each collected data type, as in Study 1. All models utilized an exchangeable correlation structure to account for within-subject correlations and included feedback modality, direction, state, frequency, and size as main effects, along with their interactions. This comprehensive analysis enabled us to gain deep insights into how different feedback modalities influence users’ decision-making behaviors and subjective experiences.

4.2.1. Cross Rate, Decision Time, and User Behavior Data in Study 2

Cross rate.
Main effects were found for frequency (χ²(3) = 49.27, p < 0.001), size (χ²(1) = 52.41, p < 0.001), and state (χ²(1) = 3.88, p < 0.05). Additionally, we observed significant interaction effects between feedback and frequency (χ²(3) = 24.88, p < 0.001) and between feedback and size (χ²(1) = 4.64, p < 0.05). The mean cross rate across feedback modalities, frequencies, sizes, directions, and states is illustrated in Figure 10a. Post hoc Bonferroni pairwise comparisons showed that the cross rate at 0.5 Hz (0.76) was significantly higher than at 2.0 Hz (0.15) (p < 0.01) and 4.0 Hz (0.09) (p < 0.01), and that at 1.0 Hz (0.60) was significantly higher than at 2.0 Hz (0.15) (p < 0.01) and 4.0 Hz (0.09) (p < 0.01). The cross rate for the small size (0.51) was significantly higher than for the medium size (0.24) (p < 0.01).
Decision time.
Main effects were found for feedback (χ²(1) = 12.81, p < 0.001), frequency (χ²(3) = 51.94, p < 0.001), size (χ²(1) = 11.07, p < 0.001), and state (χ²(1) = 33.28, p < 0.001). Additionally, we observed significant interaction effects between feedback and size (χ²(1) = 33.28, p < 0.05) and between feedback and state (χ²(1) = 63.03, p < 0.001). The mean decision time across feedback modality, frequency, size, direction, and state is illustrated in Figure 10b. Post hoc Bonferroni pairwise comparisons showed that the decision time for visual feedback (3.78 s) was significantly longer than for haptic feedback (3.07 s) (p < 0.01). Decision time at 4.0 Hz (2.85 s) was significantly shorter than at 0.5 Hz (4.10 s) (p < 0.01), 1.0 Hz (3.63 s) (p < 0.01), and 2.0 Hz (3.17 s) (p < 0.01); at 2.0 Hz (3.17 s), it was significantly shorter than at 0.5 Hz (4.10 s) (p < 0.01) and 1.0 Hz (3.63 s) (p < 0.01); and at 1.0 Hz (3.63 s), it was significantly shorter than at 0.5 Hz (4.10 s) (p < 0.05). Decision time for the medium size (3.28 s) was significantly shorter than for the small size (3.53 s) (p < 0.01), and decision time in the dynamic state (3.73 s) was significantly longer than in the static state (3.11 s) (p < 0.01).
Head rotation angle.
We recorded the total head rotation angle during the decision time. Main effects were found for frequency (χ²(3) = 56.30, p < 0.001) and state (χ²(1) = 7.94, p < 0.01); no other main or interaction effects were found. The mean head rotation angle across feedback modalities, frequencies, sizes, directions, and states is illustrated in Figure 11a. Post hoc Bonferroni pairwise comparisons showed that the total head rotation angle at 0.5 Hz (8.07 degrees) was significantly larger than at 1.0 Hz (5.51 degrees) (p < 0.05), 2.0 Hz (5.13 degrees) (p < 0.05), and 4.0 Hz (4.64 degrees) (p < 0.01); that at 1.0 Hz (5.51 degrees) was significantly larger than at 4.0 Hz (4.64 degrees) (p < 0.05). The head rotation angle in the dynamic state (6.38 degrees) was significantly larger than in the static state (5.11 degrees) (p < 0.05).
Head movement distance.
We recorded the total head movement distance during the decision time. Main effects were found for feedback (χ²(1) = 4.05, p < 0.05), frequency (χ²(3) = 344.99, p < 0.01), and size (χ²(1) = 27.22, p < 0.001). The mean head movement distance across feedback modalities, frequencies, sizes, directions, and states is illustrated in Figure 11b. Post hoc Bonferroni pairwise comparisons showed that the total head movement distance for the small size (2.80 cm) was significantly longer than for the medium size (1.56 cm) (p < 0.01).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, main effects were found for frequency (χ²(3) = 62.92, p < 0.001) and state (χ²(1) = 8.57, p < 0.01); post hoc Bonferroni pairwise comparisons showed that the distance at 0.5 Hz (4.72 cm) was significantly longer than at 1.0 Hz (3.16 cm) (p < 0.05) and 4.0 Hz (2.73 cm) (p < 0.05), and the distance in the dynamic state (3.89 cm) was significantly longer than in the static state (2.93 cm) (p < 0.05). For the Y dimension, main effects were found for frequency (χ²(3) = 51.14, p < 0.001), size (χ²(1) = 7.36, p < 0.01), and state (χ²(1) = 16.10, p < 0.001); the distance at 0.5 Hz (1.97 cm) was significantly longer than at 1.0 Hz (1.31 cm) (p < 0.05), 2.0 Hz (1.10 cm) (p < 0.01), and 4.0 Hz (1.03 cm) (p < 0.01), and that at 1.0 Hz (1.31 cm) was significantly longer than at 2.0 Hz (1.10 cm) (p < 0.05) and 4.0 Hz (1.03 cm) (p < 0.05). The distance for the small size (1.42 cm) was significantly longer than for the medium size (1.20 cm) (p < 0.05), and the distance in the dynamic state (1.46 cm) was significantly longer than in the static state (1.17 cm) (p < 0.01). For the Z dimension, a main effect was found for frequency (χ²(3) = 59.36, p < 0.001); the distance at 0.5 Hz (4.74 cm) was significantly longer than at 1.0 Hz (3.24 cm) (p < 0.05), 2.0 Hz (2.82 cm) (p < 0.01), and 4.0 Hz (2.71 cm) (p < 0.01).

4.2.2. User Perception of Feedback Patterns

After the test, participants completed a questionnaire measuring both urgency and confidence. GEEs revealed significant effects of frequency (χ²(3) = 506.53, p < 0.01), size (χ²(1) = 116.42, p < 0.01), and state (χ²(1) = 5.31, p < 0.05) on urgency; no other main or interaction effects were found. The mean urgency across feedback modalities, frequencies, sizes, directions, and states is illustrated in Figure 12. Post hoc Bonferroni pairwise comparisons showed that urgency at 4.0 Hz (3.81) was significantly higher than at 0.5 Hz (1.73) (p < 0.01), 1.0 Hz (2.38) (p < 0.01), and 2.0 Hz (3.16) (p < 0.01); at 2.0 Hz (3.16), it was significantly higher than at 0.5 Hz (1.73) (p < 0.01) and 1.0 Hz (2.38) (p < 0.01); and at 1.0 Hz (2.38), it was significantly higher than at 0.5 Hz (1.73) (p < 0.01). Urgency for the medium size (3.18) was significantly higher than for the small size (2.37) (p < 0.01), and urgency in the dynamic state (2.88) was significantly higher than in the static state (2.67) (p < 0.05).
For confidence ratings, GEEs revealed a significant effect of frequency (χ²(3) = 57.86, p < 0.001); no other main or interaction effects were found. The mean confidence across feedback modalities, frequencies, sizes, directions, and states is illustrated in Figure 13. Post hoc Bonferroni pairwise comparisons showed that confidence at 4.0 Hz (95.74%) was significantly higher than at 0.5 Hz (84.14%) (p < 0.01), 1.0 Hz (80.35%) (p < 0.01), and 2.0 Hz (83.82%) (p < 0.01).
In post-experiment interviews, over 81% of respondents (13 for haptic feedback; 3 for visual feedback) explicitly expressed a preference for the haptic feedback system, particularly favoring its dynamic tactile presentation (11 for the dynamic condition; 5 for the static condition). They felt that dynamic haptic feedback helped them perceive the vehicle’s trajectory more intuitively and accurately. As one participant noted, “The dynamic haptic feedback allowed me to clearly feel the process of the vehicle crossing by the side of my torso, which significantly boosted my confidence in making decisions”. Additionally, users unanimously agreed that changes in feedback frequency were crucial for perceiving the distance and urgency of approaching vehicles. They stated that the gradual increase in vibration or light frequency effectively conveyed a sense of “urgency” as the vehicle approached, while changes in the feedback area helped them judge the vehicle’s distance. The combination of these two dimensions allowed them to quickly assess danger and make cautious decisions. Most participants pointed out that high-frequency vibrations enabled them to recognize imminent danger more immediately than high-frequency visual cues, giving them greater confidence in their decision-making. Regarding the feedback area, one participant commented, “A smaller vibration area allowed me to perceive the cues more precisely; combined with frequency changes, I felt more confident in my decision-making”. Another user stated, “Frequency changes within a small area allowed me to quickly understand the cues while consuming fewer cognitive resources”.
This feedback revealed a strong preference for haptic systems that combine dynamic presentation with frequency modulation, suggesting that tactile cues may offer cognitive advantages for urgent safety decisions by providing spatially intuitive information while imposing a lower cognitive load than visual alternatives. However, some participants suggested that, in specific scenarios, static visual cues might offer unique advantages, particularly when multiple vehicles needed to be perceived; changes in light areas could intuitively convey information about traffic density. Participants also offered suggestions for enhancing the feedback modes. Some proposed using different frequencies or sizes to represent specific vehicle information, such as vehicle size, type, and speed, to better assist blind individuals in perceiving vehicular details and making more informed crossing decisions. As one participant stated, “Using different sizes to indicate vehicle sizes, and including non-motorized vehicles in the perceptual information, could ensure safety during daily travel”. These suggestions highlight the users’ need for multi-dimensional traffic information acquisition.
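To make the frequency and area encoding described above concrete, the sketch below shows one plausible mapping from an approaching vehicle’s distance to a directional cue. The four frequency steps match the 0.5–4.0 Hz levels studied here, but the distance thresholds and names are illustrative assumptions, not our system’s calibrated values:

    def encode_vehicle_cue(distance_m, approaching_from_left):
        """Map an approaching vehicle to a (frequency_hz, area, side) cue:
        closer vehicles yield a higher frequency (urgency) and a larger
        active area (proximity). Thresholds are illustrative only."""
        if distance_m > 30:
            freq, area = 0.5, "small"
        elif distance_m > 20:
            freq, area = 1.0, "small"
        elif distance_m > 10:
            freq, area = 2.0, "medium"
        else:
            freq, area = 4.0, "medium"
        side = "left" if approaching_from_left else "right"
        return freq, area, side

    print(encode_vehicle_cue(12.0, False))  # (2.0, 'medium', 'right')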

5. Discussion

We explored the effectiveness of haptic and visual feedback in assisting visually impaired individuals with crossing decisions. By systematically adjusting different feedback parameter combinations, we compared the performance of the two feedback methods in conveying critical information such as traffic light status and vehicle movement. The study found that, in the traffic light scenario (Study 1), visual feedback significantly reduced users’ decision-making time, whereas, in the vehicle crossing scenario (Study 2), haptic feedback demonstrated a more pronounced advantage. This section discusses the theoretical significance of these findings based on the existing literature, provides an in-depth analysis of the study’s limitations, and proposes future research directions and practical recommendations.

5.1. Effects of Feedback Urgency on Decision Time and Crossing Outcomes

In Study 1, focusing on the traffic light crossing scenario, visual feedback led to significantly shorter decision times than haptic feedback. Under different frequency conditions, participants interpreted visual feedback as conveying higher urgency, thereby accelerating their reaction times (see Figure 14a) and clarifying crossing intentions (Figure 14b). These findings suggest that visual feedback could promote timely and cautious decisions for individuals with residual light perception. However, from a user-experience perspective, most participants still preferred haptic feedback. While visual feedback showed advantages in decision-making speed, prior research has identified potential limitations, including perceptual fatigue and possible confusion with ambient lighting during extended use [27]. On balance, haptic feedback offered a more comfortable experience, whereas visual feedback demonstrated better performance in rapid decision-making for traffic signal scenarios.
Further analysis showed that the modality difference remained significant across different feedback area conditions. Visual feedback consistently led to faster decision times (see Figure 15a). However, participants’ crossing intentions were less explicit with visual feedback compared to haptic feedback (see Figure 15b). This suggests that, while visual feedback enhances decision speed, haptic feedback may better support clear action intentions, highlighting the complementary strengths of both modalities for different scenarios or preferences.
In Study 2, which addressed vehicle crossing, haptic feedback yielded two key benefits. First, it effectively reduced decision-making time; second, its ability to convey vehicle proximity in a tangible manner earned widespread user approval. Notably, when frequency-related decisions were considered (see Figure 16a), visual feedback maintained its clarity advantage observed in Study 1. However, area-based decisions (Figure 16b) revealed stronger performance for haptic feedback, as participants more readily sensed urgency through changes in vibration area [85].
Moreover, the advantages of haptic feedback extended beyond decision quality and were also evident in decision speed under various feedback parameters. The data showed that haptic feedback consistently achieved shorter decision times than visual feedback, both under different feedback area parameters (see Figure 17a) and feedback state parameters (see Figure 17b). Particularly under dynamic feedback parameters, the speed advantage of haptic feedback was especially pronounced, emphasizing its effectiveness in scenarios requiring rapid responses to moving vehicles.
The parameter analysis identified several feedback combinations that performed well in different scenarios. In Study 1 (see Table 2), the combinations of small (1.0 Hz) and large (1.0 Hz) for haptic feedback, and medium (0.5 Hz) and small (4.0 Hz) for visual feedback, effectively distinguished between “cross” and “wait” decision scenarios. In Study 2 (see Table 3), directional haptic feedback utilized combinations of small (0.5 Hz) and medium (2.0 Hz) in both dynamic and static conditions, while visual feedback consistently used small (0.5 Hz) and small (4.0 Hz) combinations in both states. These parameter combinations ensured both the timeliness of decisions and an appropriate level of alertness while avoiding discomfort caused by excessive urgency. Beyond technical parameters, we also examined subjective user preferences, which revealed distinct patterns across crossing scenarios: 62% of participants preferred visual feedback in the traffic light crossing scenario, while 81% favored haptic feedback in the vehicle crossing scenario. These contrasting preferences suggest that combining both feedback types may offer complementary benefits for comprehensive crossing assistance. In addition to preference data, we analyzed performance metrics to identify optimal implementation approaches. Although users’ overall decision times were shorter in static states during Study 2, further analysis showed no significant time difference between dynamic and static modes for haptic feedback (p > 0.05), whereas visual feedback was significantly faster under static conditions than dynamic (p < 0.001). Consequently, haptic feedback may adopt either dynamic or static approaches based on user preference, whereas visual feedback under static conditions appears optimal for efficient decision-making regarding vehicle movement.
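For designers who wish to reuse these settings, the combinations above can be captured in a small configuration table. The dictionary layout and helper below are our own illustration; the (size, frequency) values follow Tables 2 and 3:

    # Recommended (size, frequency in Hz) pairs per scenario and modality.
    FEEDBACK_PRESETS = {
        ("traffic_light", "haptic"): {"cross": ("small", 1.0), "wait": ("large", 1.0)},
        ("traffic_light", "visual"): {"cross": ("medium", 0.5), "wait": ("small", 4.0)},
        ("vehicle", "haptic"):       {"cross": ("small", 0.5), "wait": ("medium", 2.0)},
        ("vehicle", "visual"):       {"cross": ("small", 0.5), "wait": ("small", 4.0)},
    }

    def preset(scenario, modality, decision):
        size, freq = FEEDBACK_PRESETS[(scenario, modality)][decision]
        return {"size": size, "frequency_hz": freq}

    print(preset("vehicle", "haptic", "wait"))  # {'size': 'medium', 'frequency_hz': 2.0}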

5.2. Analysis of Confidence Levels in Relation to Decision Time and Crossing Behavior

In both Study 1 and Study 2, we conducted in-depth analyses relating users’ decision confidence to their crossing counts (see Table 4 and Table 5). Focusing on complete-confidence cases (confidence = 100%), we found that the visual feedback group consistently surpassed the haptic feedback group in generating fully confident decisions. The visual feedback group made 73 such decisions in Study 1, a 21.67% increase over the haptic feedback group’s 60 instances, and reached 180 fully confident decisions in Study 2 (90 dynamic, 90 static), exceeding the haptic feedback group’s 165 (87 dynamic, 78 static) by about 9.1%. In addition, the dynamic condition within the haptic feedback group effectively boosted user confidence, whereas the visual feedback group showed no such difference between static and dynamic modes. As the feedback frequency rose in both the signal-controlled (Study 1) and vehicle crossing (Study 2) scenarios, users receiving visual feedback often chose to wait. This pattern likely arises from the clarity of visual feedback [27], wherein higher frequencies heighten environmental awareness and encourage more cautious decisions. However, environmental conditions such as intense sunlight may reduce the perceptibility of visual cues [27], potentially diminishing user confidence in visual feedback under certain circumstances.
From the perspective of confidence levels, the decision time data were analyzed by categorizing confidence into three levels: full confidence (100%), high confidence (90–99%), and low confidence (<90%). In Study 1 (see Table 6), we examined the effect of different confidence levels on decision times in the signal-controlled crossing scenario. The results showed that decision times (see Table 7) for haptic feedback (full confidence: 3.65 s < high confidence: 4.24 s < low confidence: 4.79 s) were generally longer than those for visual feedback (full confidence: 3.45 s < low confidence: 3.82 s < high confidence: 4.15 s). This result indicates that visual feedback supports faster decisions across all confidence levels. In particular, when users lack confidence, high-frequency flashing in visual feedback appears to generate a stronger sense of urgency, prompting quicker action.
In Study 2, we examined both dynamic and static conditions in the vehicle crossing scenario (see Table 8). Under dynamic conditions (Table 7), both feedback modalities showed the longest decision times at full confidence: haptic feedback (high confidence: 2.88 s < low confidence: 3.27 s < full confidence: 3.53 s) and visual feedback (low confidence: 4.32 s < high confidence: 4.59 s < full confidence: 4.79 s). Moreover, at the same confidence level, haptic feedback yielded shorter decision times than visual feedback, possibly owing to dynamic vibration cues that intuitively signal impending hazards [93]. Under static conditions, haptic feedback decision times remained shorter once users reached high confidence (low confidence: 2.99 s < high confidence: 3.10 s < full confidence: 3.26 s), while visual feedback showed a different ordering (low confidence: 2.89 s < full confidence: 3.36 s < high confidence: 3.48 s). These findings show that, at or above high confidence, haptic feedback consistently enabled briefer decision times than visual feedback. The differing impact of confidence levels in Studies 1 and 2 thus reflects their distinct traffic environments: fixed light signals give clearer crossing windows [5], whereas vehicle-based judgments demand more cautious strategies [6], leading confident users to take additional time before proceeding.
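The confidence categorization used in this subsection is a simple binning step over the trial log. A minimal sketch (assuming a long-format DataFrame with hypothetical ‘confidence’ (0–100) and ‘decision_time’ columns):

    import pandas as pd

    def mean_time_by_confidence(trials):
        """Mean decision time per confidence level: <90%, 90-99%, 100%."""
        levels = pd.cut(
            trials["confidence"],
            bins=[0, 90, 100, 101],
            labels=["low (<90%)", "high (90-99%)", "full (100%)"],
            right=False,  # bins are [0, 90), [90, 100), [100, 101)
        )
        return trials.groupby(levels, observed=False)["decision_time"].mean()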

5.3. Implications for Haptic and Visual Feedback Design

Our study highlights the distinct advantages of haptic and visual feedback in different street-crossing contexts: visual feedback enhances decision efficiency at traffic lights, while haptic feedback proves especially useful in vehicle-based scenarios. Based on these findings, we offer the following design recommendations (a selection sketch follows the list):
  • Visual feedback significantly improved decision-making efficiency, conveying urgency that enabled users to act quickly even at lower confidence levels. Designers targeting individuals with light perception should leverage visual cues to foster timely and effective decisions.
  • Haptic feedback markedly enhanced decision efficiency, particularly for users with higher confidence, who could respond more rapidly than with visual feedback. Assistive systems requiring quick, confident decisions should prioritize haptic feedback to facilitate swift user reactions.
  • Visual feedback induces more cautious behavior among users, owing to its stronger sense of urgency, particularly under high-frequency cues. This approach is beneficial for scenarios that demand heightened risk awareness, such as crossing busy multi-lane roads.
  • Across both scenarios, a notable user preference emerged for haptic feedback, with participants citing comfort and intuitive information delivery. Given the respective strengths of both modalities, assistive systems should allow users to select their preferred method based on personal needs and specific crossing circumstances.
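As a compact summary of these recommendations, a minimal default-selection sketch (the scenario labels and function are our own illustration; the defaults simply encode the observations above):

    def choose_feedback_modality(scenario, user_preference=None):
        # Recommendation 4: an explicit user preference always wins.
        if user_preference in ("haptic", "visual"):
            return user_preference
        if scenario == "traffic_light":
            return "visual"  # faster decisions at signalized crossings (Study 1)
        if scenario == "vehicle":
            return "haptic"  # quicker judgments of approaching vehicles (Study 2)
        return "haptic"      # participants generally preferred haptic feedback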

6. Conclusions

This study investigated the effectiveness of haptic and visual feedback in assisting visually impaired individuals with safe street-crossing decisions. Through two controlled experiments, we examined how different feedback modalities influenced decision-making efficiency, user confidence, and perceived urgency across distinct crossing scenarios: traffic signal-controlled intersections and vehicle-based crossings. Key findings indicate that, in signal-controlled crossings, visual feedback significantly reduced decision time, enabling quicker and more confident decisions, though some users experienced perceptual fatigue. Haptic feedback, while slightly slower, was preferred for its intuitive and comfortable nature. In vehicle-based crossings, haptic feedback was more effective, particularly in dynamic conditions, enhancing situational awareness and enabling faster decisions.
These findings underscore the importance of tailoring assistive feedback to specific street-crossing environments. Future research should explore hybrid feedback systems that integrate both modalities to optimize safety and user experience. We acknowledge the limitations of our study, particularly the use of simulated environments rather than real-world settings, and the lack of participants with actual visual impairments. Future studies should expand to real-world urban environments with larger, more diverse participant groups, particularly individuals with varying degrees of visual impairment, including those with residual light perception. Field testing should occur across multiple intersection types under various traffic and environmental conditions, followed by longitudinal studies to assess adaptation and long-term benefits. This ecological approach would significantly enhance the system’s validation and generalizability. By leveraging multimodal feedback systems, we can significantly improve the mobility independence and safety of visually impaired pedestrians across various urban settings.

Author Contributions

Conceptualization, G.R., J.H.L. and G.W.; methodology, G.R. and T.H.; software, G.R.; validation, T.H. and J.H.L.; formal analysis, Z.H.; investigation, Z.H. and W.L.; resources, G.W.; data curation, Z.H.; writing—original draft preparation, G.R. and Z.H.; writing—review and editing, G.R., J.H.L. and G.W.; visualization, W.L.; supervision, G.W.; project administration, G.R. and G.W.; funding acquisition, G.W. and J.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by (1) the Korea Institute of Police Technology (KIPoT; Police Lab 2.0 program) grant funded by MSIT (RS-2023-00281194), (2) a research grant (2024-0035) funded by HAII Corporation, (3) the Fujian Province Social Science Foundation Project (No. FJ2025MGCA042), and (4) the 2024 Fujian Provincial Lifelong Education Quality Improvement Project (No. ZS24005).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of School of Design Arts, Xiamen University of Technology (Approval Number: XMUT-SDA-IRB-2024-10/043, approval date: 15 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are contained within the manuscript. Raw data are available from the corresponding author upon request.

Acknowledgments

We appreciate all participants who took part in the studies.

Conflicts of Interest

Author Jee Hang Lee was employed by the company Institute for Advanced Intelligence Study. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D: Three-Dimensional
IoT: Internet of Things
V2X: Vehicle-to-Everything
MQTT: Message Queuing Telemetry Transport
GEEs: Generalized Estimating Equations

References

  1. World Health Organization. World Report on Vision; Technical Report; World Health Organization: Geneva, Switzerland, 2019. [Google Scholar]
  2. Weiland, J.; Humayun, M. Intraocular retinal prosthesis. IEEE Eng. Med. Biol. Mag. 2006, 25, 60–66. [Google Scholar] [CrossRef] [PubMed]
  3. Morley, J.W.; Chowdhury, V.; Coroneo, M.T. Visual Cortex and Extraocular Retinal Stimulation with Surface Electrode Arrays. In Visual Prosthesis and Ophthalmic Devices; Humana Press: Totowa, NJ, USA, 2007; pp. 159–171. [Google Scholar] [CrossRef]
  4. Tian, S.; Zheng, M.; Zou, W.; Li, X.; Zhang, L. Dynamic Crosswalk Scene Understanding for the Visually Impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1478–1486. [Google Scholar] [CrossRef]
  5. Huang, C.Y.; Wu, C.K.; Liu, P.Y. Assistive technology in smart cities: A case of street crossing for the visually-impaired. Technol. Soc. 2022, 68, 101805. [Google Scholar] [CrossRef]
  6. Hassan, S.E. Are normally sighted, visually impaired, and blind pedestrians accurate and reliable at making street crossing decisions? Investig. Ophthalmol. Vis. Sci. 2012, 53, 2593–2600. [Google Scholar] [CrossRef] [PubMed]
  7. Geruschat, D.R.; Hassan, S.E. Driver behavior in yielding to sighted and blind pedestrians at roundabouts. J. Vis. Impair. Blind. 2005, 99, 286–302. [Google Scholar] [CrossRef]
  8. Ihejimba, C.; Wenkstern, R.Z. DetectSignal: A Cloud-Based Traffic Signal Notification System for the Blind and Visually Impaired. In Proceedings of the 2020 IEEE International Smart Cities Conference (ISC2), Piscataway, NJ, USA, 28 September–1 October 2020; pp. 1–6, ISBN 9781728182940. [Google Scholar] [CrossRef]
  9. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Takagi, H.; Asakawa, C. NavCog: A navigational cognitive assistant for the blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; pp. 90–99, ISBN 9781450344081. [Google Scholar] [CrossRef]
  10. Khan, A.; Khusro, S. An insight into smartphone-based assistive solutions for visually impaired and blind people: Issues, challenges and opportunities. Univers. Access Inf. Soc. 2021, 20, 265–298. [Google Scholar] [CrossRef]
  11. Budrionis, A.; Plikynas, D.; Daniušis, P.; Indrulionis, A. Smartphone-based computer vision travelling aids for blind and visually impaired individuals: A systematic review. Assist. Technol. 2022, 34, 178–194. [Google Scholar] [CrossRef]
  12. See, A.R.; Sasing, B.G.; Advincula, W.D. A smartphone-based mobility assistant using depth imaging for visually impaired and blind. Appl. Sci. 2022, 12, 2802. [Google Scholar] [CrossRef]
  13. Dhod, R.; Singh, G.; Singh, G.; Kaur, M. Low cost GPS and GSM based navigational aid for visually impaired people. Wirel. Pers. Commun. 2017, 92, 1575–1589. [Google Scholar] [CrossRef]
  14. Bhatnagar, A.; Ghosh, A.; Florence, S.M. Android integrated voice based walking stick for blind with obstacle recognition. In Proceedings of the 2022 Fourth International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT), Mandya, India, 26–27 December 2022; pp. 1–4, ISBN 9781665456357. [Google Scholar] [CrossRef]
  15. Mai, C.; Chen, H.; Zeng, L.; Li, Z.; Liu, G.; Qiao, Z.; Qu, Y.; Li, L.; Li, L. A smart cane based on 2D LiDAR and RGB-D camera sensor-realizing navigation and obstacle recognition. Sensors 2024, 24, 870. [Google Scholar] [CrossRef]
  16. Guerreiro, J.; Sato, D.; Asakawa, S.; Dong, H.; Kitani, K.M.; Asakawa, C. CaBot: Designing and evaluating an autonomous navigation robot for blind people. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 28–30 October 2019; pp. 68–82, ISBN 9781450366762. [Google Scholar] [CrossRef]
  17. Wang, L.; Chen, Q.; Zhang, Y.; Li, Z.; Yan, T.; Wang, F.; Zhou, G.; Gong, J. Can Quadruped Guide Robots be Used as Guide Dogs? In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 4094–4100, ISBN 9781665491907. [Google Scholar] [CrossRef]
  18. Chen, Y.; Xu, Z.; Jian, Z.; Tang, G.; Yang, L.; Xiao, A.; Wang, X.; Liang, B. Quadruped guidance robot for the visually impaired: A comfort-based approach. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 12078–12084, ISBN 9798350323658. [Google Scholar] [CrossRef]
  19. National Cooperative Highway Research Program; Transportation Research Board; National Academies of Sciences, Engineering, and Medicine. Accessible Pedestrian Signals: A Guide to Best Practices (Workshop Edition 2010); The National Academies Press: Washington, DC, USA, 2011; p. 22902. [Google Scholar] [CrossRef]
  20. Manchikanti, N.; Kumar, G.S.; Vidyadhar, R.; Dendi, P.; Yarabolu, V. Innovation in Public Transportation and Improving Accessibility for the Blind. In Proceedings of the 2023 International Conference on Circuit Power and Computing Technologies (ICCPCT), Kollam, India, 10–11 August 2023; pp. 125–129, ISBN 9798350333244. [Google Scholar] [CrossRef]
  21. Geruschat, D.R.; Fujiwara, K.; Wall Emerson, R.S. Traffic gap detection for pedestrians with low vision. Optom. Vis. Sci. 2011, 88, 208–216. [Google Scholar] [CrossRef] [PubMed]
  22. Uematsu, A.; Inoue, K.; Hobara, H.; Kobayashi, H.; Iwamoto, Y.; Hortobágyi, T.; Suzuki, S. Preferred step frequency minimizes veering during natural human walking. Neurosci. Lett. 2011, 505, 291–293. [Google Scholar] [CrossRef] [PubMed]
  23. Bharadwaj, A.; Shaw, S.B.; Goldreich, D. Comparing tactile to auditory guidance for blind individuals. Front. Hum. Neurosci. 2019, 13, 443. [Google Scholar] [CrossRef]
  24. Jones, L.A.; Sarter, N.B. Tactile displays: Guidance for their design and application. Hum. Factors J. Hum. Factors Ergon. Soc. 2008, 50, 90–111. [Google Scholar] [CrossRef]
  25. Gustafson-Pearce, O.; Billett, E.; Cecelja, F. Comparison between audio and tactile systems for delivering simple navigational information to visually impaired pedestrians. Br. J. Vis. Impair. 2007, 25, 255–265. [Google Scholar] [CrossRef]
  26. Ross, D.A.; Blasch, B.B. Wearable interfaces for orientation and wayfinding. In Proceedings of the Fourth International ACM Conference on Assistive Technologies, Arlington, VA, USA, 13–15 November 2000; pp. 193–200. [Google Scholar] [CrossRef]
  27. Yang, C.; Xu, S.; Yu, T.; Liu, G.; Yu, C.; Shi, Y. LightGuide: Directing Visually Impaired People Along a Path Using Light Cues. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–27. [Google Scholar] [CrossRef]
  28. Hicks, S.L.; Wilson, I.; Muhammed, L.; Worsfold, J.; Downes, S.M.; Kennard, C. A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals. PLoS ONE 2013, 8, e67695. [Google Scholar] [CrossRef]
  29. Xu, S.; Yang, C.; Ge, W.; Yu, C.; Shi, Y. Virtual paving: Rendering a smooth path for people with visual impairment through vibrotactile and audio feedback. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–25. [Google Scholar] [CrossRef]
  30. Adebiyi, A.; Sorrentino, P.; Bohlool, S.; Zhang, C.; Arditti, M.; Goodrich, G.; Weiland, J.D. Assessment of feedback modalities for wearable visual aids in blind mobility. PLoS ONE 2017, 12, e0170531. [Google Scholar] [CrossRef]
  31. Flaxman, S.R.; Bourne, R.R.A.; Resnikoff, S.; Ackland, P.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.H.; et al. Global causes of blindness and distance vision impairment 1990–2020: A systematic review and meta-analysis. Lancet Glob. Health 2017, 5, e1221–e1234. [Google Scholar] [CrossRef]
  32. Colenbrander, A. Visual functions and functional vision. Int. Congr. Ser. 2005, 1282, 482–486. [Google Scholar] [CrossRef]
  33. Pascolini, D.; Mariotti, S.P. Global estimates of visual impairment: 2010. Br. J. Ophthalmol. 2012, 96, 614–618. [Google Scholar] [CrossRef]
  34. Zhao, Y.; Hu, M.; Hashash, S.; Azenkot, S. Understanding low vision people’s visual perception on commercial augmented reality glasses. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, New York, NY, USA, 6–11 May 2017; pp. 4170–4181. [Google Scholar] [CrossRef]
  35. Ashmead, D.H.; Guth, D.; Wall, R.S.; Long, R.G.; Ponchillia, P.E. Street crossing by sighted and blind pedestrians at a modern roundabout. J. Transp. Eng. 2005, 131, 812–821. [Google Scholar] [CrossRef]
  36. Ghilardi, M.C.; Simoes, G.; Wehrmann, J.; Manssour, I.H.; Barros, R.C. Real-time detection of pedestrian traffic lights for visually-impaired people. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8, ISBN 9781509060146. [Google Scholar] [CrossRef]
  37. Montanha, A.; Oprescu, A.M.; Romero-Ternero, M. A context-aware artificial intelligence-based system to support street crossings for pedestrians with visual impairments. Appl. Artif. Intell. 2022, 36, 2062818. [Google Scholar] [CrossRef]
  38. Li, X.; Cui, H.; Rizzo, J.R.; Wong, E.; Fang, Y. Cross-safe: A computer vision-based approach to make all intersection-related pedestrian signals accessible for the visually impaired. In Advances in Computer Vision; Springer International Publishing: Cham, Switzerland, 2020; Volume 944, pp. 132–146. ISBN 9783030177973. [Google Scholar] [CrossRef]
  39. Chen, L.B.; Pai, W.Y.; Chen, W.H.; Huang, X.R. iDog: An intelligent guide dog harness for visually impaired pedestrians based on artificial intelligence and edge computing. IEEE Sens. J. 2024, 24, 41997–42008. [Google Scholar] [CrossRef]
  40. Kaul, O.B.; Rohs, M. HapticHead: 3D guidance and target acquisition through a vibrotactile grid. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 2533–2539, ISBN 9781450340823. [Google Scholar] [CrossRef]
  41. Xiao, J. Haptic Feedback Research of Human-Computer Interaction in Human-Machine Shared Control Context of Smart Cars. In Proceedings of the HCI International 2023 Posters, Copenhagen, Denmark, 23–28 July 2023. [Google Scholar] [CrossRef]
  42. Spelmezan, D.; Jacobs, M.; Hilgers, A.; Borchers, J. Tactile motion instructions for physical activities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 2243–2252, ISBN 9781605582467. [Google Scholar] [CrossRef]
  43. Turmo Vidal, L.; Márquez Segura, E.; Parrilla Bel, L.; Waern, A. Exteriorizing Body Alignment in Collocated Physical Training. In Proceedings of the Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–6, ISBN 9781450356213. [Google Scholar] [CrossRef]
  44. Hayward, V.; Maclean, K.E. Do it yourself haptics: Part I. IEEE Robot. Autom. Mag. 2007, 14, 88–104. [Google Scholar] [CrossRef]
  45. Cholewiak, R.W.; Brill, J.C.; Schwab, A. Vibrotactile localization on the abdomen: Effects of place and space. Percept. Psychophys. 2004, 66, 970–987. [Google Scholar] [CrossRef]
  46. Van Erp, J.B.; Werkhoven, P.; Werkhoven, P. Validation of Principles for Tactile Navigation Displays. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 1687–1691. [Google Scholar] [CrossRef]
  47. Weber, B.; Schatzle, S.; Hulin, T.; Preusche, C.; Deml, B. Evaluation of a vibrotactile feedback device for spatial guidance. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 349–354, ISBN 9781457702990. [Google Scholar] [CrossRef]
  48. Wang, Y.; Kuchenbecker, K.J. HALO: Haptic alerts for low-hanging obstacles in white cane navigation. In Proceedings of the 2012 IEEE Haptics Symposium (HAPTICS), Vancouver, BC, Canada, 4–7 March 2012; pp. 527–532, ISSN 2324-7355. [Google Scholar] [CrossRef]
  49. Hertel, J.; Schaare, A.; Feuerbach, P.; Ariza, O.; Steinicke, F. STIC—Sensory and tactile improved cane. In Proceedings of the Mensch und Computer 2019, Hamburg, Germany, 8–11 September 2019; pp. 765–769, ISBN 9781450371988. [Google Scholar] [CrossRef]
  50. Shah, C.; Bouzit, M.; Youssef, M.; Vasquez, L. Evaluation of RU-netra - tactile feedback navigation system for the visually impaired. In Proceedings of the 2006 International Workshop on Virtual Rehabilitation, New York, NY, USA, 29–30 August 2006; pp. 72–77, ISBN 9781424402809. [Google Scholar] [CrossRef]
  51. Flores, G.; Kurniawan, S.; Manduchi, R.; Martinson, E.; Morales, L.M.; Sisbot, E.A. Vibrotactile guidance for wayfinding of blind walkers. IEEE Trans. Haptics 2015, 8, 306–317. [Google Scholar] [CrossRef]
  52. Cosgun, A.; Sisbot, E.A.; Christensen, H.I. Evaluation of rotational and directional vibration patterns on a tactile belt for guiding visually impaired people. In Proceedings of the 2014 IEEE Haptics Symposium (HAPTICS), Houston, TX, USA, 23–26 February 2014; pp. 367–370, ISBN 9781479931316. [Google Scholar] [CrossRef]
  53. Katzschmann, R.K.; Araki, B.; Rus, D. Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 583–593. [Google Scholar] [CrossRef]
  54. Ni, D.; Wang, L.; Ding, Y.; Zhang, J.; Song, A.; Wu, J. The design and implementation of a walking assistant system with vibrotactile indication and voice prompt for the visually impaired. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 2721–2726, ISBN 9781479927449. [Google Scholar] [CrossRef]
  55. Pradeep, V.; Medioni, G.; Weiland, J. Robot vision for the visually impaired. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 15–22, ISBN 9781424470297. [Google Scholar] [CrossRef]
  56. Pradeep, V.; Medioni, G.; Weiland, J. A wearable system for the visually impaired. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 6233–6236, ISBN 9781424441235. [Google Scholar] [CrossRef]
  57. Han, S.B.; Kim, D.H.; Kim, J.H. Fuzzy gaze control-based navigational assistance system for visually impaired people in a dynamic indoor environment. In Proceedings of the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Istanbul, Turkey, 2–5 August 2015; pp. 1–7, ISBN 9781467374286. [Google Scholar] [CrossRef]
  58. Bourbakis, N.; Keefer, R.; Dakopoulos, D.; Esposito, A. A multimodal interaction scheme between a blind user and the tyflos assistive prototype. In Proceedings of the 2008 20th IEEE International Conference on Tools with Artificial Intelligence, Dayton, OH, USA, 3–5 November 2008; pp. 487–494, ISBN 9780769534404. [Google Scholar] [CrossRef]
  59. Dakopoulos, D.; Bourbakis, N. Towards a 2D tactile vocabulary for navigation of blind and visually impaired. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; pp. 45–51, ISBN 9781424427932. [Google Scholar] [CrossRef]
  60. De Jesus Oliveira, V.A.; Nedel, L.; Maciel, A.; Brayda, L. Localized magnification in vibrotactile HMDs for accurate spatial awareness. In Haptics: Perception, Devices, Control, and Applications; Lecture Notes in Computer Science; Springer Science & Business Media: Cham, Switzerland, 2016; Volume 9775, pp. 55–64. [Google Scholar] [CrossRef]
  61. De Jesus Oliveira, V.A.; Nedel, L.; Maciel, A.; Brayda, L. Anti-veering vibrotactile HMD for assistance of blind pedestrians, In Haptics: Science, Technology, and Applications Series; Lecture Notes in Computer Science; Springer Science & Business Media: Cham, Switzerland, 2018; Volume 10894, pp. 500–512. [Google Scholar] [CrossRef]
  62. Vorapatratorn, S.; Teachavorasinskun, K. iSonar-2: Obstacle warning device, the assistive technology integrated with universal design for the blind. In Proceedings of the 11th International Convention on Rehabilitation Engineering and Assistive Technology, Midview City, Singapore, 25–28 July 2017; pp. 1–4. [Google Scholar]
  63. Velázquez, R.; Bazán, O. Preliminary evaluation of podotactile feedback in sighted and blind users. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 2103–2106, ISBN 9781424441235. [Google Scholar] [CrossRef]
  64. Schaack, S.; Chernyshov, G.; Ragozin, K.; Tag, B.; Peiris, R.; Kunze, K. Haptic collar: Vibrotactile feedback around the neck for guidance applications. In Proceedings of the 10th Augmented Human International Conference 2019, Reims, France, 11–12 March 2019; pp. 1–4, ISBN 9781450365475. [Google Scholar] [CrossRef]
  65. Liu, G.; Yu, T.; Yu, C.; Xu, H.; Xu, S.; Yang, C.; Wang, F.; Mi, H.; Shi, Y. Tactile compass: Enabling visually impaired people to follow a path with continuous directional feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–13, ISBN 9781450380966. [Google Scholar] [CrossRef]
  66. Leporini, B.; Raucci, M.; Rosellini, M.; Forgione, N. Towards a haptic-based virtual cane to assist blind people in obstacle detection. In Proceedings of the PETRA’23: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 5–7 July 2023. [Google Scholar]
  67. Erp, J.B.F.V.; Paul, K.I.; Mioch, T. Tactile working memory capacity of users who are blind in an electronic travel aid application with a vibration belt. ACM Trans. Access. Comput. 2020, 13, 1–14. [Google Scholar] [CrossRef]
  68. Lee, M.; In, H. Novel wrist-worn vibrotactile device for providing multi-categorical information for orientation and mobility of the blind and visually impaired. IEEE Access 2023, 11, 111860–111874. [Google Scholar] [CrossRef]
  69. Jones, T.; Troscianko, T. Mobility performance of low-vision adults using an electronic mobility aid. Clin. Exp. Optom. 2006, 89, 10–17. [Google Scholar] [CrossRef]
  70. van Rheede, J.J.; Wilson, I.R.; Qian, R.I.; Downes, S.M.; Kennard, C.; Hicks, S.L. Improving mobility performance in low vision with a distance-based representation of the visual scene. Investig. Ophthalmol. Vis. Sci. 2015, 56, 4802–4809. [Google Scholar] [CrossRef]
  71. Kinateder, M.; Gualtieri, J.; Dunn, M.J.; Jarosz, W.; Yang, X.D.; Cooper, E.A. Using an augmented reality device as a distance-based vision aid—Promise and limitations. Optom. Vis. Sci. 2018, 95, 727–737. [Google Scholar] [CrossRef] [PubMed]
  72. Fox, D.R.; Ahmadzada, A.; Wang, C.T.; Azenkot, S.; Chu, M.A.; Manduchi, R.; Cooper, E.A. Using augmented reality to cue obstacles for people with low vision. Opt. Express 2023, 31, 6827. [Google Scholar] [CrossRef]
  73. Zhao, Y.; Kupferstein, E.; Rojnirun, H.; Findlater, L.; Azenkot, S. The effectiveness of visual and audio wayfinding guidance on smartglasses for people with low vision. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14, ISBN 9781450367080. [Google Scholar] [CrossRef]
  74. Huang, J.; Kinateder, M.; Dunn, M.J.; Jarosz, W.; Yang, X.D.; Cooper, E.A. An augmented reality sign-reading assistant for users with reduced vision. PLoS ONE 2019, 14, e0210630. [Google Scholar] [CrossRef]
  75. Zhao, Y.; Kupferstein, E.; Castro, B.V.; Feiner, S.; Azenkot, S. Designing AR visualizations to facilitate stair navigation for people with low vision. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019; pp. 387–402, ISBN 9781450368162. [Google Scholar] [CrossRef]
  76. Zhao, Y.; Szpiro, S.; Azenkot, S. ForeSee: A customizable head-mounted vision enhancement system for people with low vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility—ASSETS’15, Lisbon, Portugal, 26–28 October 2015; pp. 239–249, ISBN 9781450334006. [Google Scholar] [CrossRef]
  77. Angelopoulos, A.N.; Ameri, H.; Mitra, D.; Humayun, M. Enhanced depth navigation through augmented reality depth mapping in patients with low vision. Sci. Rep. 2019, 9, 11230. [Google Scholar] [CrossRef]
  78. Katemake, P.; Radsamrong, A.; Dinet, É.; Heng, C.W.; Kuang, Y.C.; Kalavally, V.; Trémeau, A. Influence of LED-based assistive lighting solutions on the autonomous mobility of low vision people. Build. Environ. 2019, 157, 172–184. [Google Scholar] [CrossRef]
  79. Ross, R.D. Is perception of light useful to the blind patient? Arch. Ophthalmol. 1998, 116. [Google Scholar] [CrossRef]
  80. Karuei, I.; MacLean, K.E.; Foley-Fisher, Z.; MacKenzie, R.; Koch, S.; El-Zohairy, M. Detecting vibrations across the body in mobile contexts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 3267–3276, ISBN 9781450302289. [Google Scholar] [CrossRef]
  81. Steltenpohl, H.; Bouwer, A. Vibrobelt: Tactile navigation support for cyclists. In Proceedings of the 2013 international Conference on Intelligent User Interfaces, Santa Monica, CA, USA, 19–22 March 2013; pp. 417–426. [Google Scholar] [CrossRef]
  82. Woźniak, M.P.; Dominiak, J.; Pieprzowski, M.; Ładoński, P.; Grudzień, K.; Lischke, L.; Romanowski, A.; Woźniak, P.W. Subtletee: Augmenting Posture Awareness for Beginner Golfers. Proc. ACM Hum. Comput. Interact. 2020, 4, 1–24. [Google Scholar] [CrossRef]
  83. Salzer, Y.; Oron-Gilad, T.; Ronen, A. Vibrotactor-Belt on the Thigh – Directions in the Vertical Plane. In Haptics: Generating and Perceiving Tangible Sensations; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6192, pp. 359–364. [Google Scholar] [CrossRef]
  84. Kim, J.; Kim, H.; Park, C.; Choi, S. Human Recognition Performance of Simple Spatial Vibrotactile Patterns on the Torso. In Proceedings of the 2023 IEEE World Haptics Conference (WHC), Delft, The Netherlands, 10–13 July 2023; pp. 20–27, ISBN 9798350399936. [Google Scholar] [CrossRef]
  85. Park, W.; Alsuradi, H.; Eid, M. EEG correlates to perceived urgency elicited by vibration stimulation of the upper body. Sci. Rep. 2024, 14, 14267. [Google Scholar] [CrossRef] [PubMed]
  86. Palomares, N.M.C.; Romero, G.B.; Victor, J.L.A. Assessment of User Interpretation on Various Vibration Signals in Mobile Phones. In Proceedings of the Advances in Neuroergonomics and Cognitive Engineering, Washington, DC, USA, 24–28 July 2019; Ayaz, H., Ed.; Springer: Cham, Switzerland, 2020; pp. 500–511. [Google Scholar] [CrossRef]
  87. Saket, B.; Prasojo, C.; Huang, Y.; Zhao, S. Designing an effective vibration-based notification interface for mobile phones. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 149–1504, ISBN 9781450313315. [Google Scholar] [CrossRef]
  88. Schömbs, S.; Pareek, S.; Goncalves, J.; Johal, W. Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and Embodiment. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI’24, Honolulu, HI, USA, 11–16 May 2024; pp. 1–16. [Google Scholar] [CrossRef]
  89. Hanley, J.A. Statistical Analysis of Correlated Data Using Generalized Estimating Equations: An Orientation. Am. J. Epidemiol. 2003, 157, 364–375. [Google Scholar] [CrossRef]
  90. Ballinger, G.A. Using generalized estimating equations for longitudinal data analysis. Organ. Res. Methods 2004, 7, 127–150. [Google Scholar] [CrossRef]
  91. Chen, J.; Li, N.; Shi, Y.; Du, J. Cross-cultural assessment of the effect of spatial information on firefighters’ wayfinding performance: A virtual reality-based study. Int. J. Disaster Risk Reduct. 2023, 84, 103486. [Google Scholar] [CrossRef]
  92. Israr, A.; Poupyrev, I. Tactile brush: Drawing on skin with a tactile grid display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 2019–2028, ISBN 9781450302289. [Google Scholar] [CrossRef]
  93. Meng, F.; Gray, R.; Ho, C.; Ahtamad, M.; Spence, C. Dynamic vibrotactile signals for forward collision avoidance warning systems. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 329–346. [Google Scholar] [CrossRef]
  94. Schneider, O.S.; Israr, A.; MacLean, K.E. Tactile Animation by Direct Manipulation of Grid Displays. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, Charlotte, NC, USA, 8–11 November 2015; pp. 21–30, ISBN 9781450337793. [Google Scholar] [CrossRef]
  95. Israr, A.; Poupyrev, I. Exploring surround haptics displays. In Proceedings of the CHI’10 Extended Abstracts on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 4171–4176, ISBN 9781605589305. [Google Scholar] [CrossRef]
  96. Israr, A.; Kim, S.C.; Stec, J.; Poupyrev, I. Surround haptics: Tactile feedback for immersive gaming experiences. In Proceedings of the CHI ’12 Extended Abstracts on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 1087–1090, ISBN 9781450310161. [Google Scholar] [CrossRef]
  97. Chien, H.P.; Wu, M.C.; Hsu, C.C. Flowing-Haptic Sleeve: Research on Apparent Tactile Motion Applied to Simulating the Feeling of Flow on the Arm. In Proceedings of the Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers, Virtually, 21–26 September 2021; pp. 550–554, ISBN 9781450384612. [Google Scholar] [CrossRef]
Figure 1. Overview of haptic and visual feedback systems.
Figure 2. Frequency patterns for haptic and visual feedback. 1 represents 100% intensity vibration or light on, while 0 indicates no vibration or light off. (a) 0.5 Hz, (b) 1.0 Hz, (c) 2.0 Hz, and (d) 4.0 Hz.
Figure 3. Configurations of haptic and visual feedback size. (a) Haptic stimulus size; (b) visual stimulus size.
Figure 4. Mean cross rate and decision time. (a) Mean cross rate; (b) mean decision time.
Figure 5. The mean head rotation angle and head movement distance results of Study 1. (a) Mean head rotation angle; (b) mean head movement distance.
Figure 6. Mean user-perceived urgency of Study 1.
Figure 7. Mean user-perceived confidence of Study 1.
Figure 8. Overview of haptic and visual feedback systems with signal pattern types.
Figure 9. Haptic and visual feedback in Study 2. (a) Dynamic feedback design (right direction): stimulation transitions from position 1 to position 2, with an equal duration at each position. (b) Static feedback design (right direction): using the same stimulation pattern as in Study 1.
Figure 10. Mean cross rate and decision time. (a) Mean cross rate; (b) mean decision time.
Figure 11. Mean head rotation angle and head movement distance results of Study 2. (a) Mean head rotation angle; (b) mean head movement distance.
Figure 12. Mean user-perceived urgency of Study 2. (a) Haptic feedback urgency; (b) visual feedback urgency.
Figure 13. Mean user-perceived confidence of Study 2. (a) Haptic feedback confidence; (b) visual feedback confidence.
Figure 14. Decision time and cross rate across feedback and frequency of Study 1. (a) Mean decision time for feedback and frequency; (b) mean cross rate for feedback and frequency.
Figure 15. Decision time and cross rate across feedback and size of Study 1. (a) Mean decision time for feedback and size; (b) mean cross rate for feedback and size.
Figure 16. Cross rate across feedback, frequency, and size of Study 2. (a) Mean cross rate for feedback and frequency; (b) mean cross rate for feedback and size.
Figure 17. Decision time across the feedback, size, and state of Study 2. (a) Mean decision time for feedback and size; (b) mean decision time for feedback and state.
Table 1. Assistive technology applications for visually impaired individuals.

| Technology Type | Haptic Feedback | Visual Feedback |
| Portable devices | Smart canes and handheld devices [49,65,66] | LED obstacle highlighting [78] |
| Wearable devices | Vibrating belts, backpacks, and wristbands [29,67,68] | LED guides and visual displays [27,28,69] |
| Specific applications | Street-crossing and traffic navigation [25,30] | Navigation markers and enhanced visibility [73,74,75] |
Table 2. Effects of feedback parameters on user responses in Study 1.

| Feedback Modality | Size | Response | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Large | Urgency Score | 2.53 | 3.12 | 3.65 | 3.82 |
| | | Cross Rate | 25% | 25% | 18.75% | 18.75% |
| | | Decision Time | 4.80 s | 3.62 s | 3.41 s | 3.30 s |
| | Medium | Urgency Score | 1.88 | 2.41 | 3.00 | 3.65 |
| | | Cross Rate | 62.5% | 62.5% | 37.5% | 31.25% |
| | | Decision Time | 5.26 s | 4.80 s | 4.96 s | 4.51 s |
| | Small | Urgency Score | 1.24 | 1.59 | 2.24 | 3.00 |
| | | Cross Rate | 81.25% | 87.5% | 62.5% | 31.25% |
| | | Decision Time | 6.25 s | 4.32 s | 3.94 s | 5.06 s |
| Visual Feedback | Large | Urgency Score | 2.41 | 3.12 | 3.76 | 4.00 |
| | | Cross Rate | 81.25% | 43.75% | 18.75% | 6.25% |
| | | Decision Time | 4.13 s | 3.99 s | 3.17 s | 2.80 s |
| | Medium | Urgency Score | 1.88 | 2.41 | 3.12 | 3.65 |
| | | Cross Rate | 87.5% | 50% | 12.5% | 6.25% |
| | | Decision Time | 4.19 s | 4.04 s | 3.23 s | 2.95 s |
| | Small | Urgency Score | 1.18 | 1.88 | 2.47 | 3.18 |
| | | Cross Rate | 87.5% | 62.5% | 25% | 6.25% |
| | | Decision Time | 5.90 s | 3.99 s | 3.51 s | 2.85 s |

Note: The urgency score represents users’ perceived urgency level; the cross rate is the percentage of crossing decisions; the decision time indicates the average decision time in seconds. Green-shaded cells in the original denote high cross rates (>85%) suitable for encouraging crossing; yellow-shaded cells denote low cross rates (<30%) effective for deterring crossing.
Table 3. Effects of feedback parameters on user responses in Study 2.

| Feedback Modality | State | Size (Direction) | Response | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Dynamic | Medium (Right) | Urgency Score | 2.50 | 3.00 | 3.69 | 4.00 |
| | | | Cross Rate | 31.25% | 31.25% | 6.25% | 18.75% |
| | | | Decision Time | 3.54 s | 2.77 s | 2.84 s | 2.66 s |
| | | Medium (Left) | Urgency Score | 2.50 | 3.06 | 3.63 | 4.00 |
| | | | Cross Rate | 31.25% | 25% | 6.25% | 25% |
| | | | Decision Time | 3.42 s | 3.11 s | 2.79 s | 2.94 s |
| | | Small (Right) | Urgency Score | 1.44 | 2.19 | 2.88 | 3.69 |
| | | | Cross Rate | 87.5% | 68.75% | 43.75% | 31.25% |
| | | | Decision Time | 4.18 s | 3.47 s | 3.02 s | 2.69 s |
| | | Small (Left) | Urgency Score | 1.44 | 2.19 | 2.88 | 3.69 |
| | | | Cross Rate | 87.5% | 68.75% | 43.75% | 25% |
| | | | Decision Time | 4.15 s | 3.83 s | 2.67 s | 2.45 s |
| | Static | Medium (Right) | Urgency Score | 2.19 | 2.63 | 3.38 | 3.81 |
| | | | Cross Rate | 31.25% | 31.25% | 6.25% | 18.75% |
| | | | Decision Time | 3.06 s | 2.71 s | 2.95 s | 2.39 s |
| | | Medium (Left) | Urgency Score | 2.25 | 2.63 | 3.31 | 3.75 |
| | | | Cross Rate | 31.25% | 31.25% | 6.25% | 12.5% |
| | | | Decision Time | 3.01 s | 3.21 s | 2.79 s | 2.51 s |
| | | Small (Right) | Urgency Score | 1.19 | 1.75 | 2.75 | 3.69 |
| | | | Cross Rate | 93.75% | 68.75% | 43.75% | 25% |
| | | | Decision Time | 4.09 s | 3.47 s | 3.00 s | 2.72 s |
| | | Small (Left) | Urgency Score | 1.19 | 1.81 | 2.75 | 3.63 |
| | | | Cross Rate | 87.5% | 87.5% | 43.75% | 31.25% |
| | | | Decision Time | 3.91 s | 3.23 s | 2.88 s | 2.69 s |
| Visual Feedback | Dynamic | Medium (Right) | Urgency Score | 2.25 | 2.75 | 3.38 | 4.00 |
| | | | Cross Rate | 87.5% | 56.25% | 0% | 0% |
| | | | Decision Time | 5.08 s | 4.75 s | 4.35 s | 3.57 s |
| | | Medium (Left) | Urgency Score | 2.25 | 2.75 | 3.31 | 4.00 |
| | | | Cross Rate | 81.25% | 56.25% | 12.5% | 0% |
| | | | Decision Time | 5.05 s | 4.72 s | 4.19 s | 3.85 s |
| | | Small (Right) | Urgency Score | 1.31 | 2.00 | 2.94 | 3.75 |
| | | | Cross Rate | 93.75% | 62.5% | 12.5% | 6.25% |
| | | | Decision Time | 5.55 s | 4.70 s | 4.13 s | 3.85 s |
| | | Small (Left) | Urgency Score | 1.38 | 2.06 | 2.94 | 3.75 |
| | | | Cross Rate | 87.5% | 62.5% | 18.75% | 0% |
| | | | Decision Time | 5.24 s | 5.03 s | 4.10 s | 3.63 s |
| | Static | Medium (Right) | Urgency Score | 2.06 | 2.88 | 3.44 | 3.94 |
| | | | Cross Rate | 81.25% | 75% | 12.5% | 12.5% |
| | | | Decision Time | 3.70 s | 3.52 s | 2.66 s | 2.52 s |
| | | Medium (Left) | Urgency Score | 2.13 | 2.88 | 3.38 | 3.88 |
| | | | Cross Rate | 81.25% | 75% | 6.25% | 0% |
| | | | Decision Time | 4.03 s | 3.52 s | 2.84 s | 2.45 s |
| | | Small (Right) | Urgency Score | 1.13 | 1.88 | 2.81 | 3.38 |
| | | | Cross Rate | 93.75% | 87.5% | 18.75% | 6.25% |
| | | | Decision Time | 4.49 s | 3.50 s | 3.06 s | 2.66 s |
| | | Small (Left) | Urgency Score | 1.13 | 1.81 | 2.81 | 3.38 |
| | | | Cross Rate | 93.75% | 81.25% | 12.5% | 6.25% |
| | | | Decision Time | 4.08 s | 3.46 s | 3.15 s | 2.60 s |

Note: The urgency score represents users’ perceived urgency level; the cross rate is the percentage of crossing decisions; the decision time indicates the average decision time in seconds. Green-shaded cells in the original denote high cross rates (>85%), suitable for encouraging crossing; yellow-shaded cells denote low cross rates (<30%), effective for deterring crossing.
Table 4. User responses to various feedback parameters in Study 1.

| Feedback Modality | Size | Response | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Large | Confidence = 100% | 4 | 3 | 7 | 14 |
| | | Cross Count (100%) | 1 | 1 | 2 | 2 |
| | | Cross Count | 4 | 4 | 3 | 3 |
| | Medium | Confidence = 100% | 1 | 0 | 3 | 9 |
| | | Cross Count (100%) | 1 | 0 | 2 | 4 |
| | | Cross Count | 10 | 10 | 6 | 5 |
| | Small | Confidence = 100% | 6 | 3 | 3 | 7 |
| | | Cross Count (100%) | 5 | 3 | 2 | 1 |
| | | Cross Count | 13 | 14 | 10 | 5 |
| Visual Feedback | Large | Confidence = 100% | 6 | 5 | 10 | 14 |
| | | Cross Count (100%) | 4 | 2 | 2 | 1 |
| | | Cross Count | 13 | 7 | 3 | 1 |
| | Medium | Confidence = 100% | 4 | 0 | 2 | 11 |
| | | Cross Count (100%) | 4 | 0 | 0 | 0 |
| | | Cross Count | 14 | 8 | 2 | 1 |
| | Small | Confidence = 100% | 5 | 3 | 4 | 9 |
| | | Cross Count (100%) | 5 | 2 | 0 | 0 |
| | | Cross Count | 14 | 10 | 3 | 1 |

Note: Confidence = 100% shows the number of users who reported complete confidence in their decisions. The cross count (100%) indicates the number of users who made positive (cross) decisions with 100% confidence. The cross count shows the total number of positive (cross) decisions across all confidence levels.
Table 5. User responses to various feedback parameters in Study 2.

| Feedback Modality | State | Size (Direction) | Response | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Dynamic | Medium (Right) | Confidence = 100% | 4 | 1 | 5 | 13 |
| | | | Cross Count (100%) | 1 | 1 | 0 | 1 |
| | | | Cross Count | 5 | 5 | 1 | 3 |
| | | Medium (Left) | Confidence = 100% | 4 | 1 | 5 | 12 |
| | | | Cross Count (100%) | 1 | 1 | 0 | 3 |
| | | | Cross Count | 5 | 4 | 1 | 4 |
| | | Small (Right) | Confidence = 100% | 7 | 2 | 1 | 11 |
| | | | Cross Count (100%) | 7 | 1 | 1 | 4 |
| | | | Cross Count | 14 | 11 | 7 | 5 |
| | | Small (Left) | Confidence = 100% | 7 | 2 | 1 | 11 |
| | | | Cross Count (100%) | 7 | 1 | 1 | 4 |
| | | | Cross Count | 14 | 11 | 7 | 4 |
| | Static | Medium (Right) | Confidence = 100% | 3 | 2 | 3 | 10 |
| | | | Cross Count (100%) | 1 | 1 | 0 | 1 |
| | | | Cross Count | 5 | 5 | 1 | 3 |
| | | Medium (Left) | Confidence = 100% | 3 | 2 | 3 | 10 |
| | | | Cross Count (100%) | 1 | 1 | 0 | 0 |
| | | | Cross Count | 5 | 5 | 1 | 2 |
| | | Small (Right) | Confidence = 100% | 9 | 1 | 1 | 10 |
| | | | Cross Count (100%) | 9 | 1 | 1 | 3 |
| | | | Cross Count | 15 | 11 | 7 | 4 |
| | | Small (Left) | Confidence = 100% | 9 | 1 | 1 | 10 |
| | | | Cross Count (100%) | 8 | 1 | 1 | 4 |
| | | | Cross Count | 14 | 14 | 7 | 5 |
| Visual Feedback | Dynamic | Medium (Right) | Confidence = 100% | 4 | 1 | 5 | 11 |
| | | | Cross Count (100%) | 4 | 0 | 0 | 0 |
| | | | Cross Count | 14 | 9 | 0 | 0 |
| | | Medium (Left) | Confidence = 100% | 4 | 1 | 5 | 13 |
| | | | Cross Count (100%) | 4 | 0 | 1 | 0 |
| | | | Cross Count | 13 | 9 | 2 | 0 |
| | | Small (Right) | Confidence = 100% | 8 | 3 | 2 | 12 |
| | | | Cross Count (100%) | 8 | 3 | 0 | 1 |
| | | | Cross Count | 15 | 10 | 2 | 1 |
| | | Small (Left) | Confidence = 100% | 7 | 2 | 1 | 11 |
| | | | Cross Count (100%) | 7 | 3 | 0 | 0 |
| | | | Cross Count | 14 | 10 | 3 | 0 |
| | Static | Medium (Right) | Confidence = 100% | 4 | 2 | 5 | 11 |
| | | | Cross Count (100%) | 4 | 2 | 1 | 2 |
| | | | Cross Count | 13 | 12 | 2 | 2 |
| | | Medium (Left) | Confidence = 100% | 4 | 2 | 5 | 11 |
| | | | Cross Count (100%) | 3 | 1 | 0 | 0 |
| | | | Cross Count | 13 | 12 | 1 | 0 |
| | | Small (Right) | Confidence = 100% | 8 | 3 | 2 | 10 |
| | | | Cross Count (100%) | 8 | 3 | 0 | 0 |
| | | | Cross Count | 15 | 14 | 3 | 1 |
| | | Small (Left) | Confidence = 100% | 8 | 3 | 2 | 10 |
| | | | Cross Count (100%) | 8 | 2 | 0 | 0 |
| | | | Cross Count | 15 | 13 | 2 | 1 |

Note: Confidence = 100% shows the number of users who reported complete confidence in their decisions. The cross count (100%) indicates the number of users who made positive (cross) decisions with 100% confidence. The cross count shows the total number of positive (cross) decisions across all confidence levels.
Table 6. Study 1: decision time (s) and confidence levels across different feedback parameters.

| Feedback Modality | Size | Confidence Level | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Large | 100% | 3.53 | 2.83 | 3.05 | 3.36 |
| | | 90–99% | 6.93 | 4.16 | 4.52 | 3.86 |
| | | <90% | 4.65 | 3.51 | 3.02 | 2.63 |
| | Medium | 100% | 4.05 | – | 2.96 | 4.31 |
| | | 90–99% | 3.24 | 4.08 | 5.82 | 3.00 |
| | | <90% | 5.66 | 4.96 | 5.18 | 6.12 |
| | Small | 100% | 5.20 | 2.60 | 2.48 | 5.73 |
| | | 90–99% | 4.49 | 2.76 | 3.86 | 4.12 |
| | | <90% | 7.15 | 5.07 | 4.63 | 4.87 |
| Visual Feedback | Large | 100% | 4.38 | 3.12 | 2.86 | 2.63 |
| | | 90–99% | 4.03 | 4.05 | 3.38 | 5.24 |
| | | <90% | 4.01 | 4.32 | 3.65 | 2.67 |
| | Medium | 100% | 3.61 | – | 3.07 | 2.67 |
| | | 90–99% | 4.00 | 4.42 | 3.49 | 5.51 |
| | | <90% | 4.46 | 3.95 | 3.16 | 3.07 |
| | Small | 100% | 5.82 | 4.26 | 2.91 | 2.92 |
| | | 90–99% | 5.12 | 4.20 | 3.48 | 2.87 |
| | | <90% | 6.12 | 3.85 | 4.03 | 2.52 |

Note: All values are measured in seconds. Missing data are indicated as –.
Table 7. Decision time (s) analysis across confidence levels and feedback types.

| Study | Feedback | State | Full Confidence (100%) | High Confidence (90–99%) | Low Confidence (<90%) |
| 1 | Haptic Feedback | – | 3.65 | 4.24 | 4.79 |
| 1 | Visual Feedback | – | 3.45 | 4.15 | 3.82 |
| 2 | Haptic Feedback | Dynamic | 3.53 | 2.88 | 3.27 |
| 2 | Haptic Feedback | Static | 3.26 | 3.10 | 2.99 |
| 2 | Visual Feedback | Dynamic | 4.79 | 4.59 | 4.32 |
| 2 | Visual Feedback | Static | 3.36 | 3.48 | 2.89 |

Note: each value represents the mean decision time (measured in seconds) calculated from all decisions made under the corresponding feedback type and confidence level.
Table 8. Study 2: decision time (s) and confidence levels across different feedback parameters.

| Feedback Modality | State | Size (Direction) | Confidence Level | 0.5 Hz | 1.0 Hz | 2.0 Hz | 4.0 Hz |
| Haptic Feedback | Dynamic | Medium (Right) | 100% | 3.91 | 2.73 | 3.55 | 2.42 |
| | | | 90–99% | 4.16 | 2.64 | 2.35 | 3.67 |
| | | | <90% | 3.18 | 2.80 | 2.67 | 3.71 |
| | | Medium (Left) | 100% | 2.98 | 4.05 | 3.72 | 2.85 |
| | | | 90–99% | 3.68 | 2.73 | 2.27 | 3.23 |
| | | | <90% | 3.53 | 3.13 | 2.46 | 3.10 |
| | | Small (Right) | 100% | 3.98 | 3.93 | 4.52 | 2.81 |
| | | | 90–99% | 2.51 | 3.25 | 2.94 | 2.43 |
| | | | <90% | 4.86 | 3.50 | 2.91 | 2.37 |
| | | Small (Left) | 100% | 3.85 | 5.17 | 3.62 | 2.44 |
| | | | 90–99% | 3.29 | 3.29 | 2.26 | 2.44 |
| | | | <90% | 4.69 | 3.83 | 2.92 | 2.63 |
| | Static | Medium (Right) | 100% | 3.33 | 2.47 | 2.66 | 2.34 |
| | | | 90–99% | 4.02 | 2.81 | 3.28 | 2.35 |
| | | | <90% | 2.35 | 2.72 | 2.86 | 2.62 |
| | | Medium (Left) | 100% | 3.16 | 2.96 | 3.52 | 2.51 |
| | | | 90–99% | 3.43 | 3.05 | 2.49 | 2.23 |
| | | | <90% | 2.68 | 3.33 | 2.74 | 2.77 |
| | | Small (Right) | 100% | 4.50 | 3.62 | 4.66 | 2.71 |
| | | | 90–99% | 2.94 | 3.66 | 2.66 | 2.68 |
| | | | <90% | 3.80 | 3.32 | 3.00 | 3.09 |
| | | Small (Left) | 100% | 3.97 | 3.45 | 3.37 | 2.91 |
| | | | 90–99% | 4.24 | 3.28 | 2.77 | 2.27 |
| | | | <90% | 3.67 | 3.17 | 2.88 | 2.62 |
| Visual Feedback | Dynamic | Medium (Right) | 100% | 4.07 | 5.91 | 4.02 | 3.80 |
| | | | 90–99% | 4.96 | 5.25 | 5.39 | 2.92 |
| | | | <90% | 5.46 | 4.46 | 3.76 | 3.26 |
| | | Medium (Left) | 100% | 4.83 | 6.91 | 4.16 | 3.85 |
| | | | 90–99% | 4.95 | 5.48 | 4.36 | 4.19 |
| | | | <90% | 5.13 | 4.35 | 4.04 | 3.32 |
| | | Small (Right) | 100% | 5.60 | 4.45 | 5.20 | 3.96 |
| | | | 90–99% | 4.63 | 4.79 | 4.07 | 3.71 |
| | | | <90% | 5.79 | 4.75 | 3.92 | 3.35 |
| | | Small (Left) | 100% | 5.37 | 4.98 | 5.77 | 3.79 |
| | | | 90–99% | 6.11 | 5.16 | 4.25 | 3.01 |
| | | | <90% | 4.79 | 5.00 | 3.58 | 3.35 |
| | Static | Medium (Right) | 100% | 3.39 | 3.47 | 2.85 | 2.72 |
| | | | 90–99% | 5.37 | 3.81 | 2.76 | 2.07 |
| | | | <90% | 2.69 | 3.37 | 2.42 | 2.08 |
| | | Medium (Left) | 100% | 3.10 | 3.20 | 3.19 | 2.61 |
| | | | 90–99% | 5.56 | 3.68 | 2.94 | 1.87 |
| | | | <90% | 3.47 | 3.49 | 2.52 | 2.49 |
| | | Small (Right) | 100% | 4.46 | 3.91 | 4.28 | 2.85 |
| | | | 90–99% | 5.16 | 3.21 | 3.04 | 2.37 |
| | | | <90% | 3.86 | 3.57 | 2.81 | 2.19 |
| | | Small (Left) | 100% | 4.05 | 3.41 | 3.94 | 2.81 |
| | | | 90–99% | 4.93 | 3.57 | 2.99 | 2.38 |
| | | | <90% | 3.30 | 3.36 | 3.07 | 1.56 |

Note: All values are mean decision times measured in seconds.