Article

Enhancing Multi-Region Target Search Efficiency Through Integrated Peripheral Vision and Head-Mounted Display Systems

by Gang Wang, Hung-Hsiang Wang and Zhihuang Huang

1 College of Design, National Taipei University of Technology, Taipei 10608, Taiwan
2 School of Design Arts, Xiamen University of Technology, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Information 2025, 16(9), 800; https://doi.org/10.3390/info16090800
Submission received: 14 August 2025 / Revised: 9 September 2025 / Accepted: 9 September 2025 / Published: 15 September 2025

Abstract

Effectively managing visual search tasks across multiple spatial regions during daily activities such as driving, cycling, and navigating complex environments often overwhelms visual processing capacity, increasing the risk of errors and missed critical information. This study investigates an integrated approach that combines an Ambient Display system utilizing peripheral vision cues with traditional Head-Mounted Displays (HMDs) to enhance spatial search efficiency while minimizing cognitive burden. We systematically evaluated this integrated HMD-Ambient Display system against standalone HMD configurations through comprehensive user studies involving target search scenarios across multiple spatial regions. Our findings demonstrate that the combined approach significantly improves user performance by establishing a complementary visual system where peripheral stimuli effectively capture initial attention while central HMD cues provide precise directional guidance. The integrated system showed substantial improvements in reaction time for rear visual region searches and higher user preference ratings compared with HMD-only conditions. This integrated approach represents an innovative solution that efficiently utilizes dual visual channels, reducing cognitive load while enhancing search efficiency across distributed spatial areas. Our contributions provide valuable design guidelines for developing assistive technologies that improve performance in multi-region visual search tasks by strategically leveraging the complementary strengths of peripheral and central visual processing mechanisms.

1. Introduction

In dynamic environments, individuals frequently need to search for targets across multiple spatial regions, such as monitoring the road ahead while checking blind spots when driving [1], or simultaneously tracking multiple areas when cycling in traffic [2]. The ability to efficiently direct visual attention to different spatial regions is crucial for both safety and performance in such contexts. Peripheral visual cues offer valuable assistance without overwhelming users. For instance, subtle peripheral visual prompts can alert drivers to potential hazards in their blind spots while allowing them to maintain their focus on the forward road view [3,4].
There is a growing demand for assistive technologies that seamlessly integrate into daily activities, delivering essential information without causing distraction or cognitive overload. Peripheral vision, defined as the visual field outside the central focus [5], offers an effective means of conveying such information, enhancing distributed spatial attention and situational awareness [4,6,7]. Prior studies have investigated peripheral visual cues, including arrows [8] and ambient light [4], which guide attention and facilitate target identification beyond the direct line of sight [9].
However, many existing systems depend on proprietary hardware, such as Head-Mounted Displays (HMDs) [10], specialized glasses [4], or in-vehicle devices [11], which can be expensive and limit accessibility. In our previous work [12], we investigated visual and haptic guidance using HMDs and haptic devices to enhance search performance across different spatial regions. Such reliance on HMDs introduces drawbacks, such as higher costs and potential obstruction of the central field of view. This work introduces a novel, cost-effective Ambient Display system that leverages peripheral visual cues combined with other visual feedback, expanding on our prior research to offer a more accessible solution.
Through comprehensive user evaluation, we observed significant benefits of integrating different visual feedback modalities for enhancing performance in multi-region visual search tasks. Our key findings include the following:
  • The HMD-Ambient Display integrated system significantly improved reaction speed for locating targets in rear visual regions while reducing cognitive burden.
  • Users significantly preferred multimodal solutions combining HMD with the Ambient Display when performing distributed visual search tasks.
  • The Ambient Display component offers a cost-effective solution that complements other visual feedback modalities for enhancing visual search across multiple spatial regions.
Based on these findings, we validated the effectiveness of multimodal feedback approaches in enhancing distributed visual attention tasks, particularly through combining Ambient Display with Head-Mounted Displays. Users benefited from the synergistic effects of different visual feedback modalities, achieving more efficient information acquisition through the division of labor between central and peripheral vision. These insights provide design guidance for assistive technologies to enhance attention allocation in safety-critical environments. By leveraging the advantages of peripheral vision while addressing its limitations through strategic integration with other visual feedback modalities, our work contributes to developing more accessible and effective support systems for visual search across multiple spatial regions in daily activities.
In summary, this paper explores peripheral vision assistance and other visual feedback integration in multi-area search tasks. We review related works on peripheral vision systems in Section 2, detail our experimental methodology in Section 3, discuss implications and user preferences in Section 4, and conclude with future research directions for enhancing multi-area search performance in Section 5.

2. Related Works

Visual search tasks represent a fundamental challenge in human spatial cognition, typically requiring people to scan for targets across multiple spatial regions through coordinated eye and head movements to discover objects outside the immediate field of view [13,14]. This challenge becomes particularly pronounced when locating specific objects in complex, cluttered environments where targets may be distributed across various spatial locations [15,16]. Peripheral vision, which extends approximately 10 degrees beyond the central focus area [5], plays a critical role in this process by enabling initial target detection while central vision performs detailed verification [17]. This dual-vision mechanism not only supports daily activities and complex motor tasks [18,19] but also allows us to perceive and monitor potential targets in different spatial locations while maintaining focus on the central visual field [20].
Although peripheral vision exhibits lower detail resolution and reduced clarity when processing color, form, and text [21,22], it demonstrates enhanced sensitivity to brightness changes and motion [23], making it particularly suitable for attention guidance applications. The strategic enhancement of visual saliency through peripheral cues has emerged as a promising approach to improve target identification, accelerating both target detection speed and cognitive processing efficiency [9,24]. Research has demonstrated that peripheral visual cues delivered through technologies such as Head-Mounted Displays can effectively redirect attention to areas outside the main visual focus while users maintain concentration on their central visual field [21], highlighting the potential for developing sophisticated attention guidance systems.
Building on these characteristics, peripheral vision has been effectively utilized to enhance user interaction and situational awareness across various domains. In display technologies, strategic highlighting directs users’ attention to critical information [25], with implementations ranging from HMDs [26] to ambient lighting systems [27]. This approach has proven particularly effective in conveying emergency alerts in medical settings [28]. In traffic safety, peripheral visual cues have been used to enhance drivers’ attention. Matviienko et al. suggested that using light-emitting diodes (LEDs) in the peripheral vision can improve drivers’ ability to maintain attention on the road [29]. In an independent experiment, they found that, compared with handlebar vibrations and LED-equipped helmets, Head-Up Display (HUD) arrows were more distracting. Participants preferred cues from LED-equipped helmets, noting that they could take advantage of peripheral vision without obstructing the primary field of view [30]. Furthermore, Meschtscherjakov et al. revealed that manipulating both frequency and brightness of peripheral LED lights could significantly improve drivers’ perception of speed changes, although the width of the light bands had no significant effect [11].
In sports training and navigation contexts, peripheral visual cues have been integrated to enhance safety and performance. Studies have explored LED-based systems for sports safety [31], directional guidance [32], and navigation tasks [33]. Research on HMDs shows that AR arrows can effectively delineate targets [8], though visual arrow cues are less effective for objects located behind users [34]. While these applications demonstrate peripheral feedback effectiveness in specific domains, they reveal the need for a more comprehensive understanding of cross-regional attention guidance mechanisms.
Multiple studies demonstrate the effectiveness of peripheral visual cues in guiding cross-regional attention distribution. Systems like AmbiGlasses utilize peripheral illumination to provide environmental information without disrupting users’ focus on the central visual field [6]. While peripheral light signals initially enhance perception capabilities across different spatial regions, their effect may weaken over time [4]. Notably, even when attention is diverted, such as when using smartphones, participants can still effectively detect peripheral light stimuli [21]. Visual cues like 3D arrows improve spatial layout memory and navigation performance, better maintaining awareness of object directions compared with 2D maps [35], and increasing search efficiency in outdoor augmented reality, though this advantage may diminish during the final target localization phase [36]. Research with cyclists shows that when needing to attend to multiple spatial regions simultaneously, users respond faster to visual signals such as ambient light and HUD arrows than to vibration or voice prompts [2], although strong lighting conditions may increase the possibility of attention dispersion, affecting visual cue recognition [37]. These findings emphasize the value of peripheral visual cues in enhancing cross-regional perception and spatial navigation, highlighting the importance of factors such as ambient lighting conditions, signal type selection, spatial distribution complexity, and adaptation patterns.
Building on insights from prior research, our work aims to explore the optimization problem of peripheral visual cues for distributed spatial attention tasks. While existing studies have demonstrated the effectiveness of individual visual modalities—either peripheral cues or central guidance systems—there remains a significant gap in understanding how these complementary channels can be systematically integrated to enhance visual search performance. Most prior work focuses on single-modality approaches, lacking a comprehensive evaluation of the synergistic effects between peripheral and central visual feedback systems. By designing systems that leverage different visual signals to enhance peripheral vision feedback and systematically investigating the combined effects of different visual cues, we seek to improve target search efficiency and attention guidance across different spatial regions, addressing the limitations of isolated single-channel approaches and providing effective solutions for navigating complex visual environments.

3. Visual Feedback Improves Multi-Area Target Search Efficiency

This study examines the role of peripheral visual feedback in task execution during target search across multiple spatial regions. The research addresses three primary areas: interaction design, user evaluation, and experimental analysis. Through this investigation, we aim to understand how peripheral vision affects task efficiency when users need to attend to different spatial areas simultaneously.

3.1. Study Design

Here we present a detailed evaluation of peripheral vision guidance systems through experimental research. The study examines three critical design elements: the Head-Mounted Display (HMD), the Ambient Display, and their combined integration as the HMD-Ambient Display. The research also provides detailed documentation of system development and testing procedures.

3.1.1. Visual Feedback Design

Visual guidance methods, particularly arrows, have proven effective for directing user attention across digital platforms like HMDs and mobile screens [36]. Studies demonstrate that arrows and ambient lighting successfully guide users in locating objects beyond their field of view [35], providing cycling signals [2], and supporting virtual target search [36]. Ambient Display cues have shown value in applications ranging from medical alerts [28] to traffic monitoring [4] and navigation assistance [32].
Our study, therefore, employed arrows and ambient lighting as the primary visual guidance methods. A separate Ambient Display was chosen to maintain a clear distinction between focal and peripheral visual channels, allowing us to examine fundamental differences in how users process guidance information across these distinct visual pathways. The Ambient Display was positioned in the user's far-peripheral vision, approximately 60 degrees from the center of vision horizontally and 2 cm above the user's central visual focus. This placement falls within the far-peripheral visual field [38], ensuring the cues remained detectable while not interfering with central processing. To prevent information clutter in the central visual field [10], we positioned the HMD arrows 20 degrees away from the user's central gaze point; this placement ensured clear visibility in our target search and selection environment. Both the HMD arrows and the Ambient Display used red coloring [3], as it best conveys urgency [28]. HMD arrows shook left and right at 4 Hz, while the Ambient Display lights pulsed at the same frequency; this temporal frequency optimizes directional recognition accuracy [39]. Gaze and dwell time were used for selection confirmation to keep the target selection procedure simple.
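To make these parameters concrete, the following Unity C# sketch animates an arrow cue as described above (4 Hz oscillation, 20 degrees off the gaze axis, 20 cm lateral range); it is a minimal illustration rather than the study's actual implementation, and the component and field names are hypothetical.

```csharp
using UnityEngine;

// Illustrative sketch of the HMD arrow cue: placed 20 degrees off the gaze
// axis at 1.1 m, shaking left-right at 4 Hz over a 20 cm lateral range.
public class ArrowCue : MonoBehaviour
{
    public Transform cueArrow;          // hypothetical arrow object
    public float oscillationHz = 4f;    // shake frequency (4 Hz)
    public float lateralRangeM = 0.2f;  // total lateral range (20 cm)
    public float offsetDeg = 20f;       // angular offset from central gaze
    public float distanceM = 1.1f;      // distance from the user

    void Update()
    {
        Transform head = Camera.main.transform;

        // Base position: rotate the head's forward axis by the angular
        // offset, then project out to the cue distance.
        Vector3 dir = Quaternion.AngleAxis(offsetDeg, head.up) * head.forward;
        Vector3 basePos = head.position + dir * distanceM;

        // Sinusoidal left-right shake: +/- half the lateral range at 4 Hz.
        float phase = Mathf.Sin(2f * Mathf.PI * oscillationHz * Time.time);
        cueArrow.position = basePos + head.right * (0.5f * lateralRangeM * phase);
        cueArrow.rotation = Quaternion.LookRotation(dir);
    }
}
```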

3.1.2. Equipment and Software

This study employed a Microsoft HoloLens 2 (Microsoft Corporation, Redmond, WA, USA) for the experimental setup, which features a transparent stereoscopic display and head-gaze tracking. The application was built on the Unity 2019.4 platform with Mixed Reality Toolkit (MRTK) version 2.6, and development and deployment took place on Windows 10 systems using Microsoft Visual Studio 2019 (version 16.9.4). This configuration provided high-quality visual rendering and interactivity, which enhanced experimental accuracy. The LED strips for the Ambient Display were controlled via a Makeblock Halocode board (Makeblock Company Limited, Shenzhen, Guangdong, China). Directional information for the targets was published from the experimental application over the Message Queuing Telemetry Transport (MQTT) protocol to guide user actions. The MQTT server ran EMQX V4.0.4 on the Tencent Cloud platform in a CentOS 7 environment. The Ambient Display control application subscribed to these directional guidance messages and activated the appropriate feedback when data arrived.
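As a concrete illustration of this message flow, the sketch below publishes a target-direction message from a C# application using the open-source MQTTnet client library; the paper does not specify its client library, and the broker host, topic name, and payload format here are assumptions for illustration only.

```csharp
using System.Threading;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Client.Options;

// Illustrative publisher-side sketch: the experimental application sends
// the target's direction over MQTT; the Halocode LED controller subscribes
// to the same topic and drives the Ambient Display accordingly.
public static class DirectionPublisher
{
    public static async Task PublishAsync(string direction)
    {
        IMqttClient client = new MqttFactory().CreateMqttClient();
        var options = new MqttClientOptionsBuilder()
            .WithTcpServer("broker.example.com", 1883) // placeholder EMQX host
            .Build();
        await client.ConnectAsync(options, CancellationToken.None);

        var message = new MqttApplicationMessageBuilder()
            .WithTopic("study/ambient/direction")      // hypothetical topic
            .WithPayload(direction)                    // e.g., "rear-left"
            .Build();
        await client.PublishAsync(message, CancellationToken.None);
        await client.DisconnectAsync();
    }
}
```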

3.1.3. Independent Variables

The experiment comprised two user tasks, each with three independent variables. The independent variables for the main search task were as follows:
  • Visual Feedback Type: HMD Display, Ambient Display, and HMD-Ambient Display
  • Row Number: 1, 2, 3, 4, 5 (representing different vertical positions in the display grid)
  • Column Number: 1, 2, 3, 4, 5 (representing different horizontal positions in the display grid)
The row and column variables were designed to systematically examine target search performance across the entire frontal display matrix, allowing analysis of potential spatial effects in target selection. For the rear visual area search, the following variables were employed:
  • Visual Feedback Type: HMD Display, Ambient Display, and HMD-Ambient Display
  • Row Number: 1, 2
  • Column Number: 1, 2, 3, 4, 5

3.1.4. Experimental Protocol

The study implemented a repeated measures within-subjects experimental design. Each participant completed three sessions that tested different visual feedback types: HMD Display, Ambient Display, and the combined HMD-Ambient Display. We counterbalanced the session order and provided two-minute rest periods between sessions. Each session required participants to complete all targets for both frontal visual area search and rear visual area search. Before each session, participants undertook five practice trials (three frontal visual area search, two rear visual area search) to understand the procedure. The formal trials followed, comprising 35 trials (25 frontal visual area search, 10 rear visual area search). Including both practice and formal trials, each participant completed 40 trials per session, resulting in 120 trials across the entire study. Figure 1a illustrates the experimental design.
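The session ordering and trial counts could be generated as in the sketch below; full permutation counterbalancing of the three conditions across the 12 participants is an assumption made for illustration, since the specific counterbalancing scheme is not detailed above.

```csharp
using System.Collections.Generic;

// Illustrative sketch: counterbalance the three feedback conditions across
// participants and lay out the per-session trial counts described above.
public static class SessionPlan
{
    static readonly string[][] Orders =
    {
        // All 6 permutations of the three conditions; with 12 participants
        // each order would be used twice (assumed scheme).
        new[] { "HMD", "Ambient", "HMD-Ambient" },
        new[] { "HMD", "HMD-Ambient", "Ambient" },
        new[] { "Ambient", "HMD", "HMD-Ambient" },
        new[] { "Ambient", "HMD-Ambient", "HMD" },
        new[] { "HMD-Ambient", "HMD", "Ambient" },
        new[] { "HMD-Ambient", "Ambient", "HMD" },
    };

    public static IEnumerable<string> PlanFor(int participantId)
    {
        foreach (string condition in Orders[participantId % Orders.Length])
        {
            // Per session: 5 practice trials (3 frontal, 2 rear), then
            // 35 formal trials (25 frontal, 10 rear) = 40 trials total.
            yield return $"{condition}: 3+2 practice, 25 frontal + 10 rear formal";
        }
    }
}
```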

3.1.5. Study Participants and Process

A total of 12 individuals (6 males and 6 females) were enrolled from the university campus for this research. The participants had a mean age of 23.3 years (SD = 1.33). Within the cohort, seven participants reported experience with three-dimensional (3D) gaming, three had exposure to virtual reality (VR) environments, and one had previous interaction with an augmented reality head-up display (AR-HUD) in vehicles.
During the experimental procedure, participants were comfortably seated in swivel chairs and instructed to select a designated target, the letter “P”, by directing their gaze towards it using head movements. While all targets used the same gaze-and-hold selection mechanism, frontal targets appeared directly in view, whereas rear targets required directional cues because they lay outside the visual field. In accordance with protocols established by preceding research [12,40], the study used letters to represent targets. The target letter “P” appeared either directly in the participant’s field of view (frontal visual area search) or behind them (rear visual area search).
In this study, each letter appeared within a dark blue square button using white font. To facilitate selection confirmation, a “gaze-and-hold” technique was employed, whereby participants could confirm their choice by fixating on the target for one second. As the participant’s gaze targeted a button, its background shifted to a lighter blue shade, while a navy circular element emerged and expanded outward from the button’s central point, indicating the ongoing selection process. After one second, once the circle fully expanded, the target was considered selected.
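A minimal Unity C# sketch of this gaze-and-hold mechanism is given below; the raycast-based gaze test and the component names are illustrative assumptions rather than the study's actual code.

```csharp
using UnityEngine;

// Illustrative sketch of gaze-and-hold selection: a button confirms after
// the head-gaze ray has dwelled on it for one second, while an expanding
// circle visualizes selection progress.
public class DwellButton : MonoBehaviour
{
    public float dwellSeconds = 1f;     // hold duration for confirmation
    public Transform progressCircle;    // hypothetical expanding indicator
    float gazeTime;

    void Update()
    {
        // Head gaze: a ray from the camera along its forward direction.
        var ray = new Ray(Camera.main.transform.position,
                          Camera.main.transform.forward);
        bool gazed = Physics.Raycast(ray, out RaycastHit hit, 10f)
                     && hit.transform == transform;

        gazeTime = gazed ? gazeTime + Time.deltaTime : 0f;

        // Expand the progress circle from zero to full size over the dwell.
        float t = Mathf.Clamp01(gazeTime / dwellSeconds);
        progressCircle.localScale = Vector3.one * t;

        if (gazeTime >= dwellSeconds)
            OnSelected();
    }

    void OnSelected()
    {
        Debug.Log($"Selected {name}");
        gazeTime = 0f;
    }
}
```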
For the frontal visual area search, participants identified and selected the target letter “P” from 25 buttons arranged 1.35 m in front of them. Each button was 20 cm wide, with a 2.5 cm gap between adjacent buttons, so the grid spanned 1.1 m in both dimensions. The rear visual area search involved selecting targets located behind the participant, where ten buttons were evenly distributed over a plane 1.35 m away; each button was also 20 cm wide, spaced 42.5 cm apart, extending across 2.7 m in width and 82 cm in height. After the participant had selected two to four randomly chosen targets in the frontal visual area, the letter “P” could appear in the rear visual area behind them.
Under the HMD Display condition, as illustrated in Figure 1b, a red arrow indicated the direction of targets behind the participant. The arrow was 5 cm in both length and height, positioned 1.1 m from the user at 20 degrees peripheral to the focal axis to prevent information clutter in the central visual field. It initially followed the participant’s head movements while shaking left and right over a lateral range of 20 cm at 4 Hz. In the Ambient Display condition, also shown in Figure 1b, two sets of RGB LED strips mounted on the Microsoft HoloLens 2 provided ambient feedback; the red LEDs pulsed at 4 Hz, producing flashes that remained active for 100 ms each. For both the HMD and Ambient Displays, the feedback disappeared once the gaze deviated more than 30 degrees from the frontal direction.
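The 30-degree gating rule can be sketched as follows in Unity C#, assuming the frontal axis is captured once at session start; this is an illustrative reconstruction, not the study's implementation.

```csharp
using UnityEngine;

// Illustrative sketch: hide the cue once the head direction deviates more
// than 30 degrees from the frontal axis, per the rule described above.
public class CueGate : MonoBehaviour
{
    public GameObject hmdArrow;   // HMD arrow cue object
    public float cutoffDeg = 30f; // deviation threshold
    Vector3 frontalAxis;          // captured once, at session start

    void Start() => frontalAxis = Camera.main.transform.forward;

    void Update()
    {
        float deviation = Vector3.Angle(Camera.main.transform.forward, frontalAxis);
        hmdArrow.SetActive(deviation <= cutoffDeg);
        // An analogous MQTT message would switch the ambient LEDs off
        // (transport omitted here; see the publisher sketch in Section 3.1.2).
    }
}
```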

3.2. Results

In this study, we recorded performance metrics, including selection time and individual behavior data encompassing head rotations and movements, to evaluate participant performance. Reaction times for both frontal and rear visual area searches were also recorded. Given that frontal targets lay within participants’ direct visual field while rear targets lay outside it, we analyzed the frontal and rear visual areas separately to evaluate the differential effects of visual cues on performance across these distinct spatial regions. A repeated measures Analysis of Variance (ANOVA) was conducted on the performance data (selection time, reaction time, head rotations, and head movements) using Feedback Modality, Row Number, and Column Number as factors. Since participants successfully selected all targets during the experiments, error rates were not analyzed.
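For reference, with $k$ factor levels and $n = 12$ participants, the within-subjects $F$ ratio and its degrees of freedom take the standard form

$$F \;=\; \frac{MS_{\text{factor}}}{MS_{\text{factor}\times\text{subject}}}, \qquad df = \bigl(k-1,\; (k-1)(n-1)\bigr),$$

so the five-level Row and Column factors yield $df = (4, 44)$ and the three-level Feedback Modality factor yields $df = (2, 22)$, matching the values reported below.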

3.2.1. Data in Frontal Visual Area Search

Selection Time
Selection time was calculated depending on the previous trial type. If the immediately preceding trial was also a frontal visual area search (i.e., consecutive trials in the same search area type), we measured selection time from the moment the target (the letter “P”) was displayed until the selection was successfully completed. If the previous task was a rear visual area search located behind the participant, the starting point of the selection time had to be adjusted for the required head movement back to the frontal position. Considering the typical 60° diameter of the human central visual field [41], within which all frontal visual area search targets fell, selection timing commenced once the participant had reoriented their head to within 30 degrees of the frontal direction, continuing through to successful completion of the task.
Significant main effects emerged for Row (F(4,44) = 6.51, p < 0.001) and Column (F(4,44) = 7.61, p < 0.001). No additional main or interaction effects reached significance. Figure 2a illustrates the mean selection time for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that selection time for row 5 (3.10 s) was significantly longer than for row 2 (2.60 s) (p < 0.05) and row 3 (2.63 s) (p < 0.05). Selection time for column 3 (2.56 s) was significantly shorter than for column 1 (2.97 s) (p < 0.05), column 4 (2.73 s) (p < 0.05), and column 5 (3.06 s) (p < 0.01).
Angle of Head Rotation
Total angle of head rotation was recorded throughout the selection period. Significant main effects emerged for Row (F(4,44) = 2.88, p < 0.05) and Column (F(4,44) = 7.59, p < 0.001). No additional main or interaction effects reached significance. Figure 2b illustrates the mean angle of head rotation for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that the angle of head rotation for column 5 (83.12 degrees) was significantly larger than for column 2 (65.05 degrees) (p < 0.05) and column 3 (62.74 degrees) (p < 0.05).
Distance of Head Movement
Total distance of head movement was recorded throughout the selection period. Significant main effects emerged for Row (F(4,44) = 3.07, p < 0.05) and Column (F(4,44) = 6.91, p < 0.001). Figure 2c illustrates the mean total distance of head movement for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that head movement distance for column 5 (17.16 cm) was significantly larger than for column 2 (13.65 cm) (p < 0.05) and column 3 (13.39 cm) (p < 0.05).
Distance of head movement along each axis was also recorded. Along the X-axis, a significant main effect emerged for Column (F(4,44) = 7.47, p < 0.001); Bonferroni-corrected pairwise comparisons indicated that column 5 distance (9.56 cm) was significantly larger than column 2 (7.40 cm) (p < 0.05) and column 3 (7.14 cm) (p < 0.05). Along the Y-axis, significant main effects emerged for Row (F(4,44) = 6.94, p < 0.001) and Column (F(4,44) = 5.47, p < 0.01); row 5 distance (7.17 cm) was significantly larger than row 2 (5.25 cm) (p < 0.05) and row 3 (5.63 cm) (p < 0.05), and column 5 distance (6.78 cm) was significantly larger than column 3 (5.38 cm) (p < 0.01) and column 4 (5.66 cm) (p < 0.01). Along the Z-axis, a significant main effect emerged for Column (F(4,44) = 3.74, p < 0.05), but post hoc Bonferroni pairwise comparisons did not reveal significant differences between individual rows or columns.

3.2.2. Data in Rear Visual Area Search

Reaction Time
Data collection for the rear visual area search included measuring participants’ reaction times. Considering that the central visual area encompasses 60° in diameter, allowing detection of key contrasts, colors, and motions, as noted by Strasburger et al. [41], and given that all frontal visual area search targets fell within this range, reaction times were measured from the moment the directional arrow appeared to the moment participants turned their heads more than 30 degrees from the forward-facing position. This captures the initiation of the reaction, as distinct from the complete selection process, which requires sustained gaze confirmation.
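A minimal Unity C# sketch of this timing rule follows; capturing the frontal axis at cue onset (when the participant is still facing forward after the preceding frontal trials) is an assumption made for illustration.

```csharp
using UnityEngine;

// Illustrative sketch: reaction time runs from cue onset until the head
// has turned more than 30 degrees away from the frontal direction.
public class ReactionTimer : MonoBehaviour
{
    public float thresholdDeg = 30f;
    Vector3 frontalAxis;
    float onsetTime;
    bool running;

    // Called when the directional arrow (or ambient cue) is shown.
    public void OnCueShown()
    {
        frontalAxis = Camera.main.transform.forward; // assumed frontal at onset
        onsetTime = Time.time;
        running = true;
    }

    void Update()
    {
        if (!running) return;
        if (Vector3.Angle(Camera.main.transform.forward, frontalAxis) > thresholdDeg)
        {
            Debug.Log($"Reaction time: {Time.time - onsetTime:F2} s");
            running = false; // selection-time measurement would start here
        }
    }
}
```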
Significant main effects emerged for Feedback Modality (F(2,22) = 5.60, p < 0.05) and Row (F(1,11) = 11.32, p < 0.01). Figure 3a illustrates the mean reaction time for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that the HMD Display reaction time (5.29 s) was significantly slower than the HMD-Ambient Display (4.37 s) (p < 0.05), and that row 1 reaction time (4.98 s) was significantly slower than row 2 (4.56 s) (p < 0.01).
Selection Time
In the rear visual area search, selection times were measured starting when the participant’s head orientation deviated by more than 30 degrees from the forward position, ending upon successful task completion.
A significant main effect emerged for Column (F(4,44) = 4.94, p < 0.01). No additional main or interaction effects reached significance. Figure 3b illustrates the mean selection time for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that column 3 selection time (3.66 s) was significantly longer than column 2 (3.10 s) (p < 0.01).
Angle of Head Rotation
Total angle of head rotation was recorded throughout the selection period. A significant main effect emerged for Column (F(4,44) = 4.94, p < 0.01). No additional main or interaction effects reached significance. Figure 3c illustrates the mean angle of head rotation for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that column 3 head rotation (175.61 degrees) was significantly larger than column 1 (144.47 degrees) (p < 0.05), column 2 (139.24 degrees) (p < 0.001), column 4 (149.35 degrees) (p < 0.05), and column 5 (142.74 degrees) (p < 0.05).
Distance of Head Movement
Total distance of head movement was recorded throughout the selection period. A significant main effect emerged for Column (F(1.91,21.00) = 5.58, p < 0.05; Greenhouse–Geisser corrected for a sphericity violation, with adjusted degrees of freedom). Figure 3d illustrates the mean total distance of head movement for Feedback Modality, Row Number, and Column Number. Pairwise comparisons with Bonferroni correction indicated that column 3 distance (31.16 cm) was significantly larger than column 1 (26.87 cm) (p < 0.05), column 2 (26.61 cm) (p < 0.01), column 4 (25.74 cm) (p < 0.05), and column 5 (25.70 cm) (p < 0.05).
Distance of head movement along each axis was also recorded. Along the X-axis, a significant main effect emerged for Column (F(2.12,23.35) = 3.19, p < 0.05; Greenhouse–Geisser corrected); column 3 distance (17.51 cm) was significantly larger than column 2 (14.98 cm) (p < 0.05). Along the Y-axis, a significant main effect emerged for Column (F(4,44) = 14.55, p < 0.001); column 3 distance (9.28 cm) was significantly larger than column 1 (6.66 cm) (p < 0.001), column 2 (6.64 cm) (p < 0.001), column 4 (7.09 cm) (p < 0.01), and column 5 (6.68 cm) (p < 0.05). Along the Z-axis, a significant main effect emerged for Column (F(1.74,19.13) = 4.20, p < 0.05; Greenhouse–Geisser corrected); column 3 distance (18.22 cm) was significantly larger than column 1 (16.16 cm) (p < 0.05) and column 2 (16.35 cm) (p < 0.05). All pairwise comparisons used Bonferroni correction.

3.2.3. User Preference

Following the test, participants completed a questionnaire regarding their preferences and comments. The NASA-TLX workload assessment was also administered, using uniform weighting across all six dimensions (Mental, Physical, Temporal, Performance, Effort, and Frustration). A one-way repeated measures ANOVA on Feedback Modality demonstrated significant differences in user preference (F(2,22) = 3.56, p < 0.05); post hoc LSD pairwise comparisons showed that the HMD Display preference rating (5.25) was significantly lower than the HMD-Ambient Display (7.50) (p < 0.05), as shown in Figure 4a. The same analysis demonstrated significant differences in effort (F(2,22) = 3.64, p < 0.05); post hoc LSD comparisons showed that HMD Display effort (40.41) was significantly higher than HMD-Ambient (21.67) (p < 0.05). Main effects did not reach significance for overall TLX scores or for the Mental demand, Physical demand, Temporal demand, Performance, and Frustration subscales, as shown in Figure 4b.
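With uniform weighting, the overall workload score reduces to the unweighted mean of the six subscale ratings (the Raw TLX form):

$$\mathrm{TLX}_{\text{overall}} \;=\; \frac{1}{6}\sum_{i=1}^{6} r_i,$$

where $r_i$ is the 0–100 rating on each dimension.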
In the post-experimental interviews, participants expressed various opinions with regard to the three display modalities. Seven participants (58%) reported that although the HMD Display could precisely indicate target locations, the cues provided were occasionally not easily noticeable. They found the Ambient Display to be more noticeable than the HMD Display. They appreciated that the Ambient Display did not obstruct their primary field of view. Moreover, under the HMD-Ambient Display condition, eight participants (67%) reported being initially drawn to the ambient lighting effects before noticing the arrow indicators, suggesting a hierarchy in cue salience.
Ten participants (83%) believed that the HMD-Ambient Display holds significant potential for application in the transportation sector. Among them, two participants specifically remarked, “The HMD-Ambient Display serves as an effective combined cue system. The change in ambient lighting immediately caught my attention when a target appeared, allowing me to perceive the direction of the target almost instinctively. Subsequently, the precise arrow indications quickly pinpointed the exact location of the target. This blend of cues enables intuitive and rapid target localization, making the process both efficient and effortless”. Eight of these participants also reported that the Ambient and HMD-Ambient Displays did not distract their attention from the frontal visual area search. They emphasized that the placement of lights at the periphery of vision is optimal and that the combination of these cues does not impose additional cognitive load.
Moreover, four participants (33%) recommended incorporating haptic feedback to enhance the overall experience. They pointed out that in complex environments requiring attention across multiple spatial regions, such as driving at night while following navigation instructions and monitoring road conditions, the visual channel is often restricted and overloaded. Overall, the interview results indicate strong participant preference for the combined HMD-Ambient Display, with participants appreciating both the immediate attention-capturing effect of ambient lighting and the precise directional guidance provided by HMD arrows.

4. Discussion

This research examines how an integrated peripheral vision and Head-Mounted Display system affects efficiency when searching across multiple spatial regions, with particular focus on target search tasks that require attention shifts. Through systematic assessment of complementary visual feedback methods, the study identifies strategies that enhance performance while maintaining cognitive load levels when attending to different spatial areas simultaneously. Results from the study offer insights into how integrated visual channels guide user attention and improve capabilities for directing attention across various spatial regions.

4.1. Advantages of Integrated HMD-Ambient Display System

We introduced an integrated visual approach that combines Ambient Display utilizing peripheral vision cues with traditional HMD technology to facilitate multi-region target search. This complementary visual approach represents a significant advancement from our previous work [12], where we focused solely on HMD visual cues to guide user attention in multitasking environments. The integrated HMD-Ambient Display system offers several advantages over single-modality solutions, particularly in terms of leveraging complementary visual processing pathways.
Our results demonstrate (see Table 1) that the integrated HMD-Ambient Display significantly outperformed the HMD Display alone in both user preference and reaction time during task switching. Participants reported that the integrated system was more noticeable and less intrusive, as it employed both peripheral and central visual channels without blocking the primary field of view. This dual-channel visual approach allowed users to maintain focus on their primary visual area while still being effectively alerted to rear visual area searches.
The effectiveness stems from the complementary nature of the two visual channels: peripheral stimuli excel at rapid attention capture due to their sensitivity to motion and luminance changes, while central HMD cues provide precise directional guidance for target localization [23]. Our findings suggest that the key to balancing information richness and cognitive load lies in employing functionally distinct visual channels—using minimalist peripheral cues solely for efficient attention capture while reserving precise guidance tasks for central visual processing, thereby preventing sensory overload through strategic division of cognitive demands. Through user interviews, we found that the integrated visual system created a natural two-stage process where flickering peripheral lights rapidly captured users’ attention, followed by central arrows that provided specific directional guidance. This complementary system optimizes visual processing pathways, enabling users to focus on their primary visual area while maintaining awareness of peripheral visual regions. This approach aligns with natural visual search mechanisms, which utilize peripheral vision for detecting potential targets and central vision for their verification [17].
In contrast to previous research [12], which relied solely on HMD-based visual cues for task-switching guidance, the integrated HMD-Ambient Display system represents a significant improvement by enabling multi-region attention shifts without occluding the central field of view. The current integrated visual approach overcomes limitations by combining ambient peripheral signals with targeted HMD guidance, creating a hybrid solution that reduces cognitive load while providing practical alternatives for various applications.

4.2. Real-World Deployment Considerations

While our findings show that the standalone Ambient Display did not demonstrate statistically significant performance differences compared with the HMD Display, the dramatic cost difference between these approaches makes the LED-based ambient system highly attractive for real-world applications. The cost-effectiveness of the ambient component presents compelling practical advantages that cannot be overlooked when considering widespread deployment.
HMD systems typically cost hundreds or thousands of dollars per unit, requiring sophisticated optical components, high-resolution displays, and complex processing hardware. In contrast, LED strips and basic control electronics represent a significant cost reduction compared with HMD alternatives. Thus, LED-based systems could be viable for mass deployment in consumer safety equipment where price sensitivity is critical, particularly for motorcycle and bicycle safety helmets, where manufacturers can integrate peripheral visual assistance without significantly increasing product costs.
Beyond cost considerations, LED-based ambient systems usually offer superior durability, longer battery life, and simpler maintenance compared with complex HMD systems. In harsh outdoor environments where motorcycles and bicycles operate, these reliability advantages are crucial. The ambient approach also avoids the visual obstruction and optical complexity that can make HMDs impractical for certain transportation applications where unobstructed vision is essential for safety. This cost-effective approach provides realistic pathways for translating research findings into commercially viable products that can achieve broad market penetration.

4.3. Practical Implications for Visual Assistive System Design

The insights gained from our studies have significant practical implications for designing accessible assistive systems that enhance efficiency when searching across multiple spatial areas. In contexts such as motorcycling, cycling, driving, or navigating complex environments, the ability to manage attention across different spatial regions while responding to critical information is essential for safety and performance.
Our findings highlight several key design principles for integrated visual feedback systems. The complementary nature of peripheral and central visual cues enables more effective attention management than either modality alone, suggesting that designers should prioritize integration of multiple visual channels rather than optimizing single modalities. The rapid detection capabilities of peripheral vision, combined with the precision of central visual guidance, create synergistic effects that enhance overall system performance. Previous studies examining peripheral visual cues [2] and arrow-based guidance [36] report similar performance improvements, though direct comparison remains challenging due to differences in task design and evaluation metrics [35]. Our controlled evaluation provides standardized benchmarks that can inform future comparative studies and support evidence-based design decisions.
System designers should consider the deployment environment when selecting appropriate technologies. For applications requiring widespread adoption, cost-effective solutions using LED-based peripheral displays may provide better overall impact than sophisticated but expensive alternatives. The integration approach should leverage existing infrastructure where possible, incorporating ambient visual elements into safety equipment that users already wear rather than requiring additional devices.
Reliability and durability considerations are paramount for real-world applications, particularly in transportation contexts where system failure could compromise safety. Simple, robust technologies often provide better long-term value than complex systems that may be prone to failure or require frequent maintenance. The Ambient Display approach demonstrates how effective assistance can be achieved through elegant simplicity rather than technological complexity.
The dual-channel visual design extends beyond current implementations to encompass broader visual integration opportunities. Future systems should explore combinations of different visual feedback mechanisms to create comprehensive assistance that accommodates diverse user preferences and environmental conditions. This approach provides efficient solutions for searching across multiple spatial areas while balancing innovation with practical deployment considerations.

5. Conclusions and Future Work

This study investigated how an integrated peripheral vision and Head-Mounted Display system enhances efficiency in target search tasks across multiple spatial regions. Our research demonstrates that combining Ambient Display technology with HMD systems significantly improves user performance by facilitating rapid attention shifts through complementary visual channels without increasing cognitive load. While standalone Ambient Displays using LED technology did not show statistically significant performance differences compared with HMDs, their substantial cost advantages make them highly attractive alternatives for real-world applications, particularly in safety-critical contexts such as motorcycle and bicycle helmets, where cost-effectiveness directly impacts adoption rates.
Future work should explore the application of this integrated visual system in more complex and realistic settings to validate and extend our findings. Promising directions include investigating adaptability across diverse user populations, recruiting participants with broader age ranges and varied backgrounds to enhance generalizability, and examining potential gender and individual differences in visual processing and peripheral vision capabilities across demographic groups; evaluating system performance under varying environmental conditions such as different lighting scenarios, motion contexts, distraction levels, and user fatigue states; examining performance in real-world transportation contexts beyond laboratory conditions; and evaluating long-term usability in harsh outdoor environments. Additionally, exploring the integration of this system with other sensory modalities will further enrich the design concepts for accessible assistive technologies.

Author Contributions

Conceptualization, G.W. and H.-H.W.; methodology, G.W.; software, Z.H.; validation, H.-H.W.; formal analysis, G.W. and Z.H.; investigation, G.W.; resources, G.W.; data curation, G.W.; writing—original draft preparation, G.W.; writing—review and editing, G.W. and Z.H.; visualization, G.W.; supervision, H.-H.W.; project administration, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of School of Design Arts, Xiamen University of Technology (Approval Number: XMUT-SDA-IRB-2024-05/011, Approval Date: 6 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are contained within the manuscript. Raw data are available from the corresponding author upon request.

Acknowledgments

We appreciate all participants who took part in the studies.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARAugmented Reality
HMDHead-Mounted Display
HUDHead-Up Display
MRTKMixed Reality Toolkit

References

  1. Capallera, M.; Angelini, L.; Meteier, Q.; Khaled, O.A.; Mugellini, E. Human-Vehicle Interaction to Support Driver’s Situation Awareness in Automated Vehicles: A Systematic Review. IEEE Trans. Intell. Veh. 2023, 8, 2551–2567. [Google Scholar] [CrossRef]
  2. Matviienko, A.; Mehmedovic, D.; Müller, F.; Mühlhäuser, M. “Baby, You can Ride my Bike”: Exploring Maneuver Indications of Self-Driving Bicycles using a Tandem Simulator. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–21. [Google Scholar] [CrossRef]
  3. Tseng, H.Y.; Liang, R.H.; Chan, L.; Chen, B.Y. LEaD: Utilizing Light Movement as Peripheral Visual Guidance for Scooter Navigation. In Proceedings of the MobileHCI ’15: 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, Copenhagen, Denmark, 24–27 August 2015; ACM: New York, NY, USA, 2015; pp. 323–326, ISBN 9781450336529. [Google Scholar] [CrossRef]
  4. Van Veen, T.; Karjanto, J.; Terken, J. Situation Awareness in Automated Vehicles through Proximal Peripheral Light Signals. In Proceedings of the AutomotiveUI ’17: ACM 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 287–292, ISBN 9781450351508. [Google Scholar] [CrossRef]
  5. Bürger, D.; Schley, M.K.; Loerwald, H.; Pastel, S.; Witte, K. Comparative analysis of visual field characteristics and perceptual processing in peripheral vision between virtual reality and real world. Hum. Behav. Emerg. Technol. 2024, 2024, 2845190. [Google Scholar] [CrossRef]
  6. Poppinga, B.; Henze, N.; Fortmann, J.; Heuten, W.; Boll, S.C. AmbiGlasses—Information in the Periphery of the Visual Field. In Mensch & Computer 2012: Interaktiv Informiert—Allgegenwärtig und Allumfassend!? Oldenbourg Verlag: München, Germany, 2012. [Google Scholar]
  7. Chaturvedi, I.; Bijarbooneh, F.H.; Braud, T.; Hui, P. Peripheral vision: A new killer app for smart glasses. In Proceedings of the IUI ’19: 24th International Conference on Intelligent User Interfaces, Marina del Rey, CA, USA, 16–20 March 2019; ACM: New York, NY, USA, 2019; pp. 625–636, ISBN 9781450362726. [Google Scholar] [CrossRef]
  8. Warden, A.C.; Wickens, C.D.; Mifsud, D.; Ourada, S.; Clegg, B.A.; Ortega, F.R. Visual Search in Augmented Reality: Effect of Target Cue Type and Location. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Atlanta, GA, USA, 10–14 October 2022; SAGE Publications Inc.: Los Angeles, CA, USA, 2022; Volume 66, pp. 373–377. [Google Scholar] [CrossRef]
  9. Sutton, J.; Langlotz, T.; Plopski, A.; Zollmann, S.; Itoh, Y.; Regenbrecht, H. Look over there! Investigating saliency modulation for visual guidance with augmented reality glasses. In Proceedings of the UIST ’22: 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; ACM: New York, NY, USA, 2022; pp. 1–15, ISBN 9781450393201. [Google Scholar] [CrossRef]
  10. Albrecht, M.; Assländer, L.; Reiterer, H.; Streuber, S. MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research. In Proceedings of the 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), Shanghai, China, 25–29 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 691–701, ISBN 9798350348156. [Google Scholar] [CrossRef]
  11. Meschtscherjakov, A.; Döttlinger, C.; Kaiser, T.; Tscheligi, M. Chase Lights in the Peripheral View: How the Design of Moving Patterns on an LED Strip Influences the Perception of Speed in an Automotive Context. In Proceedings of the CHI ’20: 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–9, ISBN 9781450367080. [Google Scholar] [CrossRef]
  12. Wang, G.; Wang, H.H.; Ren, G. Visual and Haptic Guidance for Enhancing Target Search Performance in Dual-Task Settings. Appl. Sci. 2024, 14, 4650. [Google Scholar] [CrossRef]
  13. Stein, N.; Watson, T.; Lappe, M.; Westendorf, M.; Durant, S. Eye and head movements in visual search in the extended field of view. Sci. Rep. 2024, 14, 8907. [Google Scholar] [CrossRef] [PubMed]
  14. Postuma, E.M.J.L.; Cornelissen, F.W.; Pahlevan, M.; Heutink, J.; De Haan, G.A. Reduced Field of View Alters Scanning Behaviour. Virtual Real. 2025, 29, 55. [Google Scholar] [CrossRef] [PubMed]
  15. Wolfe, J.M. Visual Search: How Do We Find What We Are Looking For? Annu. Rev. Vis. Sci. 2020, 6, 539–562. [Google Scholar] [CrossRef] [PubMed]
  16. Botch, T.L.; Garcia, B.D.; Choi, Y.B.; Feffer, N.; Robertson, C.E. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci. Rep. 2023, 13, 631. [Google Scholar] [CrossRef] [PubMed]
  17. David, E.J.; Beitner, J.; Võ, M.L.H. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J. Vis. 2021, 21, 3. [Google Scholar] [CrossRef] [PubMed]
  18. Vater, C.; Wolfe, B.; Rosenholtz, R. Peripheral vision in real-world tasks: A systematic review. Psychon. Bull. Rev. 2022, 29, 1531–1557. [Google Scholar] [CrossRef]
  19. Nuthmann, A.; Canas-Bajo, T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J. Vis. 2022, 22, 10. [Google Scholar] [CrossRef]
  20. Eckstein, M.P. Visual search: A retrospective. J. Vis. 2011, 11, 14. [Google Scholar] [CrossRef] [PubMed]
  21. Gruenefeld, U.; Stratmann, T.C.; Jung, J.; Lee, H.; Choi, J.; Nanda, A.; Heuten, W. Guiding Smombies: Augmenting Peripheral Vision with Low-Cost Glasses to Shift the Attention of Smartphone Users. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; ACM: New York, NY, USA, 2018; pp. 127–131, ISBN 9781538675922. [Google Scholar] [CrossRef]
  22. Rosenholtz, R. Capabilities and Limitations of Peripheral Vision. Annu. Rev. Vis. Sci. 2016, 2, 437–457. [Google Scholar] [CrossRef] [PubMed]
  23. Wilbrink, M.; Kelsch, J.; Schieben, A. Ambient light based interaction concept for an integrative driver assistance system—A driving simulator study. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2015 Annual Conference, Groningen, The Netherlands, 14–16 October 2015. [Google Scholar]
  24. Healey, C.G.; Enns, J.T. Attention and Visual Memory in Visualization and Computer Graphics. IEEE Trans. Vis. Comput. Graph. 2012, 18, 1170–1188. [Google Scholar] [CrossRef] [PubMed]
  25. Waldner, M.; Le Muzic, M.; Bernhard, M.; Purgathofer, W.; Viola, I. Attractive Flicker — Guiding Attention in Dynamic Narrative Visualizations. IEEE Trans. Vis. Comput. Graph. 2014, 20, 2456–2465. [Google Scholar] [CrossRef] [PubMed]
  26. Leung, J.; Cockburn, A. Design Framework for Interactive Highlighting Techniques. Found. Trends® Human-Comput. Interact. 2021, 14, 96–271. [Google Scholar] [CrossRef]
  27. Waldin, N.; Waldner, M.; Viola, I. Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images. Comput. Graph. Forum 2017, 36, 467–476. [Google Scholar] [CrossRef]
  28. Cobus, V.; Meyer, H.; Ananthanarayan, S.; Boll, S.; Heuten, W. Towards reducing alarm fatigue: Peripheral light pattern design for critical care alarms. In Proceedings of the NordiCHI’18: 10th Nordic Conference on Human-Computer Interaction, Oslo, Norway, 29 September–3 October 2018; ACM: New York, NY, USA, 2018; pp. 654–663, ISBN 9781450364379. [Google Scholar] [CrossRef]
  29. Matviienko, A.; Löcken, A.; El Ali, A.; Heuten, W.; Boll, S. NaviLight: Investigating ambient light displays for turn-by-turn navigation in cars. In Proceedings of the MobileHCI ’16: 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; ACM: New York, NY, USA, 2016; pp. 283–294, ISBN 9781450344081. [Google Scholar] [CrossRef]
  30. Matviienko, A.; Ananthanarayan, S.; Brewster, S.; Heuten, W.; Boll, S. Comparing unimodal lane keeping cues for child cyclists. In Proceedings of the MUM 2019: 18th International Conference on Mobile and Ubiquitous Multimedia, Pisa, Italy, 27–29 November 2019; ACM: New York, NY, USA, 2019; pp. 1–11, ISBN 9781450376242. [Google Scholar] [CrossRef]
  31. Niforatos, E.; Fedosov, A.; Elhart, I.; Langheinrich, M. Augmenting skiers’ peripheral perception. In Proceedings of the UbiComp ’17: 2017 ACM International Symposium on Wearable Computers, Maui, HI, USA, 11–15 September 2017; ACM: New York, NY, USA, 2017; pp. 114–121, ISBN 9781450351881. [Google Scholar] [CrossRef]
  32. Kiss, F.; Woźniak, P.W.; Scheerer, F.; Dominiak, J.; Romanowski, A.; Schmidt, A. Clairbuoyance: Improving Directional Perception for Swimmers. In Proceedings of the CHI ’19: 2019 CHI Conference on Human Factors in Computing Systems, Scotland, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019; pp. 1–12, ISBN 9781450359702. [Google Scholar] [CrossRef]
  33. Firouzian, A.; Kashimoto, Y.; Yamamoto, G.; Keranen, N.; Asghar, Z.; Pulli, P. Twinkle Megane: Evaluation of Near-Eye LED Indicators on Glasses for Simple and Smart Navigation in Daily Life. EAI Endorsed Trans. Pervasive Health Technol. 2017, 3, 153068. [Google Scholar] [CrossRef]
  34. Woodworth, J.W.; Yoshimura, A.; Lipari, N.G.; Borst, C.W. Design and Evaluation of Visual Cues for Restoring and Guiding Visual Attention in Eye-Tracked VR. In Proceedings of the 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Shanghai, China, 25–29 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 442–450, ISBN 9798350348392. [Google Scholar] [CrossRef]
  35. Schinke, T.; Henze, N.; Boll, S. Visualization of off-screen objects in mobile augmented reality. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, Lisbon, Portugal, 7–10 September 2010; pp. 313–316. [Google Scholar] [CrossRef]
  36. Kumaran, R.; Kim, Y.J.; Milner, A.E.; Bullock, T.; Giesbrecht, B.; Höllerer, T. The Impact of Navigation Aids on Search Performance and Object Recall in Wide-Area Augmented Reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–17, ISBN 9781450346559. [Google Scholar] [CrossRef]
  37. Matviienko, A.; Ananthanarayan, S.; El Ali, A.; Heuten, W.; Boll, S. NaviBike: Comparing Unimodal Navigation Cues for Child Cyclists. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Scotland, UK, 4–9 May 2019; pp. 1–12. [Google Scholar] [CrossRef]
  38. Gutwin, C.; Cockburn, A.; Coveney, A. Peripheral Popout: The Influence of Visual Angle and Stimulus Intensity on Popout Effects. In Proceedings of the CHI ’17: 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; ACM: New York, NY, USA, 2017; pp. 208–219, ISBN 9781450346559. [Google Scholar] [CrossRef]
  39. Ronchi, E.; Nilsson, D.; Kojić, S.; Eriksson, J.; Lovreglio, R.; Modig, H.; Walter, A.L. A virtual reality experiment on flashing lights at emergency exit portals for road tunnel evacuation. Fire Technol. 2016, 52, 623–647. [Google Scholar] [CrossRef]
  40. Lehtinen, V.; Oulasvirta, A.; Salovaara, A.; Nurmi, P. Dynamic tactile guidance for visual search tasks. In Proceedings of the UIST ’12: 25th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 7–10 October 2012; pp. 445–452. [Google Scholar] [CrossRef]
  41. Strasburger, H.; Rentschler, I.; Juttner, M. Peripheral vision and pattern recognition: A review. J. Vis. 2011, 11, 13. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Experimental environment and visual cue design: (a) Experimental environment. (b) Visual cue design.
Figure 2. Results of frontal visual area search: (a) Mean selection time. (b) Mean angle of head rotation. (c) Mean distance of head movement. (d) Mean distance of head movement along the X-axis. (e) Mean distance of head movement along the Y-axis. (f) Mean distance of head movement along the Z-axis.
Figure 3. Results of rear visual area search: (a) Mean reaction time. (b) Mean selection time. (c) Mean angle of head rotation. (d) Mean distance of head movement.
Figure 4. Mean preference ratings and NASA-TLX assessment: (a) Mean preference by visual feedback type. (b) Mean NASA-TLX score by visual feedback type.
Table 1. Performance comparison between HMD and HMD-Ambient Display.

Metric                              HMD Display    HMD-Ambient Display    Improvement (%)
Reaction Time (Rear Visual Area)    5.29 s         4.37 s                 17.39
User Preference                     5.25           7.50                   42.86
Effort                              40.41          21.67                  46.37