Search Results (10)

Search Parameters:
Keywords = auditory situational awareness

23 pages, 2229 KiB  
Article
Assessing the Impact of Risk-Warning eHMI Information Content on Pedestrian Mental Workload, Situation Awareness, and Gap Acceptance in Full and Partial eHMI Penetration Vehicle Platoons
by Fang Yang, Xu Sun, Jiming Bai, Bingjian Liu, Luis Felipe Moreno Leyva and Sheng Zhang
Appl. Sci. 2025, 15(15), 8250; https://doi.org/10.3390/app15158250 - 24 Jul 2025
Viewed by 220
Abstract
External Human–Machine Interfaces (eHMIs) enhance pedestrian safety in interactions with autonomous vehicles (AVs) by signaling crossing risk based on time-to-arrival (TTA), categorized as low, medium, or high. This study compared five eHMI configurations (single-level low, medium, high; two-level low-medium, medium-high) against a three-level (low-medium-high) configuration to assess their impact on pedestrians’ crossing decisions, mental workload (MW), and situation awareness (SA) in vehicle platoon scenarios under full and partial eHMI penetration. In a video-based experiment with 24 participants, crossing decisions were evaluated via temporal gap selection, MW via P300 event-related potentials in an auditory oddball task, and SA via the Situation Awareness Rating Technique. The three-level configuration outperformed single-level medium, single-level high, two-level low-medium, and two-level medium-high in gap acceptance, promoting safer decisions by rejecting smaller gaps and accepting larger ones, and exhibited lower MW than the two-level medium-high configuration under partial penetration. No SA differences were observed. Although the three-level configuration was generally appreciated, future research should optimize presentation to mitigate issues from rapid signal changes. Notably, the single-level low configuration showed comparable performance, suggesting a simpler alternative for real-world eHMI deployment. Full article
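As a pointer for readers unfamiliar with the mental-workload measure used in this study, the sketch below shows one common way to extract an auditory-oddball P300 from EEG with MNE-Python. The file name, trigger codes, electrode, and 250–500 ms window are illustrative assumptions, not details taken from the article.

import mne

# Minimal illustrative pipeline; file name and trigger codes are assumptions.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"standard": 1, "deviant": 2}          # assumed oddball trigger codes

epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)
evoked_deviant = epochs["deviant"].average()      # ERP to the rare (deviant) tones

# A smaller deviant P300 under concurrent load is commonly read as higher workload.
pz = evoked_deviant.copy().pick(["Pz"]).crop(0.25, 0.50)
p300_amplitude = pz.data.max()
print(f"P300 peak at Pz: {p300_amplitude * 1e6:.2f} µV")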

20 pages, 3003 KiB  
Article
Equipment Sounds’ Event Localization and Detection Using Synthetic Multi-Channel Audio Signal to Support Collision Hazard Prevention
by Kehinde Elelu, Tuyen Le and Chau Le
Buildings 2024, 14(11), 3347; https://doi.org/10.3390/buildings14113347 - 23 Oct 2024
Viewed by 1234
Abstract
Construction workplaces often face unforeseen collision hazards due to a decline in auditory situational awareness among on-foot workers, leading to severe injuries and fatalities. Previous studies that used auditory signals to prevent collision hazards focused on classical beamforming approaches to determine equipment sounds’ Direction of Arrival (DOA), and no existing framework implements a neural-network-based approach for both equipment sound classification and localization. This paper presents an innovative framework for sound classification and localization using multichannel sound datasets artificially synthesized in a virtual three-dimensional space; the simulation produced 10,000 multi-channel datasets from just fourteen single-sound-source recordings. Training uses a two-stage convolutional recurrent neural network (CRNN), in which the first stage learns multi-label sound event classes and the second stage estimates their DOA. The proposed framework achieves a low average DOA error of 30 degrees and a high F-score of 0.98, demonstrating accurate localization and classification of equipment near workers’ positions on the site. Full article
(This article belongs to the Special Issue Big Data Technologies in Construction Management)
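For context on the architecture this abstract describes, here is a minimal PyTorch sketch of a two-stage CRNN: stage 1 predicts per-frame multi-label sound event activity, stage 2 regresses a per-class DOA. The four-microphone input, 64 mel bands, layer sizes, and the fourteen classes (echoing the fourteen source recordings mentioned above) are placeholder assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CRNNSELD(nn.Module):
    """Illustrative two-stage CRNN for sound event detection (SED) and DOA."""
    def __init__(self, n_mics=4, n_mels=64, n_classes=14):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(n_mics, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.MaxPool2d((1, 4)),
        )
        rnn_in = 64 * (n_mels // 16)
        self.gru = nn.GRU(rnn_in, 128, num_layers=2, batch_first=True,
                          bidirectional=True)
        self.sed_head = nn.Linear(256, n_classes)   # stage 1: event activity
        self.doa_head = nn.Linear(256, n_classes)   # stage 2: azimuth per class

    def forward(self, x):                           # x: (batch, mics, frames, mels)
        z = self.cnn(x)                             # (batch, 64, frames, mels/16)
        z = z.permute(0, 2, 1, 3).flatten(2)        # (batch, frames, features)
        z, _ = self.gru(z)
        sed = torch.sigmoid(self.sed_head(z))       # per-frame class probabilities
        doa = torch.tanh(self.doa_head(z)) * 180.0  # azimuth in degrees
        return sed, doa

x = torch.randn(2, 4, 100, 64)                      # 2 clips, 4 mics, 100 frames
sed, doa = CRNNSELD()(x)                            # each of shape (2, 100, 14)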

19 pages, 12202 KiB  
Article
Does Cognitive Load Affect Measures of Consciousness?
by André Sevenius Nilsen, Johan Frederik Storm and Bjørn Erik Juel
Brain Sci. 2024, 14(9), 919; https://doi.org/10.3390/brainsci14090919 - 13 Sep 2024
Viewed by 1462
Abstract
Background: Developing and testing methods for reliably measuring the state of consciousness of individuals is important for both basic research and clinical purposes. In recent years, several promising measures of consciousness, grounded in theoretical developments, have been proposed. However, the degree to which these measures are affected by changes in brain activity unrelated to changes in the degree of consciousness has not been well tested. In this study, we examined whether several of these measures are modulated by the loading of cognitive resources. Methods: We recorded electroencephalography (EEG) from 12 participants in two conditions: (1) while passively attending to sensory stimuli related to the measures and (2) during increased cognitive load consisting of a demanding working memory task. We investigated whether a set of proposed objective EEG-based measures of consciousness differed between the passive and the cognitively demanding conditions. Results: The P300b event-related potential (sensitive to conscious awareness of deviance from an expected pattern in auditory stimuli) was significantly affected by concurrent performance on a working memory task, whereas various measures based on signal diversity of spontaneous and perturbed EEG were not. Conclusion: Because signal diversity-based measures of spontaneous or perturbed EEG are not sensitive to the degree of cognitive load, we suggest that these measures may be used in clinical situations where attention, sensory processing, or command following might be impaired. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
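For readers unfamiliar with the "signal diversity" measures contrasted with the P300b above, the toy sketch below computes a simple Lempel-Ziv style diversity score on a median-binarized signal. Published EEG measures such as LZc typically use the LZ76 parsing on multichannel data; this simpler dictionary-based parse is for illustration only.

import numpy as np

def lz_complexity(signal):
    """Binarize a 1-D signal around its median and count the distinct phrases
    found by a dictionary-based (LZ78-style) Lempel-Ziv parse."""
    binary = (np.asarray(signal) > np.median(signal)).astype(int)
    s = "".join(map(str, binary))
    phrases, phrase = set(), ""
    for bit in s:
        phrase += bit
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases)

# A noisy (diverse) signal yields a higher phrase count than a regular oscillation.
rng = np.random.default_rng(0)
print(lz_complexity(rng.standard_normal(2000)))                  # diverse signal
print(lz_complexity(np.sin(np.linspace(0, 40 * np.pi, 2000))))   # regular signal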

24 pages, 72562 KiB  
Article
Enhancing Safety in Autonomous Vehicles: The Impact of Auditory and Visual Warning Signals on Driver Behavior and Situational Awareness
by Ann Huang, Shadi Derakhshan, John Madrid-Carvajal, Farbod Nosrat Nezami, Maximilian Alexander Wächter, Gordon Pipa and Peter König
Vehicles 2024, 6(3), 1613-1636; https://doi.org/10.3390/vehicles6030076 - 8 Sep 2024
Cited by 3 | Viewed by 3557
Abstract
Semi-autonomous vehicles (AVs) enable drivers to engage in non-driving tasks but require them to be ready to take control during critical situations. This “out-of-the-loop” problem demands a quick transition to active information processing, raising safety concerns and anxiety. Multimodal signals in AVs aim to deliver take-over requests and facilitate driver–vehicle cooperation. However, the effectiveness of auditory, visual, or combined signals in improving situational awareness and reaction time for safe maneuvering remains unclear. This study investigates how signal modalities affect drivers’ behavior using virtual reality (VR). We measured drivers’ reaction times from signal onset to take-over response and gaze dwell time for situational awareness across twelve critical events. Furthermore, we assessed self-reported anxiety and trust levels using the Autonomous Vehicle Acceptance Model questionnaire. The results showed that visual signals significantly reduced reaction times, whereas auditory signals did not. Additionally, any warning signal, together with seeing driving hazards, increased successful maneuvering. The analysis of gaze dwell time on driving hazards revealed that audio and visual signals improved situational awareness. Lastly, warning signals reduced anxiety and increased trust. These results highlight the distinct effectiveness of signal modalities in improving driver reaction times, situational awareness, and perceived safety, mitigating the “out-of-the-loop” problem and fostering human–vehicle cooperation. Full article
(This article belongs to the Topic Vehicle Safety and Automated Driving)

17 pages, 669 KiB  
Article
Persona-PhysioSync AV: Personalized Interaction through Personality and Physiology Monitoring in Autonomous Vehicles
by Jonathan Giron, Yaron Sela, Leonid Barenboim, Gail Gilboa-Freedman and Yair Amichai-Hamburger
Sensors 2024, 24(6), 1977; https://doi.org/10.3390/s24061977 - 20 Mar 2024
Cited by 4 | Viewed by 2233
Abstract
The emergence of autonomous vehicles (AVs) marks a transformative leap in transportation technology. Central to the success of AVs is ensuring user safety, but this endeavor is accompanied by the challenge of establishing trust and acceptance of this novel technology. The traditional “one size fits all” approach to AVs may limit their broader societal, economic, and cultural impact. Here, we introduce the Persona-PhysioSync AV (PPS-AV). It adopts a comprehensive approach by combining personality traits with physiological and emotional indicators to personalize the AV experience to enhance trust and comfort. A significant aspect of the PPS-AV framework is its real-time monitoring of passenger engagement and comfort levels within AVs. It considers a passenger’s personality traits and their interaction with physiological and emotional responses. The framework can alert passengers when their engagement drops to critical levels or when they exhibit low situational awareness, ensuring they regain attentiveness promptly, especially during Take-Over Request (TOR) events. This approach fosters a heightened sense of Human–Vehicle Interaction (HVI), thereby building trust in AV technology. While the PPS-AV framework currently provides a foundational level of state diagnosis, future developments are expected to include interaction protocols that utilize interfaces like haptic alerts, visual cues, and auditory signals. In summary, the PPS-AV framework is a pivotal tool for the future of autonomous transportation. By prioritizing safety, comfort, and trust, it aims to make AVs not just a mode of transport but a personalized and trusted experience for passengers, accelerating the adoption and societal integration of autonomous vehicles. Full article

29 pages, 4887 KiB  
Review
Augmented Hearing of Auditory Safety Cues for Construction Workers: A Systematic Literature Review
by Khang Dang, Kehinde Elelu, Tuyen Le and Chau Le
Sensors 2022, 22(23), 9135; https://doi.org/10.3390/s22239135 - 24 Nov 2022
Cited by 9 | Viewed by 3555 | Correction
Abstract
Safety-critical sounds at job sites play an essential role in construction safety, but hearing capability often declines due to the use of hearing protection and the complicated nature of construction noise. Thus, preserving or augmenting the auditory situational awareness of construction workers has become a critical need. To enable further advances in this area, it is necessary to synthesize the state-of-the-art auditory signal processing techniques and their implications for auditory situational awareness (ASA) and to identify future research needs. This paper presents a critical review of recent publications on acoustic signal processing techniques and suggests research gaps that merit further investigation to fully support construction workers’ ASA of hazardous situations. The results from the content analysis show that research on ASA in the context of construction safety is still in its early stage, with inadequate AI-based sound sensing methods available. Little research has been undertaken to help individual construction workers recognize important signals that may be blocked by or mixed with complex ambient noise. Further research on auditory situational awareness technology is needed to support detecting and separating important acoustic safety cues from complex ambient sounds. More work is also needed to incorporate context information into sound-based hazard detection and to investigate human factors affecting the collaboration between workers and AI assistants in sensing the safety cues of hazards. Full article
(This article belongs to the Special Issue Machine Learning and Signal Processing Based Acoustic Sensors)

24 pages, 5115 KiB  
Article
“Attention! A Door Could Open.”—Introducing Awareness Messages for Cyclists to Safely Evade Potential Hazards
by Tamara von Sawitzky, Thomas Grauschopf and Andreas Riener
Multimodal Technol. Interact. 2022, 6(1), 3; https://doi.org/10.3390/mti6010003 - 31 Dec 2021
Cited by 31 | Viewed by 5703
Abstract
Numerous statistics show that cyclists are frequently involved in road traffic accidents, often with serious outcomes. One potential hazard of cycling, especially in cities, is “dooring”—passing parked vehicles that still have occupants inside. These occupants could open the vehicle door unexpectedly in the cyclist’s path—requiring a quick evasive response by the cyclist to avoid a collision. Dooring is very difficult to anticipate; as a possible solution, we propose a system that notifies the cyclist of opening doors based on a networked intelligent transportation infrastructure. In a user study with a bicycle simulator (N = 24), we examined the effects of three user interface designs compared to a baseline (no notifications) on cycling behavior (speed and lateral position), perceived safety, and ease of use. Awareness messages (either visual message, visual message + auditory icon, or visual + voice message) were displayed on a smart bicycle helmet at different times before passing a parked, still-occupied vehicle. Our participants found the hazard notifications appealing and very easy to understand, and felt that the alerts could help them navigate traffic more safely. Those concepts that (additionally) used auditory icons or voice messages were preferred. In addition, the lateral distance increased significantly when a potentially opening door was indicated. In these situations, cyclists were able to safely pass the parked vehicle without braking. In summary, we are convinced that notification systems, such as the one presented here, are an important component for increasing road safety, especially for vulnerable road users. Full article
(This article belongs to the Special Issue User Interfaces for Cyclists)

19 pages, 2800 KiB  
Article
Multimodal Warnings in Remote Operation: The Case Study on Remote Driving
by Pekka Kallioniemi, Alisa Burova, John Mäkelä, Tuuli Keskinen, Kimmo Ronkainen, Ville Mäkelä, Jaakko Hakulinen and Markku Turunen
Multimodal Technol. Interact. 2021, 5(8), 44; https://doi.org/10.3390/mti5080044 - 12 Aug 2021
Cited by 6 | Viewed by 4565
Abstract
Developments in sensor technology, artificial intelligence, and network technologies like 5G have made remote operation a valuable method of controlling various types of machinery. Among its benefits is the ability to access hazardous environments. The major limitation of remote operation is the lack of proper sensory feedback from the machine, which in turn negatively affects situational awareness and, consequently, may put remote operations at risk. This article explores how to improve situational awareness via multimodal feedback (visual, auditory, and haptic) and studies how it can be utilized to communicate warnings to remote operators. To reach our goals, we conducted a controlled, within-subjects experiment in eight conditions with twenty-four participants on a simulated remote driving system. Additionally, we gathered further insights with a UX questionnaire and semi-structured interviews. The gathered data showed that the use of multimodal feedback positively affected situational awareness when driving remotely. Our findings indicate that added haptic and visual feedback together were considered the best combination for communicating the slipperiness of the road. We also found that the feeling of presence is an important aspect of remote driving tasks, and one requested especially by those with more experience operating real heavy machinery. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 575 KiB  
Review
Effects of User Interfaces on Take-Over Performance: A Review of the Empirical Evidence
by Soyeon Kim, René van Egmond and Riender Happee
Information 2021, 12(4), 162; https://doi.org/10.3390/info12040162 - 10 Apr 2021
Cited by 34 | Viewed by 5052
Abstract
In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly studied the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance in higher automation levels allowing drivers to take their eyes off the road (SAE3 and SAE4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated by the auditory modality with high urgency levels. On the other hand, take-over requests directly displayed on non-driving-related task devices and augmented reality do not affect take-over time. Additional explanations of the take-over situation, information about the surroundings and the vehicle while driving, and take-over guidance were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UI, but a number of studies showed no significant benefits, and a few studies showed negative effects of advanced UI, which may be associated with information overload. The occurrence of positive and negative results of similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios. Full article

34 pages, 9260 KiB  
Article
A Vision-Based Wayfinding System for Visually Impaired People Using Situation Awareness and Activity-Based Instructions
by Eunjeong Ko and Eun Yi Kim
Sensors 2017, 17(8), 1882; https://doi.org/10.3390/s17081882 - 16 Aug 2017
Cited by 38 | Viewed by 8665
Abstract
A significant challenge faced by visually impaired people is ‘wayfinding’, which is the ability to find one’s way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. By analyzing streaming images, the proposed system first classifies the current situation of the user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques. The system estimates the user’s motion with two inertial sensors and records their trajectory toward the destination, which is also used to guide the return route after the destination is reached. To efficiently convey the recognized results using an auditory interface, activity-based instructions are generated that guide the user through a series of movements along a route. To assess the effectiveness of the proposed system, experiments were conducted in several indoor environments, in which the situation awareness accuracy was 90% and the object detection false alarm rate was 0.016. In addition, our field test results demonstrate that users can locate their paths with an accuracy of 97%. Full article
(This article belongs to the Special Issue Smartphone-based Pedestrian Localization and Navigation)
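To make the pipeline described above concrete, here is a schematic Python sketch of the control flow only: situation classification, situation-dependent object detection, inertial motion estimation, and activity-based instruction generation. The three component functions are stubs standing in for the paper's vision and sensing modules; all names, labels, and the sample instruction are invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    bearing_deg: float                     # bearing relative to the camera heading

def classify_situation(frame) -> str:
    # Stub for the vision-based situation classifier (e.g. corridor / junction / doorway).
    return "corridor"

def detect_context_objects(frame, situation: str) -> List[Detection]:
    # Only objects relevant to the current situation are searched for.
    relevant = {"corridor": ["door", "obstacle"], "junction": ["sign"]}
    return [Detection(label, 15.0) for label in relevant.get(situation, [])]

def estimate_heading(accel, gyro) -> float:
    # Stub for the two-sensor inertial motion estimate.
    return 0.0

def activity_instruction(situation: str, detections: List[Detection]) -> str:
    # Turn the recognized context into a movement-level instruction for audio output.
    if any(d.label == "door" for d in detections):
        return "Walk forward five steps; the door is slightly to your right."
    return f"Continue straight along the {situation}."

def wayfinding_step(frame, accel, gyro) -> str:
    situation = classify_situation(frame)
    detections = detect_context_objects(frame, situation)
    _heading = estimate_heading(accel, gyro)   # logged to rebuild the return route
    return activity_instruction(situation, detections)

print(wayfinding_step(frame=None, accel=[0, 0, 9.8], gyro=[0, 0, 0]))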
