Search Results (30)

Search Parameters:
Keywords = fatigue driving recognition

29 pages, 4405 KiB  
Article
Pupil Detection Algorithm Based on ViM
by Yu Zhang, Changyuan Wang, Pengbo Wang and Pengxiang Xue
Sensors 2025, 25(13), 3978; https://doi.org/10.3390/s25133978 - 26 Jun 2025
Viewed by 323
Abstract
Pupil detection is a key technology in fields such as human–computer interaction, fatigue driving detection, and medical diagnosis. Existing pupil detection algorithms still struggle to remain robust under variable lighting conditions and occlusion. In this paper, we propose a novel pupil detection algorithm, ViMSA, based on the ViM model. The algorithm introduces weighted feature fusion so that the model can adaptively learn the contribution of different feature patches to the pupil detection result; combines ViM with the MSA (multi-head self-attention) mechanism to integrate global features and improve detection accuracy and robustness; and uses the FFT (Fast Fourier Transform) to convert the time-domain vector outer product in MSA into a frequency-domain dot product, reducing the model's computational complexity and improving its detection efficiency. ViMSA was trained and tested on nearly 135,000 pupil images from 30 different datasets, demonstrating exceptional generalization capability. The experimental results show that ViMSA achieves 99.6% detection accuracy at five pixels with an RMSE of 1.67 pixels and a processing speed exceeding 100 FPS, meeting real-time monitoring requirements for applications including operation under variable and uneven lighting conditions, assistive technology (enabling communication with neuro-motor disorder patients through pupil recognition), computer gaming, and the automotive industry (enhancing traffic safety by monitoring drivers’ cognitive states). Full article
(This article belongs to the Section Intelligent Sensors)
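The FFT step this abstract describes rests on the convolution theorem: an O(n²) time-domain operation becomes an element-wise frequency-domain product after an O(n log n) transform. A minimal numpy sketch of that equivalence (illustrative only; the function names are ours, not the ViMSA code):

```python
import numpy as np

def circular_convolve_direct(a, b):
    """O(n^2) circular convolution computed term by term in the time domain."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)])

def circular_convolve_fft(a, b):
    """Same result via the convolution theorem: FFT -> element-wise product -> inverse FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)
# Both routes agree to numerical precision, but the FFT route is O(n log n).
assert np.allclose(circular_convolve_direct(a, b), circular_convolve_fft(a, b))
```

The same identity is what lets an attention-style outer product be traded for a frequency-domain dot product, which is the source of the claimed speed-up.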

22 pages, 6365 KiB  
Article
Broken Wire Detection Based on TDFWNet and Its Application in the FAST Project
by Wanxu Zhu, Zixu Zhong, Sha Cheng, Qingwei Li, Rui Yao and Hui Li
Electronics 2025, 14(13), 2544; https://doi.org/10.3390/electronics14132544 - 24 Jun 2025
Viewed by 237
Abstract
This research proposes a wire-breakage detection method based on a Time-Domain Feature Weighted Network (TDFWNet) to address the challenging issue of wire-breakage detection in the feed source cabin drive cables of the Five-hundred-meter Aperture Spherical radio Telescope (FAST). The study begins with a temporal domain morphology analysis, revealing significant differences between wire-breakage signals and interference signals in key characteristic parameters such as waveform factor, pulse factor, and kurtosis. These parameters are thus employed as the basis for feature input, and their corresponding feature probabilities are calculated to provide prior feature weights for the model. The TDFWNet model integrates the feature learning capability of a Convolutional Neural Network (CNN) with temporal domain feature analysis using the feature probabilities derived from key temporal domain characteristic parameters as weight inputs to enhance the sensitivity and recognition accuracy of wire-breakage signals. Furthermore, the research team has developed a data augmentation method based on Feature-Constrained Dynamic Time Warping (FCDTW). This method processes the original wire-breakage signals to generate high-quality augmented data, thereby improving the model’s ability to recognize wire-breakage signals. Ultimately, the TDFWNet outperforms traditional CNN models by 1.5%, 2.0%, 1.8%, and 16.6% in precision, recall, F1 score, and accuracy, respectively. In practical engineering applications, this method demonstrated excellent stability and practicality in three domestic FAST drive cable-bending fatigue tests. The detected suspected wire-breakage signals were highly consistent with the results of post-fatigue test disassembly inspections, effectively supporting the wire-breakage detection requirements in actual engineering scenarios. Full article
(This article belongs to the Section Computer Science & Engineering)
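The waveform factor, pulse factor, and kurtosis used here as prior feature inputs are standard time-domain statistics: RMS over mean absolute value, peak over mean absolute value, and the normalized fourth central moment. A numpy sketch of these definitions (our own formulation, not the paper's code):

```python
import numpy as np

def time_domain_features(x):
    """Waveform factor, pulse factor, and kurtosis of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    mu, sigma = x.mean(), x.std()
    return {
        "waveform_factor": rms / abs_mean,                 # RMS / mean |x|
        "pulse_factor": peak / abs_mean,                   # peak / mean |x|
        "kurtosis": np.mean((x - mu) ** 4) / sigma ** 4,   # heavy-tailedness
    }

# An impulsive event (like a wire break) drives the pulse factor far above
# that of a steady signal, which is why these statistics separate the classes.
square = np.tile([1.0, -1.0], 50)
impulse = np.zeros(100); impulse[0] = 10.0
assert time_domain_features(impulse)["pulse_factor"] > time_domain_features(square)["pulse_factor"]
```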

14 pages, 2535 KiB  
Article
Can Anthropomorphic Interfaces Improve the Ergonomics and Safety Performance of Human–Machine Collaboration in Multitasking Scenarios?—An Example of Human–Machine Co-Driving in High-Speed Trains
by Yunan Jiang and Jinyi Zhi
Biomimetics 2025, 10(5), 307; https://doi.org/10.3390/biomimetics10050307 - 11 May 2025
Viewed by 474
Abstract
High-speed trains are some of the most important transportation vehicles requiring human–computer collaboration. This study investigated the effects of different types of icons on recognition performance and cognitive load during frequent observation and sudden takeover tasks in high-speed trains. The results of this study can be used to improve the efficiency of human–computer collaboration tasks and driving safety. In this study, 48 participants were selected for a simulated driving experiment on a high-speed train. The recognition reaction time, operation completion time, number of recognition errors, number of operation errors, SUS scale, and NASA-TLX questionnaire for the icons were all analyzed using analysis of variance (ANOVA) and the nonparametric Mann–Whitney U test. The results show that anthropomorphic icons can reduce the drivers’ visual fatigue and mental load in frequent observation tasks due to the anthropomorphic facial features attracting driver attention through simple lines and improving visual search efficiency. However, for the sudden takeover human–computer collaboration task, the facial features of the anthropomorphic icons were not recognized in a short period of time. Additionally, due to the positive emotions produced by the facial features, the drivers did not perceive the suddenness and danger of the sudden takeover human–computer collaboration task, resulting in the traditional icons being more capable of arousing the drivers’ alertness and helping them take over the task quickly. At the same time, neither type of icon triggered misrecognition or operation for sufficiently skilled drivers. These research results can provide guidance for the design of icons in human–computer collaborative interfaces for different types of driving tasks in high-speed trains, which can help improve the recognition speed, reaction speed, and safety of drivers. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
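The nonparametric Mann–Whitney U comparison mentioned above can be run with scipy; the workload scores below are invented purely for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical NASA-TLX workload scores for two icon types (illustrative data)
anthropomorphic = np.array([35, 40, 38, 42, 37, 39, 41, 36])
traditional     = np.array([48, 52, 47, 50, 49, 51, 46, 53])

# Two-sided test: does workload differ between icon types?
stat, p = mannwhitneyu(anthropomorphic, traditional, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # a small p suggests the groups differ
```

With fully separated samples like these, U for the first group is 0 and the exact p-value is well below 0.05.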

29 pages, 2031 KiB  
Article
Monitoring and Analyzing Driver Physiological States Based on Automotive Electronic Identification and Multimodal Biometric Recognition Methods
by Shengpei Zhou, Nanfeng Zhang, Qin Duan, Xiaosong Liu, Jinchao Xiao, Li Wang and Jingfeng Yang
Algorithms 2024, 17(12), 547; https://doi.org/10.3390/a17120547 - 2 Dec 2024
Cited by 2 | Viewed by 1312
Abstract
In an intelligent driving environment, monitoring the physiological state of drivers is crucial for ensuring driving safety. This paper proposes a method for monitoring and analyzing driver physiological characteristics by combining electronic vehicle identification (EVI) with multimodal biometric recognition. The method aims to efficiently monitor the driver’s heart rate, breathing frequency, emotional state, and fatigue level, providing real-time feedback to intelligent driving systems to enhance driving safety. First, considering the precision, adaptability, and real-time capabilities of current physiological signal monitoring devices, an intelligent cushion integrating MEMSs (Micro-Electro-Mechanical Systems) and optical sensors is designed. This cushion collects heart rate and breathing frequency data in real time without disrupting the driver, while an electrodermal activity monitoring system captures electromyography data. The sensor layout is optimized to accommodate various driving postures, ensuring accurate data collection. The EVI system assigns a unique identifier to each vehicle, linking it to the physiological data of different drivers. By combining the driver physiological data with the vehicle’s operational environment data, a comprehensive multi-source data fusion system is established for a driving state evaluation. Secondly, a deep learning model is employed to analyze physiological signals, specifically combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The CNN extracts spatial features from the input signals, while the LSTM processes time-series data to capture the temporal characteristics. This combined model effectively identifies and analyzes the driver’s physiological state, enabling timely anomaly detection. The method was validated through real-vehicle tests involving multiple drivers, where extensive physiological and driving behavior data were collected. Experimental results show that the proposed method significantly enhances the accuracy and real-time performance of physiological state monitoring. These findings highlight the effectiveness of combining EVI with multimodal biometric recognition, offering a reliable means for assessing driver states in intelligent driving systems. Furthermore, the results emphasize the importance of personalizing adjustments based on individual driver differences for more effective monitoring. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

15 pages, 15634 KiB  
Article
Capacitance-Based Untethered Fatigue Driving Recognition Under Various Light Conditions
by Cheng Zeng and Haipeng Wang
Sensors 2024, 24(23), 7633; https://doi.org/10.3390/s24237633 - 29 Nov 2024
Viewed by 756
Abstract
This study proposes a capacitance-based fatigue driving recognition method. The proposed method encompasses four principal phases: signal acquisition, pre-processing, blink detection, and fatigue driving recognition. A measurement circuit based on the FDC2214 is designed for signal acquisition. The acquired signal is first pre-processed to filter out noise. The blink detection algorithm is then employed to recognize the characteristics of human blinks: eye-closing time, eye-opening time, and idle time. Lastly, a BP (back-propagation) neural network calculates the fatigue driving scale in the fatigue driving recognition stage. Experiments under various working and light conditions are conducted to verify the effectiveness of the proposed method. The results show that the proposed method achieves high fatigue driving recognition accuracy (92%) under various light conditions. Full article
(This article belongs to the Section Optical Sensors)
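Blink characteristics such as eye-closing time can be read off a thresholded 1-D trace by locating supra-threshold runs. A sketch, assuming larger capacitance means the eye is closed (the polarity and threshold are our assumptions, not the paper's):

```python
import numpy as np

def blink_intervals(signal, threshold, fs):
    """Return (start, end) times in seconds of runs where the signal exceeds
    the threshold, treating each supra-threshold run as an eye-closed period."""
    closed = np.asarray(signal) > threshold
    edges = np.diff(closed.astype(int))
    starts = np.flatnonzero(edges == 1) + 1    # rising edges: eye closes
    ends = np.flatnonzero(edges == -1) + 1     # falling edges: eye opens
    if closed[0]:
        starts = np.r_[0, starts]              # signal begins already closed
    if closed[-1]:
        ends = np.r_[ends, len(closed)]        # signal ends still closed
    return [(s / fs, e / fs) for s, e in zip(starts, ends)]
```

From these intervals, eye-closing time is each interval's duration, and idle time is the gap between consecutive intervals; those durations would then feed the classifier.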

18 pages, 5504 KiB  
Article
Fatigue Driving State Detection Based on Spatial Characteristics of EEG Signals
by Wenwen Chang, Wenchao Nie, Renjie Lv, Lei Zheng, Jialei Lu and Guanghui Yan
Electronics 2024, 13(18), 3742; https://doi.org/10.3390/electronics13183742 - 20 Sep 2024
Cited by 2 | Viewed by 4156
Abstract
Monitoring the driver’s physical and mental state with wearable EEG acquisition equipment, and in particular detecting and warning of fatigue early, is a key issue in brain–computer interface research for human–machine intelligent fusion driving. Comparing and analyzing the waking (alert) and fatigue states using EEG data collected during simulated driving, this paper proposes a brain functional network construction method based on the phase locking value (PLV) and phase lag index (PLI), studies the relationship between brain regions, and quantitatively analyzes the network structure. The characteristic parameters of the brain functional network that differ significantly between fatigue states are screened out to form feature vectors, which are then combined with machine learning algorithms to complete classification and identification. The experimental results show that this method can effectively distinguish between alertness and fatigue states. The recognition accuracy for all 52 subjects is above 70%, with the highest reaching 89.5%. Brain network topology analysis showed that connectivity between brain regions weakened under fatigue, especially under the PLV method, and the phase synchronization between the delta and theta frequency bands weakened significantly. The research results provide a reference for understanding the interdependence of brain regions under fatigue conditions and for the development of fatigue driving detection systems. Full article
(This article belongs to the Section Bioelectronics)
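The PLV between two channels is the magnitude of the mean unit phasor of their instantaneous phase difference, with phases taken from the analytic (Hilbert) signal. A compact sketch with scipy (illustrative, not the authors' pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length 1-D signals:
    |mean(exp(i*(phi_x - phi_y)))|, phases from the analytic (Hilbert) signal.
    Returns 1 for perfectly phase-locked signals, near 0 for unrelated ones."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Two sinusoids at the same frequency with a fixed offset are fully locked.
t = np.linspace(0, 2, 2000, endpoint=False)
assert plv(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t + 0.8)) > 0.95
```

Computing this value per channel pair and per frequency band yields the adjacency weights of the brain functional network described above.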

35 pages, 6064 KiB  
Article
Multi-Index Driver Drowsiness Detection Method Based on Driver’s Facial Recognition Using Haar Features and Histograms of Oriented Gradients
by Eduardo Quiles-Cucarella, Julio Cano-Bernet, Lucas Santos-Fernández, Carlos Roldán-Blay and Carlos Roldán-Porta
Sensors 2024, 24(17), 5683; https://doi.org/10.3390/s24175683 - 31 Aug 2024
Cited by 6 | Viewed by 2231
Abstract
It is estimated that 10% to 20% of road accidents are related to fatigue, with accidents caused by drowsiness up to twice as deadly as those caused by other factors. In order to reduce these numbers, strategies such as advertising campaigns, the implementation of driving recorders in vehicles used for road transport of goods and passengers, or the use of drowsiness detection systems in cars have been implemented. Within the scope of the latter area, the technologies used are diverse. They can be based on the measurement of signals such as steering wheel movement, vehicle position on the road, or driver monitoring. Driver monitoring is a technology that has been exploited little so far and can be implemented in many different approaches. This work addresses the evaluation of a multidimensional drowsiness index based on the recording of facial expressions, gaze direction, and head position and studies the feasibility of its implementation in a low-cost electronic package. Specifically, the aim is to determine the driver’s state by monitoring their facial expressions, such as the frequency of blinking, yawning, eye-opening, gaze direction, and head position. For this purpose, an algorithm capable of detecting drowsiness has been developed. Two approaches are compared: Facial recognition based on Haar features and facial recognition based on Histograms of Oriented Gradients (HOG). The implementation has been carried out on a Raspberry Pi, a low-cost device that allows the creation of a prototype that can detect drowsiness and interact with peripherals such as cameras or speakers. The results show that the proposed multi-index methodology performs better in detecting drowsiness than algorithms based on one-index detection. Full article
(This article belongs to the Special Issue Sensors and Systems for Automotive and Road Safety (Volume 2))
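A common way to quantify blinking and eye-opening from facial landmarks is the eye aspect ratio (EAR) of Soukupová and Čech; the abstract does not state the exact formula used here, so treat this as a generic sketch rather than the paper's method:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered p1..p6 as in the common
    68-point facial landmark scheme: (|p2-p6| + |p3-p5|) / (2 |p1-p4|).
    The value drops sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

# Toy landmark sets: an open eye vs. a nearly closed one.
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
assert eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye)
```

Tracking this ratio per frame gives the blink-frequency and eye-opening indices that feed the multidimensional drowsiness score.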

15 pages, 2169 KiB  
Article
FPIRST: Fatigue Driving Recognition Method Based on Feature Parameter Images and a Residual Swin Transformer
by Weichu Xiao, Hongli Liu, Ziji Ma, Weihong Chen and Jie Hou
Sensors 2024, 24(2), 636; https://doi.org/10.3390/s24020636 - 19 Jan 2024
Cited by 2 | Viewed by 1598
Abstract
Fatigue driving is a serious threat to road safety, which is why accurately identifying fatigue driving behavior and warning drivers in time are of great significance in improving traffic safety. However, accurately recognizing fatigue driving is still challenging due to large intra-class variations in facial expression, continuity of behaviors, and illumination conditions. A fatigue driving recognition method based on feature parameter images and a residual Swin Transformer is proposed in this paper. First, the face region is detected through spatial pyramid pooling and a multi-scale feature output module. Then, a multi-scale facial landmark detector is used to locate 23 key points on the face. The aspect ratios of the eyes and mouth are calculated based on the coordinates of these key points, and a feature parameter matrix for fatigue driving recognition is obtained. Finally, the feature parameter matrix is converted into an image, and the residual Swin Transformer network is presented to recognize fatigue driving. Experimental results on the HNUFD dataset show that the proposed method achieves an accuracy of 96.512%, thus outperforming state-of-the-art methods. Full article
(This article belongs to the Section Sensing and Imaging)

30 pages, 2463 KiB  
Article
IoT-Assisted Automatic Driver Drowsiness Detection through Facial Movement Analysis Using Deep Learning and a U-Net-Based Architecture
by Shiplu Das, Sanjoy Pratihar, Buddhadeb Pradhan, Rutvij H. Jhaveri and Francesco Benedetto
Information 2024, 15(1), 30; https://doi.org/10.3390/info15010030 - 2 Jan 2024
Cited by 19 | Viewed by 6859
Abstract
The main purpose of a detection system is to ascertain the state of an individual’s eyes, whether they are open and alert or closed, and then alert them to their level of fatigue. As a result of this, they will refrain from approaching an accident site. In addition, it would be advantageous for people to be promptly alerted in real time before the occurrence of any calamitous events affecting multiple people. The implementation of Internet-of-Things (IoT) technology in driver action recognition has become imperative due to the ongoing advancements in Artificial Intelligence (AI) and deep learning (DL) within Advanced Driver Assistance Systems (ADAS), which are significantly transforming the driving encounter. This work presents a deep learning model that utilizes a CNN–Long Short-Term Memory network to detect driver sleepiness. We employ different algorithms on datasets such as EM-CNN, VGG-16, GoogLeNet, AlexNet, ResNet50, and CNN-LSTM. The aforementioned algorithms are used for classification, and it is evident that the CNN-LSTM algorithm exhibits superior accuracy compared to alternative deep learning algorithms. The model is provided with video clips of a certain period, and it distinguishes the clip by analyzing the sequence of motions exhibited by the driver in the video. The key objective of this work is to promote road safety by notifying drivers when they exhibit signs of drowsiness, minimizing the probability of accidents caused by fatigue-related disorders. It would help in developing an ADAS that is capable of detecting and addressing driver tiredness proactively. This work intends to limit the potential dangers associated with drowsy driving, hence promoting enhanced road safety and a decrease in accidents caused by fatigue-related variables. This work aims to achieve high efficacy while maintaining a non-intrusive nature. This work endeavors to offer a non-intrusive solution that may be seamlessly integrated into current automobiles, hence enhancing accessibility to a broader spectrum of drivers through the utilization of facial movement analysis employing CNN-LSTM and a U-Net-based architecture. Full article

18 pages, 3182 KiB  
Article
EEG and ECG-Based Multi-Sensor Fusion Computing for Real-Time Fatigue Driving Recognition Based on Feedback Mechanism
by Ling Wang, Fangjie Song, Tie Hua Zhou, Jiayu Hao and Keun Ho Ryu
Sensors 2023, 23(20), 8386; https://doi.org/10.3390/s23208386 - 11 Oct 2023
Cited by 18 | Viewed by 4841
Abstract
A variety of technologies that could enhance driving safety are being actively explored, with the aim of reducing traffic accidents by accurately recognizing the driver’s state. In this field, three mainstream detection methods have been widely applied, namely visual monitoring, physiological indicator monitoring and vehicle behavior analysis. In order to achieve more accurate driver state recognition, we adopted a multi-sensor fusion approach. We monitored driver physiological signals, electroencephalogram (EEG) signals and electrocardiogram (ECG) signals to determine fatigue state, while an in-vehicle camera observed driver behavior and provided more information for driver state assessment. In addition, an outside camera was used to monitor vehicle position to determine whether there were any driving deviations due to distraction or fatigue. After a series of experimental validations, our research results showed that our multi-sensor approach exhibited good performance for driver state recognition. This study could provide a solid foundation and development direction for future in-depth driver state recognition research, which is expected to further improve road safety. Full article
(This article belongs to the Special Issue Advanced-Sensors-Based Emotion Sensing and Recognition)

26 pages, 7487 KiB  
Article
Drivers’ Comprehensive Emotion Recognition Based on HAM
by Dongmei Zhou, Yongjian Cheng, Luhan Wen, Hao Luo and Ying Liu
Sensors 2023, 23(19), 8293; https://doi.org/10.3390/s23198293 - 7 Oct 2023
Cited by 7 | Viewed by 2964
Abstract
Negative emotions of drivers may lead to some dangerous driving behaviors, which in turn lead to serious traffic accidents. However, most of the current studies on driver emotions use a single modality, such as EEG, eye trackers, and driving data. In complex situations, a single modality may not be able to fully consider a driver’s complete emotional characteristics and provides poor robustness. In recent years, some studies have used multimodal thinking to monitor single emotions such as driver fatigue and anger, but in actual driving environments, negative emotions such as sadness, anger, fear, and fatigue all have a significant impact on driving safety. However, there are very few research cases using multimodal data to accurately predict drivers’ comprehensive emotions. Therefore, based on the multi-modal idea, this paper aims to improve drivers’ comprehensive emotion recognition. By combining the three modalities of a driver’s voice, facial image, and video sequence, the six classification tasks of drivers’ emotions are performed as follows: sadness, anger, fear, fatigue, happiness, and emotional neutrality. In order to accurately identify drivers’ negative emotions to improve driving safety, this paper proposes a multi-modal fusion framework based on the CNN + Bi-LSTM + HAM to identify driver emotions. The framework fuses feature vectors of driver audio, facial expressions, and video sequences for comprehensive driver emotion recognition. Experiments have proved the effectiveness of the multi-modal data proposed in this paper for driver emotion recognition, and its recognition accuracy has reached 85.52%. At the same time, the validity of this method is verified by comparing experiments and evaluation indicators such as accuracy and F1 score. Full article
(This article belongs to the Section Vehicular Sensing)

22 pages, 5540 KiB  
Article
Research on Fatigued-Driving Detection Method by Integrating Lightweight YOLOv5s and Facial 3D Keypoints
by Xiansheng Ran, Shuai He and Rui Li
Sensors 2023, 23(19), 8267; https://doi.org/10.3390/s23198267 - 6 Oct 2023
Cited by 7 | Viewed by 2443
Abstract
In response to the high computational and parameter requirements of fatigued-driving detection models, as well as their weak facial-feature keypoint extraction capability, this paper proposes a lightweight, real-time fatigued-driving detection model based on an improved YOLOv5s and the Attention Mesh 3D keypoint extraction method. The main strategies are as follows: (1) using Shufflenetv2_BD to reconstruct the Backbone network, reducing parameter complexity and computational load; (2) introducing and improving the fusion method of the Cross-scale Aggregation Module (CAM) between the Backbone and Neck networks to reduce information loss in shallow features of the closed-eyes and closed-mouth categories; (3) building a lightweight Context Information Fusion Module combining the Efficient Multi-Scale Module (EAM) and Depthwise Over-Parameterized Convolution (DoConv) to enhance the Neck network’s ability to extract facial features; and (4) redefining the loss function with Wise-IoU (WIoU) to accelerate model convergence. Finally, the fatigued-driving detection model is constructed by combining the classification detection results with thresholds on continuous closed-eye frames, continuous yawning frames, and the PERCLOS (Percentage of Eyelid Closure over the Pupil over Time) of the eyes and mouth. With the parameter count and model size reduced by 58% and 56.3%, respectively, and floating-point computation of only 5.9 GFLOPs, the average accuracy improves on the baseline model by 1% and the fatigue-recognition rate reaches 96.3%, demonstrating that the proposed algorithm achieves accurate, stable real-time detection while remaining lightweight. It provides strong support for lightweight deployment on vehicle terminals. Full article
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)
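The final decision rule above combines PERCLOS with continuous-frame thresholds. A sketch of that style of rule (the threshold values below are illustrative, not the paper's):

```python
import numpy as np

def perclos(closed):
    """PERCLOS: proportion of eye-closed frames in the observation window."""
    return np.asarray(closed, dtype=bool).mean()

def longest_run(flags):
    """Length of the longest run of consecutive True frames."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

def is_fatigued(eye_closed, mouth_open, perclos_th=0.25, eye_run_th=45, yawn_run_th=30):
    """Flag fatigue when PERCLOS or either continuous-frame count crosses its
    threshold. Per-frame booleans come from the eye/mouth classifier."""
    return (perclos(eye_closed) > perclos_th
            or longest_run(eye_closed) >= eye_run_th
            or longest_run(mouth_open) >= yawn_run_th)
```

At 30 FPS, an `eye_run_th` of 45 frames corresponds to eyes held shut for 1.5 s, which is why frame-count thresholds and PERCLOS complement each other: the former catches single long closures, the latter frequent short ones.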

15 pages, 3870 KiB  
Article
Vortex-Induced Vibration Recognition for Long-Span Bridges Based on Transfer Component Analysis
by Jiale Hou, Sugong Cao, Hao Hu, Zhenwei Zhou, Chunfeng Wan, Mohammad Noori, Puyu Li and Yinan Luo
Buildings 2023, 13(8), 2012; https://doi.org/10.3390/buildings13082012 - 7 Aug 2023
Cited by 5 | Viewed by 2079
Abstract
Bridge vortex-induced vibration (VIV) refers to the vertical resonance phenomenon that occurs in a bridge when pulsating wind passes over it and causes vortices to detach. In recent years, VIV events have been observed in numerous long-span bridges, leading to fatigue damage to the bridge structure and posing risks to driving safety. The advancement of technologies such as structural health monitoring (SHM), machine learning, and big data has opened up new research avenues for the intelligent identification of VIV in bridges. Machine learning algorithms can accurately identify the VIV events from historical data accumulated by SHM systems, thus providing an effective method for VIV recognition. Nevertheless, the existing identification methods have limitations, particularly in their applicability to bridges lacking historical VIV data. This study introduces an adaptive VIV recognition method in the main girders of long-span suspension bridges based on Transfer Component Analysis (TCA). The method can accurately identify VIV patterns in real-time or in historical data, even when specific VIV data are not available for the target bridge. The proposed method exhibits suitability for multiple long-span bridges. Experimental validation is performed using the SHM datasets from two long-span suspension bridges. The results show that the proposed VIV identification method can recognize more VIV samples compared to the benchmark model. When using sensor 1 data of bridge B as the source domain to identify the VIV of the L-section of bridge A, the F1 score of the TCA-based method is 0.836, while the F1 score of the benchmark model is 0.165. In the other 11 cases, the F1 score of the proposed model is higher than 0.8, which demonstrates the method’s robust generalization capabilities. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
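Transfer Component Analysis, the technique named in the abstract above, learns a shared latent space in which source-bridge and target-bridge features are distributed similarly (by minimizing the Maximum Mean Discrepancy between them), so a classifier trained on one bridge's labeled VIV data can transfer to another. A minimal sketch of the classic TCA projection follows; the synthetic Gaussian data stands in for real SHM features, and all parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def tca_transform(Xs, Xt, dim=2, mu=1.0, gamma=1.0):
    """Project source (Xs) and target (Xt) samples into a shared
    low-dimensional space that reduces the MMD between domains."""
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    n = ns + nt
    # RBF kernel matrix over the pooled samples
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # MMD coefficient matrix L (rank one: e e^T)
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    # Centering matrix H preserves variance in the embedded space
    H = np.eye(n) - np.ones((n, n)) / n
    # Leading eigenvectors of (K L K + mu I)^-1 K H K give the projection
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    idx = np.argsort(-vals.real)[:dim]
    W = vecs[:, idx].real
    Z = K @ W                       # embedded source + target samples
    return Z[:ns], Z[ns:]

rng = np.random.default_rng(0)
Zs, Zt = tca_transform(rng.normal(0, 1, (20, 5)),   # "source bridge" features
                       rng.normal(2, 1, (30, 5)),   # shifted "target bridge"
                       dim=2)
print(Zs.shape, Zt.shape)  # → (20, 2) (30, 2)
```

After the transform, an ordinary classifier fitted on `Zs` with the source bridge's VIV labels can be applied directly to `Zt`, which is the transfer setting the abstract describes.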
25 pages, 10281 KiB  
Article
A Feature Fusion Method for Driving Fatigue of Shield Machine Drivers Based on Multiple Physiological Signals and Auto-Encoder
by Kun Liu, Guoqi Feng, Xingyu Jiang, Wenpeng Zhao, Zhiqiang Tian, Rizheng Zhao and Kaihang Bi
Sustainability 2023, 15(12), 9405; https://doi.org/10.3390/su15129405 - 12 Jun 2023
Cited by 7 | Viewed by 2123
Abstract
The driving fatigue state of shield machine drivers directly affects the safe operation and tunneling efficiency of shield machines during metro construction. Because the working conditions and operating procedures of shield machine drivers are difficult to reproduce on driving simulation platforms, and because existing fatigue feature fusion methods typically achieve low recognition accuracy, shield machine drivers on Shenyang metro line 4 in China were taken as the research subjects, and a multi-modal physiological feature fusion method based on an L2-regularized stacked auto-encoder was designed. First, the ErgoLAB cloud platform was used to extract the combined energy feature (E), the reaction time, the HRV (heart rate variability) time-domain SDNN (standard deviation of normal-to-normal intervals) index, the HRV frequency-domain LF/HF (low-frequency to high-frequency energy ratio) index, and the pupil diameter index from EEG (electroencephalogram) signals, skin signals, pulse signals, and eye movement data, respectively. Second, the physiological signal characteristics were extracted using the WPT (wavelet packet transform) method and time–frequency analysis. Then, a driving fatigue feature fusion method based on an auto-encoder was designed, using L2 regularization to counter the over-fitting that small-sample data sets cause during model training. The optimal hyper-parameters of the model were verified with controlled-variable experiments, which reduces the loss of multi-modal feature data during compression fusion and lowers the information loss rate of the fused index. The results show that the proposed method outperforms its competitors in recognition accuracy and can effectively reduce the loss rate of deep features in existing decision-level fusion. Full article
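The core idea of the fusion step described above — compress the multi-modal physiological indices into a low-dimensional code with an auto-encoder whose loss carries an L2 weight penalty to curb over-fitting on small samples — can be sketched in a few lines of NumPy. The toy data, layer sizes, and hyper-parameters below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for multi-modal features (EEG energy, SDNN, LF/HF, pupil...)
X = rng.normal(size=(64, 8))
X = (X - X.mean(0)) / X.std(0)            # standardize each feature

n_in, n_hid, lam, lr = 8, 3, 1e-3, 0.05   # lam = L2 penalty weight
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)              # fused low-dimensional code
    return H, H @ W2 + b2                 # linear reconstruction

losses = []
for _ in range(300):
    H, Xhat = forward(X)
    err = Xhat - X
    # Reconstruction MSE plus L2 penalty on both weight matrices
    losses.append(np.mean(err**2) + lam * (np.sum(W1**2) + np.sum(W2**2)))
    # Backpropagation by hand
    gXhat = 2 * err / X.size
    gW2 = H.T @ gXhat + 2 * lam * W2; gb2 = gXhat.sum(0)
    gH = gXhat @ W2.T
    gZ = gH * (1 - H**2)                  # tanh derivative
    gW1 = X.T @ gZ + 2 * lam * W1; gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])              # loss falls as training proceeds
```

The 3-dimensional code `H` plays the role of the fused fatigue index: downstream recognition operates on it rather than on the raw multi-modal features.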
15 pages, 8253 KiB  
Article
Adaptive Driver Face Feature Fatigue Detection Algorithm Research
by Han Zheng, Yiding Wang and Xiaoming Liu
Appl. Sci. 2023, 13(8), 5074; https://doi.org/10.3390/app13085074 - 18 Apr 2023
Cited by 21 | Viewed by 3423
Abstract
Fatigued driving is one of the leading causes of traffic accidents, and detecting it effectively is critical to improving driving safety. Given the variety and individual variability of driving environments, drivers' fatigue states, and the uncertainty of the key characteristic factors, this paper proposes a deep-learning-based MAX-MIN driver fatigue detection algorithm. First, the ShuffleNet V2K16 neural network is used for driver face recognition, which eliminates the influence of poor environmental adaptability on fatigue detection; second, ShuffleNet V2K16 is combined with Dlib to obtain the coordinates of the driver's facial feature points; and finally, the EAR (eye aspect ratio) and MAR (mouth aspect ratio) values are evaluated against the EAR-MAX and MAR-MIN baselines established from the first 100 frames. The proposed method achieves 98.8% precision, 90.2% recall, and a 94.3% F-score in a real driving scenario. Full article
(This article belongs to the Special Issue Computation and Complex Data Processing Systems)
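The eye aspect ratio thresholded by methods like the one above is a standard geometric measure over six eye landmarks (as ordered in Dlib's 68-point model); the mouth aspect ratio is computed analogously over mouth landmarks. A small sketch with made-up landmark coordinates, shown only to illustrate how eye closure drives the ratio toward zero:

```python
import numpy as np

def aspect_ratio(p):
    """Aspect ratio over six landmarks p1..p6 (Dlib ordering):
    AR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|).
    p1/p4 are the horizontal corners; the other pairs are vertical."""
    p = np.asarray(p, float)
    v1 = np.linalg.norm(p[1] - p[5])      # first vertical gap
    v2 = np.linalg.norm(p[2] - p[4])      # second vertical gap
    h = np.linalg.norm(p[0] - p[3])       # horizontal extent
    return (v1 + v2) / (2.0 * h)

# Hypothetical coordinates: an open eye has large vertical gaps...
open_eye   = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
# ...while a nearly closed eye collapses them, pushing EAR toward 0.
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

print(round(aspect_ratio(open_eye), 3),
      round(aspect_ratio(closed_eye), 3))  # → 1.0 0.1
```

A fatigue detector of this family flags frames whose EAR drops well below a per-driver baseline (here, the MAX established over the first frames) for a sustained run, with MAR rising above its baseline signaling yawning.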