Article

Evaluation of Helmet Wearing Compliance: A Bionic Spidersense System-Based Method for Helmet Chinstrap Detection

College of Electromechanical Engineering, Harbin Engineering University, Harbin 150009, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 570; https://doi.org/10.3390/biomimetics10090570
Submission received: 2 June 2025 / Revised: 2 July 2025 / Accepted: 21 August 2025 / Published: 27 August 2025
(This article belongs to the Section Biomimetic Design, Constructions and Devices)

Abstract

With the rapid advancement of industrial intelligence, ensuring occupational safety has become an increasingly critical concern. Among essential personal protective equipment (PPE), safety helmets play a vital role in preventing head injuries, and there is growing demand for real-time detection of helmet chinstrap wearing status during industrial operations. However, existing detection methods often suffer from limitations such as user discomfort or potential privacy invasion. To overcome these challenges, this study proposes a non-intrusive approach for detecting the wearing state of helmet chinstraps, inspired by the mechanosensory hair arrays found on spider legs. The proposed method utilizes multiple MEMS inertial sensors to emulate the sensory functionality of spider leg hairs, thereby enabling efficient acquisition and analysis of helmet wearing states. Unlike the vibration signals used in conventional detection techniques, posture signals reflect spatial structural characteristics; however, integrating them from multiple sensors increases signal complexity and background noise. To address this issue, an improved adaptive convolutional neural network (ICNN) integrated with a long short-term memory (LSTM) network is employed to classify the tightness levels of the helmet chinstrap using both single-sensor and multi-sensor data. Experimental validation was conducted on data collected from 20 participants performing wall-climbing robot operation tasks. The results demonstrate that the proposed method achieves a high recognition accuracy of 96%. This research offers a practical, privacy-preserving, and highly effective solution for helmet-wearing status monitoring in industrial environments.

1. Introduction

Intelligent manufacturing, as a cornerstone of Industry 4.0, is fundamentally reshaping the ways in which humans work and live [1]. With the ongoing advancement of intelligent industrialization, safety management on industrial sites is encountering unprecedented challenges. Industrial accidents not only pose risks of severe injuries or fatalities but can also significantly disrupt production schedules and negatively affect enterprise profitability [2]. According to data from the U.S. Bureau of Labor Statistics, in 2020, head injuries accounted for nearly 6% of non-fatal occupational injuries resulting in days away from work [3]. Therefore, safety helmets—among the most widely used forms of personal protective equipment (PPE)—play an essential role in safeguarding workers’ head health on construction and industrial sites. Although the majority of workers comply with mandatory helmet-wearing regulations, monitoring the proper usage of safety helmets, particularly the correct fastening of chin straps, remains a significant challenge in occupational safety management [4,5].
Traditional safety compliance monitoring methods primarily rely on visual inspection through video surveillance [6,7], which can determine whether a worker is wearing a helmet but often fails to accurately assess the correctness of its usage—particularly in complex working environments where system accuracy and real-time performance frequently fall short of safety management standards. Moreover, image-based detection requires stable lighting conditions and consistent head and facial orientations [8,9], making it poorly suited for real-world operational scenarios. Notably, facial data constitutes personally identifiable information [10], raising significant privacy concerns. Therefore, developing an efficient and privacy-preserving method for assessing helmet compliance remains a critical challenge.
Spiders possess highly developed mechanosensory systems, with their leg-based sensory organs serving as influential models in mechanical vibration perception research [11]. Further studies have shown that the anterior legs of spiders contain diverse hair-like sensory structures that enhance stimulus perception [12,13]. Inspired by the sensing mechanisms of spider leg joints, researchers have developed ultra-sensitive mechanical crack sensors [14], capable of detecting strain and vibration signals with high mechanical sensitivity. Similarly, a tunable, highly sensitive epidermal sensor has been proposed [15], drawing inspiration from the natural ability of spider leg posture adjustments to actively modulate sensory sensitivity. These examples highlight the important role of spider leg sensory systems—particularly the hair arrays—in mechanical perception. Although current bioinspired applications based on spider sensing mainly focus on sensor development, there is limited exploration at the level of sensory strategies, indicating a need for further investigation into biomimetic sensing frameworks.
In recent years, with the advancement of sensing technologies, MEMS (Micro-Electro-Mechanical Systems) sensors have become a key area of research in industrial safety monitoring due to their low power consumption, high precision, and strong anti-interference capabilities. Particularly in dynamic detection applications [16], MEMS sensors enable real-time capture of helmet posture and movement changes, offering more accurate recognition of wearing states compared to traditional approaches. Continuous monitoring of workers’ helmet-wearing conditions can effectively detect and alert improper fastening or loosening of chinstraps, thereby mitigating safety risks associated with non-standard usage.
MEMS attitude sensors play a crucial role in various domains such as robot balance control, human motion analysis, and aircraft attitude measurement. As a subset of inertial measurement units (IMUs), these sensors are widely applied in mobile robotics [17], human–robot collaboration [18], and other fields to detect real-time movement and attitude variations [19,20,21]. MEMS attitude sensors collect data signals including angular velocity, acceleration, and magnetic field intensity, which are essential for monitoring the operational status of humans or mechanical equipment [22]. However, in long-term state detection scenarios, MEMS attitude sensors often suffer from drift and reduced accuracy, making effective data preprocessing and appropriate model selection both critical and necessary [23].
Traditional machine learning approaches have achieved some success in robot fault diagnosis but are constrained by long computation times, relatively low accuracy, and dependence on manual feature extraction [24,25]. Deep learning methods, particularly hybrid models combining convolutional neural networks (CNNs) and long short-term memory (LSTM) networks [26,27], have emerged as powerful tools for sequential signal classification due to their robust feature extraction and temporal modeling capabilities. Nevertheless, CNN architectures tend to be highly complex, involving numerous hyperparameters [28,29]. Consequently, designing adaptive CNN models is considered an optimal strategy [30,31].
This study proposes a biomimetic helmet chinstrap tension detection system inspired by the sensory hairs on spider legs. By simulating the sensing process of spider leg hair arrays, a hierarchical perception strategy is designed using multiple attitude sensors to collect helmet posture data for identifying the tightness of helmet chinstraps. The specific contributions include the following: (1) development of a helmet-wearing detection system based on a bioinspired multi-layered attitude sensor configuration modeled after spider tactile sensing, utilizing three MEMS attitude sensors to acquire workers’ head posture features; (2) design of an efficient data processing algorithm for classifying and recognizing helmet wearing states; (3) experimental validation of the system’s performance under controlled laboratory conditions. Compared to existing methods, the proposed system enables continuous and real-time detection of helmet-wearing status while preserving user privacy and overcoming limitations of conventional approaches. Figure 1 illustrates the overall research framework.
This paper is organized as follows: Section 2 introduces the bioinspired helmet chinstrap tension detection strategy based on spider leg hair arrays; Section 3 describes the implementation of the helmet chinstrap tension recognition system and presents an improved CNN-LSTM model for classification of helmet-wearing states; Section 4 details the experimental data collection procedures and testing conditions; Section 5 presents the experimental setup and results analysis; finally, Section 6 concludes this paper and outlines future research directions.

2. Detection Strategy for Helmet Chin Strap Tightness Inspired by Spider Leg Bristle Sensory Mechanisms

This section investigates the feasibility of applying biomimetic design principles inspired by the sensory mechanisms of spider leg bristles to develop an effective method for detecting the tightness of helmet chin straps. The exploration focuses on analyzing the underlying principles and key constraints governing chin strap tension.

2.1. Biomimetic Design of the Sensory Strategy Based on Spider Leg Bristles

Spider legs are equipped with highly sensitive tactile organs capable of detecting minute vibrations and mechanical stimuli. These sensory structures exhibit exceptional responsiveness to subtle variations in force intensity. Spiders utilize such tactile signals to discern the position, size, and movement patterns of prey, thereby enabling rapid behavioral responses during predation. The tactile organs on spider legs demonstrate remarkable sensitivity, allowing for the detection of extremely small forces and vibrational cues. Each leg contains multiple sensory setae, known as trichobothria, which are structurally composed of long or short, thin hairs embedded within cup-shaped bases lined with a flexible membrane. Each individual seta provides specific information regarding the spatial orientation and motion dynamics of prey relative to the spider [32]. This heightened sensitivity is attributed to the variation in setal lengths, which confer differential responsiveness to diverse force magnitudes and vibrational frequencies. Analogously, in the context of safety helmets, the dynamic interactions between the helmet and the wearer’s head vary under different wearing conditions—particularly when the tightness of the chin strap changes. Detecting these variations can facilitate accurate assessment of the helmet’s fit status.
Inspired by this biological mechanism, we developed a sensor system that mimics the vibration-sensitive characteristics of three distinct types of bristles found on spider legs. Each sensor demonstrates differential sensitivity to the frequency and amplitude of mechanical vibrations, replicating the response behavior of spider setae to varying stimulus intensities. These sensors were mounted on the outer surface of the safety helmet shell to enable high-sensitivity monitoring of its vibrational response. As illustrated in Figure 2, Sensor 1 is directly attached to the helmet surface to emulate short bristles that capture high-frequency vibrations; Sensor 2 is connected via a shorter fiber rod, simulating medium-length bristles that respond to mid-range frequencies; Sensor 3 is installed using a longer fiber rod, mimicking the low-frequency sensitivity of long bristles. Detailed installation parameters for each sensor are provided in Table 1.

2.2. Principle of Chinstrap Tightness Recognition Under Safety Helmet

To simulate the dynamic interaction between the helmet and the wearer’s head, we employed a classical spring–damper model. In this modeling framework, the spring component represents the elastic contact forces between the helmet and the head, while the damper component accounts for the frictional forces and energy dissipation occurring during helmet–skin contact. By applying this model, we are able to simulate the relative motion between the helmet and the head under different levels of chinstrap tightness. The underlying theoretical formulation is described as follows:
(1) Dynamic Coupling and Relative Motion Model Between Helmet and Head
When the helmet is secured to the head via the chinstrap—forming either a rigid or semi-flexible connection—their coupled motion can be characterized using rigid body dynamics. Assuming the head’s motion trajectory as the reference frame, the helmet’s acceleration can be expressed in terms of its relative components.
$a_{\mathrm{helmet}} = a_{\mathrm{head}} + a_{\mathrm{relative}}$
where $a_{\mathrm{head}}$ is the true acceleration of the head, and $a_{\mathrm{relative}}$ is the relative acceleration between the helmet and the head.
By applying the Newton–Euler equations, the dynamic relationship of the helmet-head system is established as follows:
$m_{\mathrm{helmet}} \cdot a_{\mathrm{relative}} = F_{\mathrm{strap}} - c \cdot v_{\mathrm{relative}} - \nabla U(x)$
where $m_{\mathrm{helmet}}$ is the helmet mass, $F_{\mathrm{strap}} = k \cdot F_{\mathrm{tension}}$ is the chinstrap tension with tightness coefficient $k$, $c$ is the system damping coefficient, $v_{\mathrm{relative}}$ is the relative velocity, and $U(x)$ is the potential energy associated with the helmet displacement $x$.
It can be observed that when the tightness coefficient $k$ decreases, $a_{\mathrm{relative}}$ increases, alongside a marked rise in high-frequency attitude components.
(2) Attitude Response and Damping Characteristic Analysis
Modeling the helmet–head system as a mass–spring–damper system, its equation of motion is expressed as follows:
$m\,\ddot{x}(t) + c(k)\,\dot{x}(t) + k_s\,x(t) = F_{\mathrm{ext}}(t)$
where $c(k)$ is the effective damping coefficient correlated with chinstrap tightness, $k_s$ is the system stiffness, and $F_{\mathrm{ext}}(t)$ is the external excitation force.
By conducting frequency domain response analysis, the transfer function can be derived as follows:
$|H(\omega)| = \dfrac{1}{\sqrt{(k_s - m\omega^2)^2 + (c(k)\,\omega)^2}}$
Holding $k_s$ and $m$ constant and varying $k$ from 0 to 1 in increments of 0.25, the corresponding transfer functions over frequency $\omega$ are plotted in Figure 3.
From the results, it is evident that when chinstrap tightness is insufficient (i.e., k decreases), the system damping diminishes, causing a reduction in the transfer function gain across the frequency spectrum, particularly within the low-frequency range. This implies that smaller k values weaken the system’s response to low-frequency signals.
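To make this analysis concrete, the following minimal Python sketch sweeps the tightness coefficient through the same values used for Figure 3 and evaluates the transfer-function gain. The mass, stiffness, and damping constants, and the linear mapping c(k) = k·c0, are illustrative assumptions, not values reported in this paper.

```python
import numpy as np

# Illustrative constants (assumptions, not reported values):
m, k_s, c0 = 0.4, 800.0, 25.0   # helmet mass (kg), stiffness (N/m), max damping (N·s/m)

def transfer_gain(omega, k):
    """|H(w)| = 1 / sqrt((k_s - m*w^2)^2 + (c(k)*w)^2), with c(k) = k * c0 assumed."""
    c = k * c0
    return 1.0 / np.sqrt((k_s - m * omega**2) ** 2 + (c * omega) ** 2)

omega = np.linspace(0.1, 100.0, 1000)       # angular frequency grid (rad/s)
for k in (0.0, 0.25, 0.5, 0.75, 1.0):       # tightness values swept as in Figure 3
    print(f"k = {k:.2f}: peak gain = {transfer_gain(omega, k).max():.4f}")
```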

3. Implementation Scheme of Helmet Chin Strap Tightness Recognition System

This section presents an analysis of MEMS sensor data acquisition for the recognition of helmet chinstrap tightness. It provides a comprehensive description of the mathematical processing techniques applied to the sensor data, as well as an overview of the proposed improved convolutional neural network–long short-term memory (ICNN-LSTM) model. The discussion focuses on the extraction of discriminative features associated with chinstrap tightness from the collected attitude data.

3.1. Definition and Analogy of Helmet Chin Strap Tightness

The objective of this study is to enable helmets to accurately distinguish between different levels of chinstrap tightness during wear. Traditional criteria for defining chinstrap tightness in safety helmets are often subjective and lack precise quantification, making it difficult to establish standardized measurement methods. To address this limitation, this section defines three distinct categories of chinstrap tightness based on equivalent tensile force measurements obtained when a 1.5 cm gap is created between the helmet’s chinstrap and the wearer’s chin. These defined categories are subsequently validated against real-world wearing conditions, as demonstrated in Figure 4.
Based on the defined chinstrap tightness criteria, the wearing status of a safety helmet can be categorized into three distinct classifications:
(1) Improperly Fastened: The helmet is positioned on the head, but the chinstrap buckle remains unsecured and does not make contact with the wearer’s chin, resulting in a loose fit. When the chinstrap is pulled vertically downward using a force gauge to create a 1.5 cm gap between the strap and the chin, the measured tensile force is less than 1 N.
(2) Loose Fit: Following manual adjustment of the chinstrap while the helmet is worn, pulling the strap vertically downward to generate a 1.5 cm gap results in a tensile force ranging from 1 N to 3 N.
(3) Standard Fit: Similarly, after manual adjustment under wearing conditions, the tensile force recorded when pulling the chinstrap vertically downward to form a 1.5 cm gap exceeds 5 N.
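For reference, these thresholds map directly to state labels, as in the minimal sketch below. The handling of the 3–5 N range, which the definitions above leave unspecified, is a labeled assumption.

```python
def chinstrap_state(tension_n: float) -> str:
    """Map the tensile force (N) measured at a 1.5 cm strap-chin gap to a wearing state."""
    if tension_n < 1.0:
        return "Improperly Fastened"
    if tension_n <= 3.0:
        return "Loose Fit"
    if tension_n > 5.0:
        return "Standard Fit"
    # The 3-5 N range is not covered by the definitions above; flagging it is an assumption.
    return "Indeterminate"

for f in (0.5, 2.2, 6.0):
    print(f"{f} N -> {chinstrap_state(f)}")
```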

3.2. Hardware Composition of the Helmet Chin Strap Tightness Sensing System

The proposed wearing-aware helmet system integrates MEMS attitude sensors into a standard safety helmet, as illustrated in Figure 5, which depicts the physical configuration and the sensor coordinate system. Each MEMS attitude sensor consists of a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer, enabling real-time monitoring of the amplitude and frequency of workers’ head movements relative to the helmet. The sensors are securely mounted on the helmet and communicate with the data processing module via Bluetooth 5.0. All three sensors are of the same model, with identical parameters and precision levels, as listed in Table 2. It should be noted that in this study, the sampling frequency was set to 100 kHz to ensure maximum sensor measurement accuracy. Moreover, all data used in this research were obtained exclusively from the inertial measurement unit (IMU) of the MEMS sensors. In the data processing pipeline, the three-axis acceleration measurements were directly utilized for feature extraction and computational analysis.

3.3. Data Acquisition from Helmet MEMS Attitude Sensors

The low-cost WT9011DCL-BT50 series attitude sensors employed in this study exhibit a residual gravitational acceleration reading of 1 g even after zero-point calibration. Furthermore, the initial orientation of the sensor may vary across installations, increasing variability in the training data and consequently degrading the model’s generalization capability. To reduce the influence of variations in sensor installation direction and gravitational orientation on model training and prediction accuracy, this study scalarizes the three-axis acceleration vector obtained from the sensor. Let $a_x$, $a_y$, and $a_z$ represent the acceleration values along the x, y, and z axes, respectively. The sensor’s output acceleration vector is expressed as $\mathbf{a} = [a_x, a_y, a_z]^T$.
The magnitude of the acceleration vector is calculated and converted into a scalar value, which serves as an input feature to minimize measurement errors caused by inconsistent installation orientations. The computation process is formulated as follows:
$\|\mathbf{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}$
The input signal for the model is then represented by the following equation.
$F_{\mathrm{input}} = \|\mathbf{a}\|$
In this study, a low-pass Butterworth filter with a cutoff frequency of 1 Hz, a filter order of 5, and a sampling frequency of 500 Hz was employed to preprocess the data. This type of filter is characterized by its maximally flat frequency response and is particularly suitable for applications requiring smooth frequency characteristics. The filter design is based on the following considerations:
$H(s) = \dfrac{b_0 + b_1 s + \cdots + b_n s^n}{a_0 + a_1 s + \cdots + a_n s^n}$
where $H(s)$ represents the filter transfer function, $b_i$ and $a_i$ are the numerator and denominator filter coefficients, and $n$ is the filter order.
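A minimal sketch of this preprocessing chain, assuming SciPy’s digital Butterworth design with the stated cutoff (1 Hz), order (5), and sampling rate (500 Hz); the zero-phase filtfilt pass and the placeholder input array are assumptions, since the text does not specify the filtering direction.

```python
import numpy as np
from scipy.signal import butter, filtfilt

acc_xyz = np.random.randn(5000, 3)                  # placeholder three-axis samples

a_mag = np.linalg.norm(acc_xyz, axis=1)             # |a| = sqrt(ax^2 + ay^2 + az^2)

fs, fc, order = 500.0, 1.0, 5                       # sampling rate, cutoff, order (as stated)
b, a = butter(order, fc / (fs / 2.0), btype="low")  # digital low-pass Butterworth design
f_input = filtfilt(b, a, a_mag)                     # zero-phase filtering (an assumption)
```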
After filtering, we use the MinMaxScaler to normalize the data, mapping it to the range [ 0 , 1 ] . The normalization formula is given by the following equation:
$x' = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$
where $x$ is the original data, $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the data, and $x'$ is the normalized output.
It should be emphasized that the normalization process is performed after the dataset has been partitioned into training (70%), validation (10%), and testing (20%) subsets. Specifically, the MinMaxScaler is fitted solely on the training set by utilizing its minimum and maximum values to determine the scaling parameters. These parameters are subsequently applied to transform both the training and testing subsets, thereby preventing any potential data leakage.
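A minimal sketch of this leak-free scaling protocol with scikit-learn, assuming hypothetical feature and label arrays; the random seed and the two-stage split calls are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.randn(1000, 17)                      # hypothetical feature vectors
y = np.random.randint(0, 3, size=1000)             # three tightness classes

# 70/10/20 split: hold out 30%, then divide it into validation (10%) and test (20%).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=2/3, random_state=0)

scaler = MinMaxScaler().fit(X_train)               # min/max taken from training data only
X_train, X_val, X_test = (scaler.transform(s) for s in (X_train, X_val, X_test))
```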

3.4. Feature Generation

Following the completion of data preprocessing, it is essential to further represent and extract relevant data features. Selecting an appropriate representation method is critically important for subsequent computational processing. In this study, the MEMS sensing unit operates at a sampling frequency of 100 kHz. To facilitate efficient data handling, the collected attitude data is divided into segments, each containing 64 data points, which correspond to one set of attitude measurements from the experimental platform. It should be emphasized that the selection of 64 data points was determined through a balanced consideration of time resolution and computational efficiency. This segment length effectively captures the dynamic characteristics associated with changes in chinstrap tightness while avoiding excessive computational complexity caused by overly long sequences. The chosen segmentation strategy supports the effective extraction of both time-domain and frequency-domain features, thereby providing robust data support for downstream analysis.
A sliding window approach is employed for data segmentation, with an overlapping window configuration to maintain temporal continuity. This method enables better capture of the temporal characteristics of the signal and is particularly valuable for real-time monitoring applications. Subsequently, time-domain and frequency-domain features are extracted from each data segment, yielding a total of 17 distinct feature indicators. Table 3 presents the names and categories of the extracted features. Finally, the feature values are normalized to construct standardized feature vectors for model training.
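The segmentation and a few of the listed features can be sketched as follows; the 50% overlap ratio and the feature subset are assumptions, since the text specifies only that windows overlap.

```python
import numpy as np

def sliding_windows(signal, width=64, step=32):
    """64-point windows; step=32 assumes 50% overlap (the exact overlap is not stated)."""
    return np.stack([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])

def window_features(w, fs=100_000):
    """A subset of the 17 Table 3 features for one window."""
    psd = np.abs(np.fft.rfft(w)) ** 2                       # power spectrum
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    return [w.mean(), w.std(), np.sum(w ** 2),              # time-domain features
            np.sum(freqs * psd) / np.sum(psd), psd.max()]   # frequency-domain features

segments = sliding_windows(np.random.randn(10_000))         # placeholder signal
features = np.array([window_features(w) for w in segments])
```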

3.5. Helmet Chinstrap Tightness Recognition Framework Based on ICNN-LSTM Network

To utilize the signals from the helmet’s MEMS attitude sensors for chinstrap tightness recognition, this study proposes the classification process shown in Figure 6. The process begins with preprocessing of the input attitude signals. Next, the dataset is divided into training (70%), validation (10%), and testing (20%) sets. The MinMaxScaler, fitted on the training set, is then used to normalize all sets, ensuring consistency and preventing data leakage. Six-fold cross-validation is employed to assess the performance of the dual model. Finally, the proposed ICNN-LSTM model is trained on the training set for classification; a skeleton of the cross-validation protocol is sketched below.
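In the skeleton, build_model refers to the hypothetical ICNN-LSTM factory sketched later in this section, and the data arrays and hyperparameter values are placeholders rather than reported settings.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.randn(600, 64, 1)            # placeholder windowed signals
y = np.random.randint(0, 3, size=600)      # three tightness classes

kf = KFold(n_splits=6, shuffle=True, random_state=0)
fold_acc = []
for train_idx, val_idx in kf.split(X):
    model = build_model()                  # hypothetical factory (see the sketch below)
    model.fit(X[train_idx], y[train_idx], epochs=30, batch_size=16, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    fold_acc.append(acc)
print(f"mean 6-fold accuracy: {np.mean(fold_acc):.3f}")
```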
(1) Feature Extraction: Adaptive Convolutional Neural Network (ICNN)
In the model architecture, the improved convolutional neural network (ICNN) integrates classical one-dimensional convolutional operations with an adaptive rectified linear unit (ReLU) activation function to enhance the network’s representational capacity and classification accuracy. The architecture comprises multiple convolutional blocks, each consisting of a 1D convolutional layer, an adaptive activation layer, a max-pooling layer, and a dropout layer. This design facilitates the extraction of rich and discriminative features for subsequent processing. Within the convolutional layers, standard fixed kernel dimensions and filter counts are employed, specifically a kernel size of 3 and 64 filters, which enable the effective extraction of local features from the input data and capture short-term dependencies in sequential patterns. Following each convolutional block, an adaptive ReLU activation layer is introduced to further improve the non-linear expressive capability of the network. The mathematical formulation of this adaptive activation function is defined as follows:
$p_l^i(t) = \max\left(0,\, z_l^i(t)\right) + \alpha_l^i \min\left(0,\, z_l^i(t)\right)$
where $p_l^i(t)$ represents the output of the $i$-th neuron in the $l$-th layer after applying the adaptive ReLU activation function, $z_l^i(t)$ denotes the input feature values to the batch normalization (BN) layer, and $\alpha_l^i$ is the adaptive parameter learned by the layer. The adaptive ReLU is designed to enhance the network’s handling of negative values: by learning an adaptive negative slope, it maintains non-linearity while increasing the model’s ability to adjust feature distributions.
Moreover, to prevent feature redundancy and overfitting, each convolutional block also includes a max pooling layer and a dropout layer. The max pooling layer employs a pooling window of size 1 to reduce feature dimensions and enhance local feature selectivity, while the dropout layer discards 20% of the neurons in each convolutional block to further mitigate the risk of overfitting.
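One way to realize the adaptive ReLU of the equation above is a Keras layer with a learnable per-channel negative slope, which is functionally equivalent to PReLU. This is a sketch, not the authors’ implementation.

```python
import tensorflow as tf

class AdaptiveReLU(tf.keras.layers.Layer):
    """p = max(0, z) + alpha * min(0, z), with one trainable alpha per channel."""
    def build(self, input_shape):
        # one trainable slope per feature channel (alpha_l^i in the text)
        self.alpha = self.add_weight(name="alpha",
                                     shape=(input_shape[-1],),
                                     initializer="zeros",   # starts out as a plain ReLU
                                     trainable=True)

    def call(self, z):
        return tf.maximum(0.0, z) + self.alpha * tf.minimum(0.0, z)
```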
(2) Tightness State Classification: Long Short-Term Memory (LSTM) Layer
The LSTM layer used in the convolutional and LSTM integrated neural network (ICNN-LSTM) is designed to extract temporal features and further model the temporal dependencies of local features encoded in the convolutional feature maps. The LSTM layer consists of multiple memory units, which regulate information updates at each time step through input gates, forget gates, and output gates. First, the output of the convolutional block is fed into the LSTM layer, where the LSTM units process the information at each time step sequentially to form a comprehensive temporal feature representation.
The output of each LSTM unit is produced by combining the internal memory cell state $c_t$ and the output gate $o_t$. The update process of the memory cell $c_t$ is as follows:
$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$
where $f_t$, $i_t$, and $\tilde{c}_t$ represent the forget gate, input gate, and candidate memory cell state, respectively. The forget gate determines which past information to retain or discard, the input gate controls the degree of current information storage, and the candidate memory cell state holds the new information temporarily. Finally, the output of the LSTM unit $h_t$ is generated through the output gate $o_t$, calculated as follows:
$h_t = o_t \odot \tanh(c_t)$
Through this gating mechanism, the LSTM can dynamically update its state at each time step, capturing long-range dependencies in the temporal data.
Lastly, all outputs from the LSTM layer are passed to the fully connected layer via a flattening operation, generating a final discriminative feature map $x_{\mathrm{out}}$. This feature map can then be fed into a classifier composed of fully connected layers and a Softmax layer to obtain the classification probability outputs of the time-series data. Additionally, the feature map can be used for cross-domain feature extraction in transfer learning frameworks, enhancing the model’s generalization ability under varying data conditions.
The integration of the ICNN network with the LSTM network forms the ICNN-LSTM network, as shown in Figure 7.
Algorithm 1 describes the implementation process of the ICNN-LSTM classification algorithm.
Algorithm 1: ICNN-LSTM Model Training and Evaluation
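Since Algorithm 1 is reproduced only as a figure in the source, the sketch below reconstructs its outline from the text and the Table 5 hyperparameters. The number of convolutional blocks, the input shape, the training data, and the use of Keras’s built-in PReLU as a stand-in for the adaptive ReLU defined above are all assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(timesteps=64, channels=1, n_classes=3):
    inputs = keras.Input(shape=(timesteps, channels))
    x = inputs
    for _ in range(2):                                          # conv block count is assumed
        x = layers.Conv1D(64, kernel_size=3, padding="same")(x) # Table 5: 64 filters, size 3
        x = layers.PReLU(shared_axes=[1])(x)                    # per-channel adaptive slope
        x = layers.MaxPooling1D(pool_size=1)(x)                 # Table 5: pool size 1
        x = layers.Dropout(0.2)(x)
    x = layers.LSTM(64)(x)                                      # Table 5: 64 LSTM units
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(32, activation="relu")(x)                  # Table 5: 32 dense units
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

X = np.random.randn(256, 64, 1)                                 # placeholder windowed signals
y = np.random.randint(0, 3, size=256)
model = build_model()
model.fit(X, y, epochs=30, batch_size=16, shuffle=True, verbose=0)
```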

4. Experimental Data Preparation

4.1. Experimental Setup

In this section, to evaluate the effectiveness of the proposed method for sensing and classifying the chinstrap tightness state of a safety helmet, a shipboard wall-climbing robot control test system is employed to simulate real-world worker tasks. As illustrated in Figure 8, the system comprises an operator, an intelligent safety helmet, a wheeled wall-climbing robot, helmet and posture sensors, an arcuate steel plate wall, and a load module. It should be noted that the wheeled wall-climbing robot utilized in this study is based on a dual-wheel differential motion model. Permanent magnets are integrated into the inner sides of the two active wheels and one passive wheel on both sides of the robot to ensure sufficient adhesion when operating on vertical or inclined surfaces. The operator can precisely control the robot’s trajectory by adjusting the differential motion between the two active wheels.
The task simulation experiment is conducted in a controlled laboratory environment, with the steel plate and frame securely anchored. All operators are trained and demonstrate proficient and consistent performance during operation. The experimental protocol requires each operator to control the robot along a predefined path using a wireless controller. The operator must remain seated at a fixed position and maintain continuous visual monitoring of the robot and its surrounding environment until the robot reaches the designated target location, at which point the task is deemed complete.

4.2. Experimental Participants

The experiment was conducted in an indoor laboratory at Harbin Engineering University, where data collection was performed with a total of 10 participants. All participants were graduate students, consisting of 9 males and 1 female. Due to the male-to-female ratio in the laboratory, it was not possible to achieve gender balance in this experiment. All participants were in good physical condition on the day of the test. Before the test began, personal information such as age, gender, height, weight, and body fat percentage for each participant was recorded, as shown in Table 4.
Each participant’s experimental session was structured into three sequential phases:
Step 1: Collection of Baseline Information and Pre-training
At the beginning of the session, participants provided their baseline demographic and health-related information. Researchers then explained the purpose, procedure, equipment, and task requirements to eligible participants. All subjects underwent a pre-training session to familiarize themselves with the control joystick and practice completing the robot transportation task. Following pre-training, each participant took a 10 min rest period to ensure sustained alertness and readiness for the main experiment.
Step 2: Execution of Wall-Climbing Robot Transportation Task and Data Acquisition
Participants were required to wear a safety helmet equipped with a chinstrap secured in the standard configuration. Under controlled experimental conditions, they followed predefined operational guidelines, including remaining at the designated workstation and continuously monitoring the robot. Using the control joystick, participants guided the mobile robot along a specified trajectory to complete the wall-climbing task. The robot’s climbing speed was set at three levels—0.05 m/s, 0.1 m/s, and 0.2 m/s—with three experimental trials conducted at each speed. Throughout the task, the safety helmet continuously collected data from the integrated MEMS sensors, which were transmitted and stored on the host computer.
Step 3: Adjustment of Chinstrap Tightness and Repetition of Experiments
Participants then adjusted the chinstrap to a looser configuration and repeated the procedures outlined in Step 2, with data recorded accordingly. Subsequently, the chinstrap was fully loosened (i.e., unfastened), and the same experimental protocol was executed once more, with data saved for analysis. Each participant completed all three experimental conditions to ensure comprehensive data collection across varying chinstrap tightness levels.

5. Results and Discussion

5.1. Hyperparameter Settings

The ICNN-LSTM network was employed to classify participants’ helmet wearing conditions. The dataset was partitioned into training (70%), validation (10%), and testing (20%) subsets to effectively evaluate model performance. Six-fold cross-validation was conducted to assess the classification capability of the dual-model architecture. The experimental results demonstrated that the proposed fine-tuning strategy, with 30 training epochs, a dropout rate of 0.2, a batch size of 32, and a learning rate of 0.001, achieved the highest classification accuracy.
(1) Learning Rate
The learning rate governs the convergence behavior of the objective function during optimization. An appropriately chosen learning rate facilitates efficient convergence to a local minimum. Based on prior empirical experience, a stepwise decreasing learning rate schedule was adopted, starting at 0.1 and reduced by a factor of 10 after each training phase. As illustrated in Figure 9, the training results indicated that after 30 iterations, with the learning rate reduced to 0.001, the classification accuracy surpassed 90%, outperforming configurations using fixed rates of 0.1 and 0.01. Moreover, both the accuracy and loss curves exhibited greater smoothness. Therefore, considering both convergence efficiency and overall accuracy, a learning rate of 0.001 was determined to be optimal.
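A minimal Keras sketch of this schedule, assuming 10-epoch phases so that the rate steps from 0.1 to 0.01 to 0.001 over the 30 epochs; the phase length is an assumption, since the text does not state where the phase boundaries fall.

```python
from tensorflow import keras

def step_decay(epoch, lr):
    """Start at 0.1 and divide by 10 after each 10-epoch phase (phase length assumed)."""
    return 0.1 * (0.1 ** (epoch // 10))

lr_callback = keras.callbacks.LearningRateScheduler(step_decay)
# model.fit(X, y, epochs=30, callbacks=[lr_callback], ...)
```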
(2) Training Batch Size
The training batch size determines how the dataset is partitioned into smaller subsets for each training iteration. If the batch size is too small, it may lead to prolonged training time and unstable convergence; conversely, if it is too large, it may result in underfitting or memory constraints. To identify the optimal batch size, we initiated the experiments with a size of 8 and progressively increased it. The experimental results, as shown in Figure 10, demonstrated that a batch size of 16 achieved the most balanced classification accuracy across all three categories. Consequently, a batch size of 16 was selected as the optimal configuration.
The other main parameters of the ICNN-LSTM model are detailed in Table 5.

5.2. Comparison Between Different Sensors

To demonstrate the effectiveness of the proposed ICNN-LSTM classification model, data were collected using the three helmet-mounted sensors under three different robot operation speeds. Five widely used classification models were selected for comparative analysis: LSTM, RNN, BP (back-propagation) neural network, RF (Random Forest), and KNN (k-Nearest Neighbors). These models are well established in the field, so detailed descriptions are omitted for brevity.
(1) Data Collected with Single Sensor Installation under Different Conditions
In the single-sensor setting, data from each participant were stored separately according to sensor position, labeled Sensor1, Sensor2, and Sensor3. The experimental tasks were conducted at three predefined robot operation speeds, and the corresponding data were recorded. These datasets were then classified using both the ICNN-LSTM model and the five benchmark models. All models achieved classification accuracy exceeding 64%. As illustrated in Figure 11, the results indicate a clear correlation between the helmet posture sensor data and the chinstrap tightness state. Moreover, the deep learning models demonstrate superior feature extraction capabilities, capturing more discriminative patterns than the shallow learning approaches.
(2) Fusion of Sensor Wear Data
In the application scenario involving fused sensor wear data, the data collected from the three sensors on each participant were concatenated into a unified dataset for classification training. This approach enabled the evaluation of fusion effects across different helmet wearing states. For all experimental tasks, classification performance comparisons were conducted using both the proposed ICNN-LSTM model and the five benchmark models. The results, as presented in Figure 12, indicate that the ICNN-LSTM model maintains excellent classification performance under the multi-sensor fusion setting.
(3) Analysis of Combined Multi-Sensor Ablation Experiments
To assess the effectiveness and contribution of the bionic strategy proposed in this study—based on a multi-sensor perception framework—an ablation study was conducted. Specifically, data collected at a robot operating speed of 0.05 m/s were used to evaluate model performance under various sensor configurations (single sensor, dual sensors, and full three-sensor fusion). The experimental outcomes were systematically documented in Table 6 (where “1” denotes data from Sensor 1 alone, “1 + 2” represents the fusion of Sensors 1 and 2, and so forth). The results show that the ICNN-LSTM model consistently achieves either the best or joint-best performance across nearly all sensor combinations. Moreover, the classification accuracy of other models also improves with the inclusion of additional sensors. These findings confirm that the proposed bionic strategy, together with the deep fusion architecture, effectively and robustly captures discriminative features from core sensors by analyzing vibration signals at varying frequencies. This functional mechanism emulates the role of spider leg bristle structures with differing lengths, thereby validating the efficacy and robustness of the bionic design.

5.3. Confusion Matrix

The classification results corresponding to a robot operating speed of 0.1 m/s (the most frequently utilized speed in practical scenarios) were selected for plotting the confusion matrix, enabling an evaluation of the diagnostic accuracy of various models across the three chinstrap tightness states. As presented in Figure 13, the x-axis and y-axis represent the predicted labels and true labels, respectively. Analysis of the confusion matrix and associated accuracy metrics reveals that RF, KNN, and BP exhibit relatively poor performance in identifying the “Not fastened properly” state, with KNN yielding the lowest accuracy. This can be attributed to the reduced rigidity in the connection between the head and the helmet lining when the chinstrap is loosely fastened, leading to significant interference in the helmet’s oscillation patterns and consequently complicating the extraction of reliable vibration features.
While RNN and LSTM demonstrate notable improvements in overall recognition accuracy, the classification performance for the “Not fastened properly” state remains suboptimal. In contrast, the proposed ICNN-LSTM model achieves superior classification performance across all three states. Both individual class accuracy and overall accuracy are significantly higher compared to those of the other models, highlighting the effectiveness of the improved architecture in capturing discriminative features under varying chinstrap conditions.

6. Conclusions

This study explores the identification of chinstrap tightness states in safety helmets using biomimetic-inspired methodologies. A practical data collection strategy and state classification model were developed. By analyzing the mechanics of safety helmet usage, a MEMS-based posture data acquisition approach inspired by spider leg bristles was proposed. Multiple MEMS posture sensors were mounted on the helmet’s upper surface to monitor dynamic posture and vibration characteristics. To extract relevant features, an ICNN-LSTM feature extraction and classification framework was introduced. This model combines adaptive CNN with LSTM networks to enhance chinstrap tightness state classification.
In single-sensor data classification, the ICNN-LSTM model achieved over 92% accuracy for Sensor 3 under all conditions, outperforming the other classification models in noise robustness. In multi-sensor data classification, the model reached 96% accuracy, demonstrating strong generalization ability. The model is particularly effective with the bristle-inspired multi-sensor acquisition scheme.
Despite strong performance in controlled environments, some challenges remain for practical implementation. The dependency on pre-collected labeled datasets may limit applicability in data-scarce scenarios, and future work should explore unsupervised or semi-supervised learning approaches. Environmental noise and variations in operational conditions can affect sensor accuracy, so efforts should focus on improving sensor design and adapting machine learning algorithms to dynamic conditions. Future research will prioritize integrating sensors seamlessly into helmet structures and conducting comprehensive field testing to validate the method, address operational issues, and guide improvements. This research aims to advance safety helmet monitoring systems for enhanced occupational safety.

Author Contributions

Conceptualization, Y.Q. and Z.W.; methodology, Y.Q.; software, Z.M.; validation, Z.M., H.X. and J.D.; formal analysis, Z.M. and Z.W.; investigation, Z.M., H.X. and J.D.; resources, H.X.; data curation, Z.M. and J.D.; writing—original draft preparation, Z.M.; writing—review and editing, Z.M.; visualization, Z.M.; supervision, X.Z.; project administration, X.Z.; funding acquisition, H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under Grant 2023YFB470460403, the National Natural Science Foundation of China under Grant 51875113, and the Opening Project of the Key Laboratory of Bionic Engineering (Jilin University), Ministry of Education.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and analyzed during the current study are not publicly available because they form part of an ongoing study.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  2. Sandaruwan, A.K.L.; Hettige, B. A Comparative Study on Smart Helmet Systems for Mining Tracking and Worker Safety in the Mining Industry: A Review. ResearchGate. 2023. Available online: https://www.researchgate.net/publication/368845525 (accessed on 1 June 2025).
  3. Occupational Safety and Health Administration. OSHA Announces Switch from Traditional Hard Hats to Safety Helmets to Protect Agency Employees from Head Injuries Better. Occupational Safety and Health Administration News Releases. 2023. Available online: https://www.osha.gov/news/newsreleases/trade/12112023 (accessed on 1 June 2025).
  4. Weaver III, A.; Kemp, J.; Ojiambo, W.; Simmons, A. From Hard Hats to Helmets: The History & Future of Head Protection. Prof. Saf. 2024, 69, 34–43. [Google Scholar]
  5. Arai, K.; Beppu, K.; Ifuku, Y.; Oda, M. Method for Detecting the Appropriateness of Wearing a Helmet Chin Strap at Construction Sites. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 179–185. [Google Scholar] [CrossRef]
  6. Lin, B. Safety Helmet Detection Based on Improved YOLOv8. IEEE Access 2024, 12, 28260–28272. [Google Scholar] [CrossRef]
  7. Song, X.; Zhang, T.; Yi, W. An improved YOLOv8 safety helmet wearing detection network. Sci. Rep. 2024, 14, 17550. [Google Scholar] [CrossRef]
  8. Ansari, S.; Du, H.; Naghdy, F.; Stirling, D. Automatic driver cognitive fatigue detection based on upper body posture variations. Expert Syst. Appl. 2022, 203, 117568. [Google Scholar] [CrossRef]
  9. Jiao, X.; Li, C.; Zhang, X.; Fan, J.; Cai, Z.; Zhou, Z.; Wang, Y. Detection Method for Safety Helmet Wearing on Construction Sites Based on UAV Images and YOLOv8. Buildings 2025, 15, 354. [Google Scholar] [CrossRef]
  10. Meden, B.; Rot, P.; Terhörst, P.; Damer, N.; Kuijper, A.; Scheirer, W.J.; Ross, A.; Peer, P.; Štruc, V. Privacy–enhancing face biometrics: A comprehensive survey. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4147–4183. [Google Scholar] [CrossRef]
  11. Barth, F. Spider mechanoreceptors. Curr. Opin. Neurobiol. 2004, 14, 415–422. [Google Scholar] [CrossRef]
  12. Albert, J.; Friedrich, O.; Dechant, H.E.; Barth, F. Arthropod touch reception: Spider hair sensilla as rapid touch detectors. J. Comp. Physiol. A 2001, 187, 303–312. [Google Scholar] [PubMed]
  13. Ganske, A.S.; Uhl, G. The sensory equipment of a spider—A morphological survey of different types of sensillum in both sexes of Argiope bruennichi (Araneae, Araneidae). Arthropod Struct. Dev. 2018, 47, 144–161. [Google Scholar] [CrossRef]
  14. Kang, D.; Pikhitsa, P.V.; Choi, Y.W.; Lee, C.; Shin, S.S.; Piao, L.; Park, B.; Suh, K.Y.; Kim, T.i.; Choi, M. Ultrasensitive mechanical crack-based sensor inspired by the spider sensory system. Nature 2014, 516, 222–226. [Google Scholar] [CrossRef] [PubMed]
  15. Kim, T.; Hong, I.; Roh, Y.; Kim, D.; Kim, S.; Im, S.; Kim, C.; Jang, K.; Kim, S.; Kim, M. Spider-inspired tunable mechanosensor for biomedical applications. NPJ Flex. Electron. 2023, 7, 12. [Google Scholar] [CrossRef]
  16. Iqbal, S.; Malik, A. A review on MEMS based micro displacement amplification mechanisms. Sens. Actuators A Phys. 2019, 300, 111666. [Google Scholar] [CrossRef]
  17. Righettini, P.; Legnani, G.; Cortinovis, F.; Tabaldi, F.; Santinelli, J. Real Time MEMS-Based Joint Friction Identification for Enhanced Dynamic Performance in Robotic Applications. Robotics 2025, 14, 36. [Google Scholar] [CrossRef]
  18. Han, Y.; Zhou, L.; Jiang, W.; Wang, G. Intelligent wheelchair human–robot interactive system based on human posture recognition. J. Mech. Sci. Technol. 2024, 38, 4353–4363. [Google Scholar] [CrossRef]
  19. D’Amato, E.; Nardi, V.A.; Notaro, I.; Scordamaglia, V. A particle filtering approach for fault detection and isolation of UAV IMU sensors: Design, implementation and sensitivity analysis. Sensors 2021, 21, 3066. [Google Scholar] [CrossRef]
  20. Xiong, L.; Xia, X.; Lu, Y.; Liu, W.; Gao, L.; Song, S.; Yu, Z. IMU-based automated vehicle body sideslip angle and attitude estimation aided by GNSS using parallel adaptive Kalman filters. IEEE Trans. Veh. Technol. 2020, 69, 10668–10680. [Google Scholar] [CrossRef]
  21. Elhashash, M.; Albanwan, H.; Qin, R. A review of mobile mapping systems: From sensors to applications. Sensors 2022, 22, 4262. [Google Scholar] [CrossRef]
  22. Alatise, M.B.; Hancke, G.P. A Review on Challenges of Autonomous Mobile Robot and Sensor Fusion Methods. IEEE Access 2020, 8, 39830–39846. [Google Scholar] [CrossRef]
  23. Iqbal, S.; Shakoor, R.I.; Gilani, H.N.; Abbas, H.; Malik, A.M. Performance analysis of microelectromechanical system based displacement amplification mechanism. Iran. J. Sci. Technol. Trans. Mech. Eng. 2019, 43, 507–528. [Google Scholar] [CrossRef]
  24. Rida, I. Feature extraction for temporal signal recognition: An overview. arXiv 2018, arXiv:1812.01780. [Google Scholar] [CrossRef]
  25. Huang, G. Visual-inertial navigation: A concise review. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9572–9582. [Google Scholar]
  26. Sabir, R.; Rosato, D.; Hartmann, S.; Gühmann, C. LSTM Based Bearing Fault Diagnosis of Electrical Machines using Motor Current Signal. In Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 613–618. [Google Scholar]
  27. Smagulova, K.; James, A.P. A survey on LSTM memristive neural network architectures and applications. Eur. Phys. J. Spec. Top. 2019, 228, 2313–2324. [Google Scholar] [CrossRef]
  28. An, Y.; Zhang, K.; Liu, Q.; Chai, Y.; Huang, X. Rolling Bearing Fault Diagnosis Method Base on Periodic Sparse Attention and LSTM. IEEE Sens. J. 2022, 22, 12044–12053. [Google Scholar] [CrossRef]
  29. Srinivasan, S.; Francis, D.; Mathivanan, S.K.; Rajadurai, H.; Shivahare, B.D.; Shah, M.A. A hybrid deep CNN model for brain tumor image multi-classification. BMC Med. Imaging 2024, 24, 21. [Google Scholar] [CrossRef]
  30. Wang, W.; Chen, Y.; Ghamisi, P. Transferring CNN with adaptive learning for remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
  31. Ma, Z.; Xu, H.; Dou, J.; Qin, Y.; Zhang, X. Improved ICNN-LSTM Model Classification Based on Attitude Sensor Data for Hazardous State Assessment of Magnetic Adhesion Climbing Wall Robots. arXiv 2024, arXiv:2412.20675. [Google Scholar] [CrossRef]
  32. Bathellier, B.; Steinmann, T.; Barth, F.G.; Casas, J. Air motion sensing hairs of arthropods detect high frequencies at near-maximal mechanical efficiency. J. R. Soc. Interface 2012, 9, 1131–1143. [Google Scholar] [CrossRef]
Figure 1. Overall framework of this research.
Figure 2. Bionic design and perception mechanism for helmet wearing detection; (a) micrograph of spider leg setae; (b) simplified model of leg setae; (c) equivalent damping model for helmet wearing; (d) bionic design of helmet setae.
Figure 3. Transfer functions under different chinstrap tightness coefficients k.
Figure 4. Three wearing states and their analogous effects.
Figure 5. Wearable sensory helmet system. (a) MEMS attitude sensors layout; (b) installation position of sensors; (c) sensor coordinate system.
Figure 6. Flowchart of classification for chin strap tightness of helmets.
Figure 7. ICNN-LSTM network architecture.
Figure 8. Wall-climbing robot control system.
Figure 9. Comparison of accuracy and loss during training with different learning rates; (a) accuracy and loss rate with a learning rate of 0.1; (b) accuracy and loss rate with a learning rate of 0.01; (c) accuracy and loss rate with a learning rate of 0.001.
Figure 10. Accuracy rates corresponding to different training batch values for the three labels.
Figure 11. Comparison of classification accuracy for each model using single sensor at different speeds; (a) ICNN-LSTM model; (b) LSTM model; (c) RNN model; (d) BP model; (e) RF model; (f) KNN model.
Figure 12. Comparison of classification accuracy for fused sensor data at different speeds.
Figure 13. Comparison of confusion matrices for different models: (a) ICNN-LSTM model; (b) LSTM model; (c) RNN model; (d) KNN model; (e) RF model; (f) BP model.
Table 1. Sensor installation parameters.

Number | Installation Location | Fiber Rod Diameter (mm) | Fiber Rod Length (mm)
1 | Front | N/A | N/A
2 | Middle | 2 | 50
3 | Rear | 2 | 80
Table 2. MEMS attitude sensor parameters and precision.

Serial Number | Parameters | Precision
1 | Measurement Range | ±2000 °/s
2 | Sampling Frequency | 0.2–100 kHz
3 | Resolution | 0.061 (°/s)/LSB
4 | Static Zero Bias | ±0.5–1 °/s
5 | Temperature Drift | ±0.005–0.015 (°/s)/°C
6 | Sensitivity | ≤0.015 °/s RMS
Table 3. Class names and categories for all feature dimensions.

Feature Dimension | Feature Type | Category
1 | Mean | Time Domain
2 | Standard Deviation | Time Domain
3 | Maximum Value | Time Domain
4 | Minimum Value | Time Domain
5 | Norm | Time Domain
6 | Energy | Time Domain
7 | Kurtosis | Time Domain
8 | Skewness | Time Domain
9 | Simple Mean Absolute Value | Time Domain
10 | Autocorrelation | Time Domain
11 | Autocorrelation Lag 2 | Time Domain
12 | Autocorrelation Lag 3 | Time Domain
13 | Mean Power Frequency | Frequency Domain
14 | Median Frequency | Frequency Domain
15 | Total Power | Frequency Domain
16 | Maximum Power Spectral Density | Frequency Domain
17 | Zero Crossing Rate | Frequency Domain
Table 4. Personal information of participants.

Number | Age | Gender | Height (cm) | Weight (kg) | BMI | Body Fat Percentage (%)
1 | 24 | Female | 168 | 52 | 19.8 | 22
2 | 24 | Male | 175 | 75 | 24.4 | 18
3 | 25 | Male | 176 | 70 | 22.6 | 16
4 | 26 | Male | 180 | 78 | 24.1 | 19
5 | 24 | Male | 182 | 78 | 23.5 | 20
6 | 25 | Male | 177 | 82 | 26.2 | 22
7 | 24 | Male | 175 | 75 | 24.4 | 18
8 | 26 | Male | 180 | 75 | 23.1 | 17
9 | 27 | Male | 180 | 82 | 25.3 | 23
10 | 25 | Male | 178 | 78 | 24.6 | 21
Table 5. Main parameters of the ICNN-LSTM model.

Number | Hyperparameter | Value | Number | Hyperparameter | Value
1 | Conv1D Filters | 64 | 6 | LSTM Units | 64
2 | Conv1D Kernel Size | 3 | 7 | Dropout Rate (LSTM) | 0.2
3 | Conv1D Activation | ReLU | 8 | Dense Layer Units | 32
4 | MaxPooling1D Pool Size | 1 | 9 | Output Activation | Softmax
5 | Dropout Rate (Conv1D) | 0.2 | 10 | Shuffle | True
Table 6. Ablation experiment results.

Model | 1 | 2 | 3 | 1 + 2 | 1 + 3 | 2 + 3 | 1 + 2 + 3
ICNN-LSTM | 0.80 | 0.82 | 0.94 | 0.78 | 0.84 | 0.87 | 0.94
LSTM | 0.68 | 0.75 | 0.87 | 0.72 | 0.75 | 0.78 | 0.86
RNN | 0.68 | 0.64 | 0.86 | 0.68 | 0.73 | 0.76 | 0.86
BP | 0.67 | 0.74 | 0.82 | 0.70 | 0.74 | 0.78 | 0.81
RF | 0.67 | 0.64 | 0.74 | 0.64 | 0.70 | 0.70 | 0.75
KNN | 0.49 | 0.64 | 0.64 | 0.60 | 0.55 | 0.63 | 0.64
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

