Article

Multi-Height and Heterogeneous Sensor Fusion Discriminant with LSTM for Weak Fire Signal Detection in Large Spaces with High Ceilings

by Li Wang 1,2,3,*, Boning Li 1,2,3, Xiaosheng Yu 3,4 and Jubo Chen 3,4

1 Shenyang Fire Science and Technology Research Institute of MEM, Shenyang 110034, China
2 National Engineering Research Center of Fire and Emergency Rescue, Shenyang 110034, China
3 Liaoning Province Key Laboratory of Fire Prevention Technology, Shenyang 110034, China
4 Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(13), 2572; https://doi.org/10.3390/electronics13132572
Submission received: 28 May 2024 / Revised: 25 June 2024 / Accepted: 27 June 2024 / Published: 30 June 2024
(This article belongs to the Special Issue Advances in Mobile Networked Systems)

Abstract

Fire is a significant cause of fatalities and property loss. In tall spaces, early smoke dispersion is hindered by thermal barriers, and initial flames with limited smoke production may be obscured by ground-level structures. Consequently, smoke, temperature, and other fire sensor signals are weakened, leading to delays in fire detection by sensor networks. This paper proposes a multi-height and heterogeneous fusion discriminant model with a multilayered LSTM structure for the robust detection of weak fire signals in such challenging situations. The model employs three LSTM structures with cross inputs in the first layer and an input-weighted LSTM structure in the second layer to capture the temporal and cross-correlation features of smoke concentration, temperature, and plume velocity sensor data. The third LSTM layer further aggregates these features to extract the spatial correlation patterns among different heights. The experimental results demonstrate that the proposed algorithm can effectively expedite alarm response during sparse smoke conditions and mitigate false alarms caused by weak signals.

1. Introduction

Fire is a sudden and extremely destructive catastrophic event that can occur at any time, regardless of location. Residential areas, commercial buildings, and industrial facilities all face the risk of fire outbreaks [1]. With the rapid development of the economy and construction technology, the number of large buildings, such as shopping centers, museums, exhibition centers, and tall factories, has gradually increased [2,3]. In order to provide different commercial spaces, booths, or operational areas for various purposes, such buildings usually include many small compartments with high combustion loads, built from wooden or plastic partitions and other decorative materials or items. Particularly in exhibition buildings, compartments are temporary and frequently dismantled, redesigned, and rearranged for different exhibitions, which leads to neglect of fire safety requirements compared to the permanent parts of the building, increasing fire risk [4,5,6]. Given the high fire risk, complex building structures, and significant internal combustion loads, it is extremely difficult to extinguish a fire once it reaches its fully developed stage. Therefore, early detection and prompt response to fires are crucial for effective fire prevention and control.
Traditional fire detection methods for large spaces with high ceilings include the use of image-based fire detectors or flame detectors for flame detection [7,8,9], as well as air-sampling smoke detectors, point-type smoke detectors installed on the ceiling, or linear beam smoke detectors installed at different heights for smoke detection [10,11,12]. Due to the large field of view and visualization capabilities of image-based detectors, along with the rapid development of deep learning algorithms in recent years, large-scale fire detection methods based on images and deep learning have become a research hotspot [13,14,15,16].
However, both image-based fire detection methods and other flame detection methods must directly detect flames. In large spaces with high ceilings, early-stage flames are usually small and can be obstructed by ground partitions, causing delayed alarms. Detecting early fire smoke in these spaces is also challenging. The internal clearance of large high-ceiling buildings is typically very high, often around 20 m and sometimes up to 30 m. As smoke rises, it becomes fully diluted, and image-based fire detection methods may experience delayed alarms due to insufficient color contrast between the smoke and the background. For other smoke detection methods, the thermal barrier effect causes smoke to rise to a certain height and then spread horizontally, as the smoke’s temperature matches the surrounding air, eliminating lift. This results in weak smoke reaching the detectors, further delaying fire detection [17,18,19]. Therefore, the early detection of fires in large high-ceiling spaces with few smoke obstructions and in smoldering fires where smoke is still relatively thin is a significant challenge in advancing fire alarm systems for these environments.
In the field of fire dynamics, numerous physical models have been proposed and validated to understand the distribution and evolution of smoke and temperature in large spaces with high ceilings [20,21,22]. These models reveal intrinsic correlations and variations in smoke concentration, plume velocity, and temperature at different heights. Modeling such correlations allows us to utilize the relationships between different types of sensors at the same location, between samples taken at the same location but at different times, and between samples taken at different locations using the same type of sensor. This approach can enable early detection in large spaces, addressing issues caused by the insufficient response amplitudes of single sensors at specific positions and times, especially in conditions with sparse smoke. Additionally, leveraging these correlations can help suppress false alarms caused by interfering sources. However, deterministic physical models cannot be directly used to distinguish fire signals in real-world scenarios due to the randomness in the variation in smoke and temperature parameters, which is more pronounced in the early stages of fires when the smoke concentration is very low. Furthermore, while multi-sensor detection has been studied for common residential or office settings [23,24], there have been no reports on using the triple correlation of fire smoke in time, space, and sensor type for the early detection of smoke in large spaces.
Consequently, to empower smart building sensor networks to be able to detect weak fire signals in large spaces with high ceilings, we propose a multi-height and heterogeneous sensor fusion discriminant model with a multilayered LSTM structure. LSTM has the ability to effectively process long series of data and capture temporal features and long-term dependencies, making LSTM suitable for detecting the characteristics of long-term response fire signs acquired by sensors. At the same time, the time complexity and accuracy of LSTM are relatively balanced, which is suitable for low-cost hardware and has the potential to be widely popularized and applied from the perspective of the economic feasibility of an engineering application. This model comprises a pyramidal three-layer LSTM structure that simultaneously models smoke concentration, plume velocity, and plume temperature at two different detection points at varying heights. The first two layers consist of two structurally identical blocks, each corresponding to one height. A novel LSTM structure with a triple cross-fusion modified LSTM unit is designed, with two instances of this structure used as the first layer. Two input-weighted LSTM structures are used as the second layer to capture the temporal and cross-correlation features of smoke concentration, temperature, and plume velocity data at their respective heights. The third LSTM layer further aggregates the outputs from the respective blocks corresponding to each height, extracting the global temporal features of spatial correlation patterns among different heights. After network training with data collected from fire tests, the model effectively achieves implicit modeling of the triple correlation of early smoke for detecting weak sensing signals.
The contents of this paper are organized as follows: The second section introduces relevant work related to the proposed method. The third section provides a detailed description of our multi-height and heterogeneous sensor fusion discriminant method. The fourth section verifies the effectiveness and potential application of the proposed method through experiments. The fifth section presents a discussion and analysis of the results. The sixth section provides a summary of the entire paper.

2. Related Works

2.1. Recurrent Neural Network

Recurrent neural networks (RNNs) are a type of neural network that processes sequence data by recursively evolving through the sequence. Research on RNNs began in the 1980s and 1990s, and they became prominent deep learning algorithms in the early 21st century [25]. Common types of RNNs include bidirectional recurrent neural networks (Bi-RNNs) [26] and long short-term memory networks (LSTMs) [27]. RNNs have the ability to learn the nonlinear characteristics of sequences through memory, parameter sharing, and Turing completeness, making them advantageous in various fields such as natural language processing (NLP), speech recognition, language modeling, machine translation, time series forecasting, and computer vision [28]. In our work, we selected LSTM, known for its exceptional time series feature learning capabilities, to analyze temporal information from sensors to determine the occurrence of a fire.

2.2. Long Short-Term Memory Networks

For a time series data point $x_t$ at the $t$-th second, a standard LSTM can be used to learn a sequence of hidden states $h_t$ describing the dynamics at this moment. The standard LSTM unit consists of three gates and two states: the input gate, output gate, and forget gate, together with the hidden state and cell state.
Input Gate: Data from the input layer first passes through the input gate, which determines whether any information will be input to the memory cell at this moment.
Output Gate: This gate determines whether information is output from the memory cell at any time.
Forget Gate: This gate controls the level of forgetting for the previous state.
Hidden State: The final output of the LSTM unit, which serves as the short-term (working) memory in long short-term memory (LSTM).
Cell State: The internal cell state, which carries the long-term memory in long short-term memory (LSTM).
$$i_t = \sigma\left(W_{xi} x_t + W_{hi} h_{t-1} + b_i\right),$$
$$f_t = \sigma\left(W_{xf} x_t + W_{hf} h_{t-1} + b_f\right),$$
$$o_t = \sigma\left(W_{xo} x_t + W_{ho} h_{t-1} + b_o\right),$$
$$g_t = \phi\left(W_{xc} x_t + W_{hc} h_{t-1} + b_c\right),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$
$$h_t = o_t \odot \phi\left(c_t\right),$$
where $i_t$, $f_t$, $o_t$, $g_t$, $c_t$, and $h_t$ represent the input gate, forget gate, output gate, candidate state, cell state, and hidden state at time $t$, respectively. $W_{*}$ represents the corresponding weight matrix, $b_{*}$ represents the corresponding bias, $\sigma$ represents the sigmoid activation function, $\phi$ represents the hyperbolic tangent tanh(), and $\odot$ represents an element-wise product.
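The six equations above can be collected into a single step function. The following is a minimal NumPy sketch of one standard LSTM step; the parameter containers (dictionaries `W` and `b`) are our own illustrative naming, not the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell following the gate equations above.
    W["i"], W["f"], W["o"], W["c"] hold (W_x, W_h) pairs; b holds the biases."""
    i_t = sigmoid(W["i"][0] @ x_t + W["i"][1] @ h_prev + b["i"])  # input gate
    f_t = sigmoid(W["f"][0] @ x_t + W["f"][1] @ h_prev + b["f"])  # forget gate
    o_t = sigmoid(W["o"][0] @ x_t + W["o"][1] @ h_prev + b["o"])  # output gate
    g_t = np.tanh(W["c"][0] @ x_t + W["c"][1] @ h_prev + b["c"])  # candidate state
    c_t = f_t * c_prev + i_t * g_t   # cell state: element-wise products
    h_t = o_t * np.tanh(c_t)         # hidden state
    return h_t, c_t
```

Because the output gate and the tanh are both bounded, every component of $h_t$ lies in (−1, 1).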

3. Proposed Method

3.1. Framework

We propose the concept of detection point pairs for large spaces with high ceilings. This concept expands the traditional single-point sensor to two sets of sensors distributed vertically. Due to the rise of hot air during combustion, plume velocity is used along with smoke concentration and temperature for early fire detection in each set of sensors. Based on this, the framework of our model is designed as illustrated in Figure 1.
The model is divided into three layers. The temporal fusion layer has two blocks corresponding to two different heights. In each block, there are three modified LSTM structures corresponding to three different sensors, in which the information processing logic of each unit in each structure is designed as follows:
$$D_t^i = \mathrm{D\_LSTM}\left(x_t^i,\; D_{t-1}^i,\; D_{t-1}^j,\; D_{t-1}^k\right),$$
where $x_t^i$ is the data collected by the $i$-th sensor at time $t$, and $D_t^i$ represents the output of the modified LSTM unit corresponding to the $i$-th sensor at time $t$.
In the sensor fusion layer, the operation process at time t can be expressed as
$$R_t = \mathrm{LSTM}\left(\sum_{i=1}^{n} \beta_i D_t^i,\; R_{t-1}\right),$$
where $R_t$ represents the output of the standard LSTM unit at time $t$, $n$ represents the total number of sensor types, and $\beta_i$ is the sensor fusion weight.
In the height fusion layer, the information from the two blocks is fused as
$$F_t = \mathrm{LSTM}\left(\sum_{i=1}^{h} R_t^i,\; F_{t-1}\right),$$
where $F_t$ represents the output of the standard LSTM unit at time $t$, and $h$ represents the total number of sampled heights, which is 2 in this work.
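The three layers in Equations (7)–(9) compose as follows. This is a structural sketch only: `d_lstm`, `lstm_R`, and `lstm_F` stand in for the modified and standard LSTM units and are passed as callables; the names and the simplified state handling are our illustrative assumptions, not the authors' implementation.

```python
def model_step(x_t, state, d_lstm, lstm_R, lstm_F, beta):
    """One time step of the three-layer pyramid.

    x_t[h][i] is the reading of sensor i at height h; state carries the
    previous outputs (D_prev, R_prev, F_prev) of the three layers."""
    D_prev, R_prev, F_prev = state
    # Temporal fusion layer: one modified LSTM unit per sensor, per height.
    D = [[d_lstm(x_t[h][i], D_prev[h]) for i in range(len(x_t[h]))]
         for h in range(len(x_t))]
    # Sensor fusion layer: input-weighted sum of the D outputs per height.
    R = [lstm_R(sum(beta[i] * D[h][i] for i in range(len(D[h]))), R_prev[h])
         for h in range(len(D))]
    # Height fusion layer: sum over heights.
    F = lstm_F(sum(R), F_prev)
    return F, (D, R, F)
```

With identity units, this reduces to a weighted sum over sensors followed by a sum over heights, which makes the data flow of the pyramid easy to trace.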

3.2. Temporal Fusion Layer

The long short-term memory (LSTM) model has proven effective in learning time series features of a single variable. However, relying on a single factor for fire detection can easily result in false alarms. Therefore, it is essential to consider multiple factors and their interrelationships comprehensively. To address this, we propose a novel structure with three modified LSTM units that cross-fuse within the temporal fusion layer. This design learns both the time series characteristics of individual parameters and their relationships with other parameters. Notably, although this layer primarily fuses temporal information from different moments, it also implicitly integrates data from different sensors, as it considers current sensor data in relation to past sensor data from other sensors. The specific structure of the temporal fusion layer unit is illustrated in Figure 2.
At this layer, as shown by Equation (14), we consider both sensor amplitude signals and differential signals: the input $X_t$ to the temperature LSTM unit contains the sensor value at the current moment as well as the sensor differential signals at different time scales; Equation (14) gives an example with nine sampling time spans.
In the triple cross-fusion structure, the cell status of adjacent units is taken into account, which better integrates other influencing factors in the environment. The parameter $\alpha_t^*$ denotes the degree of importance and is obtained by Equation (16). In Figure 2, ⊗ indicates matrix multiplication, ⊕ indicates matrix addition, and ● indicates multiplication between a constant and a matrix.
The calculation processes of the triple cross-fusion unit are as follows:
$$i_t^* = \sigma\left(W_{xi}^* X_t^* + W_{hi}^* h_{t-1}^* + b_i^*\right),$$
$$f_t^* = \sigma\left(W_{xf}^* X_t^* + W_{hf}^* h_{t-1}^* + b_f^*\right),$$
$$o_t^* = \sigma\left(W_{xo}^* X_t^* + W_{ho}^* h_{t-1}^* + b_o^*\right),$$
$$g_t^* = \phi\left(W_{xc}^* X_t^* + W_{hc}^* h_{t-1}^* + b_c^*\right),$$
The symbol $*$ stands for $T$, $S$, or $P$, referring to the LSTM units used to process temperature, smoke concentration, and plume velocity, respectively; $W_{xi}^*$, $W_{xf}^*$, $W_{xo}^*$, $W_{xc}^*$, $W_{hi}^*$, $W_{hf}^*$, $W_{ho}^*$, and $W_{hc}^*$ represent the weight matrices; $b_i^*$, $b_f^*$, $b_o^*$, and $b_c^*$ represent the biases; $\sigma$ represents the sigmoid activation function; and $\phi$ represents the hyperbolic tangent tanh().
$$X_t^T = \left[x_t^T,\; x_t^T - x_{t-1}^T,\; x_t^T - x_{t-2}^T,\; \dots,\; x_t^T - x_{t-9}^T\right],$$
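Building the input vector of Equation (14) from a raw sensor series can be sketched as follows; the helper name and the default spans are illustrative choices:

```python
import numpy as np

def build_input(series, t, spans=range(1, 10)):
    """Assemble X_t for one sensor: the current value plus differential
    signals over several time spans (nine by default, as in Equation (14))."""
    diffs = [series[t] - series[t - k] for k in spans]
    return np.array([series[t]] + diffs)
```

For a linearly rising signal, the $k$-th differential is simply $k$ times the per-second slope, so the vector directly encodes the trend at several scales.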
For adjacent triple cross-fusion units, when calculating unit status, we consider the influence of unit status between adjacent triple cross-fusion units:
$$c_t^* = f_t^* \odot c_{t-1}^* + i_t^* \odot g_t^* + \sum_{m \in \{T,S,P\},\, m \neq *} \alpha_t^m c_t^m,$$
where $c_t^m$ indicates the status of adjacent units, $\alpha_t^m$ indicates the weight of the unit status in adjacent units, and $\odot$ represents an element-wise product. For the adjacent units, the weight $\alpha_t^m$ is calculated as follows:
$$\alpha_t^m = \frac{e^{x_t^m}}{\sum_{\kappa \in \{T,S,P\}} e^{x_t^\kappa}},$$
Finally, the output of the triple cross-fusion unit is
$$h_t^* = o_t^* \odot \phi\left(c_t^*\right),$$
where $h_t^*$ indicates the output of the triple cross-fusion unit for learning the temporal features of the $*$ sensor (temperature/smoke concentration/plume velocity) at time $t$.
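The cross-fusion cell-state update (Equations (15) and (16)) can be sketched with scalar states as below. Note one interpretive assumption on our part: the neighbour terms use the *previous* cell states to avoid a circular dependency at time $t$; the paper does not spell out this ordering.

```python
import numpy as np

SENSORS = ("T", "S", "P")  # temperature, smoke concentration, plume velocity

def cross_fusion_cell_state(gates, c_prev, x_now):
    """Cell-state update of the triple cross-fusion unit, sketched with
    scalar states. gates[m] = (i_t, f_t, g_t) for unit m; c_prev[m] is its
    previous cell state; x_now[m] is the raw sensor value used for alpha."""
    # Softmax over current sensor values gives the neighbour weights alpha_t^m.
    exps = {m: np.exp(x_now[m]) for m in SENSORS}
    Z = sum(exps.values())
    alpha = {m: exps[m] / Z for m in SENSORS}

    c_new = {}
    for m in SENSORS:
        i_t, f_t, g_t = gates[m]
        own = f_t * c_prev[m] + i_t * g_t                 # standard LSTM part
        neighbours = sum(alpha[n] * c_prev[n]             # cross-fusion part
                         for n in SENSORS if n != m)
        c_new[m] = own + neighbours
    return c_new, alpha
```

The softmax guarantees that the neighbour weights sum to one, so the cross terms stay on the same scale as the unit's own state.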

3.3. Sensor Fusion Layer

As shown in Figure 3, in the sensor fusion layer, we use a standard LSTM unit to fuse the outputs from the temporal fusion layer, taking the linear weighting of the $h_t^*$ of the triple cross-fusion units as the input $x_t^F$ to the standard LSTM unit according to Equation (8). This is equivalent to employing a linear layer bridging the temporal fusion layer and the sensor fusion layer.

3.4. Height Fusion Layer

Based on the laws of combustion of substances and experimental data, smoke concentration, temperature, and plume velocity are all expected to show an increasing trend during the combustion of substances. Therefore, highly similar trends in data collected at multiple heights are strongly correlated with the occurrence of combustion.
Therefore, we sum the outputs of the sensor fusion layer at different heights following Equation (9) to obtain the input $x_t^F$ for the height fusion layer units.
Finally, the output of the height fusion layer is obtained by Equation (18).
$$y_t = \frac{\exp\!\big(FC(F_t)\big)}{\sum_{i=1}^{2} \exp\!\big(FC(F_t)_i\big)},$$
where a fully connected layer ($FC$) is used to convert $F_t$ into a two-dimensional vector, and the softmax function is then applied to obtain the alarm result at this time.
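Equation (18) is a two-class softmax over a fully connected read-out; a minimal sketch follows, with illustrative FC parameters (the weight matrix and bias are placeholders, not trained values):

```python
import numpy as np

def alarm_output(F_t, W_fc, b_fc):
    """Map the height fusion output F_t to fire / no-fire probabilities:
    a fully connected layer produces two logits, then softmax normalizes."""
    logits = W_fc @ F_t + b_fc           # FC(F_t), shape (2,)
    z = np.exp(logits - logits.max())    # numerically stabilized softmax
    return z / z.sum()
```

Subtracting the maximum logit before exponentiating leaves the result unchanged but avoids overflow for large activations.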
The specific situation of aggregation of sensor fusion layer data at the height fusion layer is shown in Figure 4.

4. Experiments

4.1. Production of Datasets

To evaluate the early fire signal modeling capability of our proposed model, we conducted fire tests based on the requirements for early detection in large spaces with high ceilings, as shown in Figure 5. The internal space, measuring 100 m in length and 30 m in width, is divided into high and low areas. The high area has a height of 30 m, while the low area is 18 m high. The windows of the entire space were closed, with a natural ventilation outlet at the top. This building, with a ceiling height exceeding 12 m, fits the category of a typical tall space [29,30]. Three data sampling heights were set at 12.5 m, 15.5 m, and 18.5 m to represent possible sensor installation points. We used two fire sources to collect data: the ISO 7240-9 cotton rope smoldering fire and the polyurethane flame fire [31]. These fires, being common test fires for evaluating fire detection method response performance, are small in scale and fully representative of early fire conditions.
We collected data from these two fire sources to train and validate the model. Sampling points were placed above the fire sources, using point-type photoelectric sensors, copper–nickel–copper thermocouples, and thermal air velocity probes to gather three types of sensing data at 1 Hz at each height. Smoke concentration was measured by the ADC-converted value of the photoelectric scattering intensity from the point-type photoelectric sensor. A set of examples of different types of sensor signals at different heights is given in Appendix A. We classified data collected during the combustion phase as positive samples, while data from other states, including daily conditions and dusty situations, were considered negative samples. This resulted in a dataset of 35,960 samples, with 18,920 alarm data and 17,980 non-alarm data. The dataset was divided into a training set and a validation set. The validation set was further subdivided into two parts: the first part was from the same batch of fire trials as the training set, while the second part was from a different batch of fire tests. Before the data were input into the network model, background tracking and background subtraction were carried out. The extracted foreground data were then standardized through maximum normalization, converting the dimensioned values into dimensionless ones, so that the data from different sensors share the same scale and distribution, which improves the performance and generalization ability of the model.
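The preprocessing described above — background subtraction followed by maximum normalization — might look like the following; the running-mean background estimate and the 60-sample window are our illustrative assumptions, as the paper does not specify the tracking method:

```python
import numpy as np

def preprocess(raw, background_window=60):
    """Subtract a tracked background and max-normalize the foreground so
    that data from different sensors share a dimensionless scale."""
    background = raw[:background_window].mean()  # simple background estimate
    foreground = raw - background
    peak = np.abs(foreground).max()
    return foreground / peak if peak > 0 else foreground
```

After this step, every sensor channel has zero background level and a peak magnitude of one, regardless of its original units.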

4.2. Implementation Details

The proposed model was built on the PyTorch toolbox and implemented on a computer cluster with an Nvidia A2000 (12 GB) GPU. We employed the Adam algorithm [32] to optimize the proposed model with the default parameters, namely $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\varepsilon = 0.1 \times 10^{-4}$. The learning rate and batch size on the dataset were set to 0.001 and 10, respectively. The number of iterations was set to 50. A total of eight algorithms were tested as baselines alongside our algorithm: Random Forest (RF), Support Vector Machine (SVM) with a radial basis kernel, Logistic Regression (LR), Fully Connected Networks (FCNNs) with three fully connected layers, Convolutional Neural Networks (1D CNNs) with three convolutional layers, standard LSTM (LSTM), multilayer LSTM (MLSTM) with three standard LSTM layers [23], and an Environmental Information Fusion LSTM model (EIFLSTM) [24]. All algorithms were optimized by adjusting their hyperparameters.
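For reference, one Adam update with the hyperparameters above (learning rate 0.001, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\varepsilon = 0.1 \times 10^{-4}$) can be written out as a minimal sketch; in practice the model uses PyTorch's built-in optimizer, not this code:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=0.1e-4):
    """One Adam parameter update (t is the 1-based step count)."""
    m = b1 * m + (1 - b1) * grad             # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```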
For the models RF, LR, SVM, FCNN, 1D CNN, LSTM, and MLSTM, the input vectors are concatenated from different sensor data or multi-sensor data at various heights. Although different datasets will affect the weights within the model or network matrix dimensions, the model structure remains unchanged. In contrast, for EIFLSTM and our proposed model, the network structure must be pruned according to the type of input data when the model works on a single height and single sensor type dataset. These networks do not process individual data by concatenating them but have different structures for different types of sensor data. When specific sensor data are unavailable, the corresponding structure is pruned. For EIFLSTM, the model width varies depending on whether one, two, or three sensor types are used, while data from different heights are concatenated and processed together. For our proposed algorithm, both the number of heights and sensor types affect the network structure. With a single height, the network width narrows; with two sensors, the triple fusion structure reduces to a double fusion structure; and with one sensor, the entire network simplifies to a two-layer LSTM structure.

4.3. Results

We initially tested the algorithm’s fire state identification capability using the first fire validation dataset. Both the single-sensor data discrimination and multi-sensor data discrimination performances were evaluated. The results are presented in Table 1 and Table 2. Table 1 shows the accuracy of algorithms using sensing data at a single height, while Table 2 displays the accuracy of algorithms using multi-height sensing data combinations at 12.5–15.5 m, 12.5–18.5 m, and 15.5–18.5 m. The initials T (temperature), S (smoke), and P (plume velocity) denote the corresponding sensing data and their combinations. Additionally, we tested the algorithms on the second validation dataset. Table 3 presents the accuracy results for the single-height set, and Table 4 shows the accuracy results for the multi-height set. These experiments demonstrate the algorithms’ ability to accurately identify the fire state at any given moment.
We took the average results of the above experiments; divided the algorithms into LSTM-based, traditional machine learning, and other deep learning classes for comparison; and plotted visualization charts, as shown in Figure 6, to provide a more indicative visual representation. The results in Figure 6 show that the LSTM-based detection methods achieved the best results in all four tables, confirming the effectiveness of the temporal features of sensor data for detecting fires in large-space buildings.
Additionally, we conducted experiments to evaluate the timeliness of fire alarms using various algorithms. We collected time-series data from the ignition in these validation sets at the height combination of 12.5 m and 18.5 m, as the average accuracy of all algorithms was highest at this combination in the first validation set. Alarm judgments were made sequentially, employing a common fire detector alarm logic: an alarm is triggered when the algorithm identifies a fire state in more than a specific proportion of 10 consecutive judgments, with the alarm time measured from the moment of ignition. In addition to the nine algorithms tested in previous experiments, we also evaluated the threshold judgment method for single sensing data of smoke concentration or temperature, representing the most commonly used fire detectors today. For smoke concentration, we set 30% of the full scale at 8-bit sampling, i.e., 80, as the smoke alarm threshold, which aligns with common point-type photoelectric smoke detector products with response thresholds ranging from 0.2 to 0.5 dB/m. For temperature, we selected a temperature rise rate of 10 °C/min as the temperature alarm threshold, reflecting the most sensitive scenario for A1R-type point-type temperature-sensing fire detectors, as specified in ISO 7240-5 [33]. An alarm is triggered whenever data exceed the threshold value at any height for these two traditional threshold methods. The test results are shown in Figure 7, where SD denotes a smoke detector and HD denotes a heat detector.
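The alarm logic above — trigger when more than a given proportion of the last 10 per-second judgments indicate fire — can be sketched as follows; the 0.5 default proportion is an illustrative placeholder, as the exact proportion used is not stated here:

```python
from collections import deque

def alarm_monitor(judgments, window=10, proportion=0.5):
    """Return the index (seconds from ignition) of the first alarm, or None.
    An alarm fires when more than proportion * window of the last `window`
    judgments are positive."""
    recent = deque(maxlen=window)
    for t, is_fire in enumerate(judgments):
        recent.append(is_fire)
        if len(recent) == window and sum(recent) > proportion * window:
            return t
    return None
```

The sliding window suppresses isolated false positives: a single spurious fire judgment never triggers an alarm on its own.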
An image-based fire detector was installed at a height of 12 m on the sidewall of the experimental space. In the experiment, we obstructed its lower field of view, and the smoke had already been diluted by the time it reached the visible range, as shown in Figure 8, where (a) shows the thin smoke under smoldering conditions and (b) shows the smoke generated under open-flame combustion. Faced with such thin smoke, this image-based fire detector did not raise an alarm.

5. Discussion

Overall, our proposed algorithm achieved the highest accuracy in most individual tests and the highest average accuracy across all tests. In the single-height single-sensor test, represented by T, S, and P in Table 1, our algorithm did not demonstrate a significant advantage, maintaining accuracy similar to that of other algorithms. This is attributed to the degradation of the algorithm’s performance due to pruning of the network structure in the single-height single-sensor case. In the subsequent multi-height fusion, multi-sensor fusion, and multi-height multi-sensor fusion tests in Table 1, Table 2 and Table 3, our proposed algorithm consistently achieved the highest accuracy.
Figure 9 illustrates the average accuracy improvements of different models on the first validation set as sensor types and sampling heights increase. The data indicate that all algorithms exhibit an increase in average accuracy, highlighting the benefits of using multiple sensor types and varying sampling heights for early fire detection. Excluding the RF and FCNN models, which show the highest accuracy improvement due to their initially low average accuracy, our proposed algorithm achieves the highest accuracy improvement among the seven models, with comparable initial accuracy. This underscores the effectiveness of our fusion structure. Furthermore, compared to various LSTM variants, our algorithm demonstrates significant performance improvement, suggesting that cross-fusion between multiple sensors is advantageous.
Figure 10 shows the average accuracy gain of different algorithmic models on the second validation set compared to the first validation set. The first validation set is sampled from the same fire test as the training data, assuming they share the same distribution. In contrast, the second validation set is sampled from different fire tests. Despite consistent fuel and combustion processes, the randomness in room air conditions and smoke dispersion leads to distribution differences between the two sets. On the second validation set, the average accuracy of the models decreases. Notably, our proposed algorithm experiences the least accuracy reduction, with the EIFLSTM and LR models following closely. Other algorithms show a greater degree of degradation. The LR model, a generalized linear regression model, captures more general patterns, resulting in moderate performance on the first validation set but minimal degradation on the second. Our proposed model and the EIFLSTM model incorporate differential information, leading to lower accuracy degradation. This highlights the importance of differential information in identifying different fire source data, which other models fail to process effectively. Unlike the EIFLSTM algorithm, our proposed model utilizes multi-scale temporal differential information, rather than only considering differences between adjacent sampling times.
Our model achieved the shortest average alarm times in both the smoldering and flame fire tests. In the smoldering fire tests, the smoke detector triggered an alarm, whereas the heat detector did not. In the flame fire tests, neither traditional detector signaled an alarm, highlighting the limitations of threshold-based methods for detecting weak signals in smoke and temperature variables. All machine learning algorithms successfully triggered alarms. Overall, temporal models outperformed non-temporal models, likely due to their ability to capture underlying trends in past signals. Notably, the EIFLSTM and our proposed model, both of which incorporate differential information, ranked highest in alarm time performance.
Our experiments demonstrate that the proposed algorithm, which integrates temporal variations across different sensors and patterns at varying heights while maintaining a concise structure that considers both scaled and differential signals, responds effectively to weak signals in the early stages of a fire and achieves superior performance in tests. However, as a machine learning method, its performance is inherently data-driven. Consequently, when the dataset does not adequately cover various data distribution scenarios, the model’s generalization performance deteriorates, as seen with the second validation set. Physical fire test data collection is costly and inherently limited, making it challenging to obtain comprehensive datasets. Therefore, a key improvement for future work involves using numerical simulations of fire dynamics to generate datasets for smoldering and flame fires under various conditions via virtual scenarios. This approach aims to cover all possible data distributions, with transfer learning strategies used to calibrate the model’s performance in real-world settings. Additionally, the current manual extraction of differential information could be enhanced by designing a network structure that automates this extraction process.

6. Conclusions

In this paper, we propose a method for detecting early weak fire signals. Our approach models smoke, temperature, and air plume velocity using a cross-LSTM cell structure, implicitly considering the cross-timing relationships between sensors. This method employs hierarchical LSTM fusion to integrate the results from different sensor types and heights. We tested its performance on both smoldering and flame fires, comparing it with conventional fire detection methods; traditional machine learning algorithms; and various deep learning techniques, including LSTM and its variants. The experimental results demonstrate the effectiveness of our proposed network structure in state discrimination using multiple weak sensing signals and highlight its potential application in buildings with high ceilings. Future improvements will focus on enhancing the dataset through a combination of numerical simulations and physical experiments, as well as developing automated network structures with multi-scale time-series features.

Author Contributions

Conceptualization, method frameworks, fire experiments, L.W. and B.L.; guidance, software, X.Y., J.C., and B.L.; original draft preparation, L.W.; review and editing, B.L. and X.Y.; project administration, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, 2021YFC3001603, and Fundamental Research Funds for Central Non-profit Scientific Institution, CA518.

Data Availability Statement

The data are available from the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1 and Figure A2 show a set of examples of the different sensor signals at different heights. The three sensor types exhibit similar trends in the polyurethane open-flame fire test. Although less pronounced than in the open-flame test, a correlation between smoke concentration and plume velocity is also present in the cotton-rope smoldering fire test. The smoke temperature in the smoldering test is a special case: the power of the smoldering fire in our test was too small to generate sufficient heat, so the smoke plume had already cooled to the ambient temperature by the time it reached the upper sensors. Consequently, the smoke-temperature curves of the smoldering fire in Figure A2 mainly reflect the downward trend of the ambient temperature. Nevertheless, the smoldering fire may slow this decline, and data-driven algorithms can mine such subtle hidden correlations.
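The kind of inter-channel correlation described above can be quantified with a simple correlation coefficient. The traces below are synthetic and purely illustrative, not the experimental data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)

# Synthetic open-flame-like traces: smoke concentration and plume
# velocity both rise with fire growth, plus independent sensor noise.
smoke = t**2 + 0.05 * rng.normal(size=t.size)
plume = 0.8 * t**2 + 0.05 * rng.normal(size=t.size)
noise_only = 0.05 * rng.normal(size=t.size)  # a channel with no fire signal

r_fire = np.corrcoef(smoke, plume)[0, 1]      # strong positive correlation
r_none = np.corrcoef(smoke, noise_only)[0, 1]  # near zero
```

A learned model can exploit weaker versions of such correlations, such as the slowed temperature decline in the smoldering case, that a threshold on any single channel would miss.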
Figure A1. A set of examples of different types of sensor signals at different heights of the open flame fire test.
Figure A2. A set of examples of different types of sensor signals at different heights of the smoldering fire test.

Figure 1. The framework of the multi-height and heterogeneous sensor fusion discriminant model.
Figure 2. Construction of the temporal fusion layer unit by taking the temperature LSTM unit as an example.
Figure 3. Sensor fusion layer unit.
Figure 4. Height fusion layer unit.
Figure 5. Experimental setup for dataset collection.
Figure 6. Average accuracy of different algorithms: (a) accuracy of first val-set of single-height data; (b) accuracy of first val-set of multi-height data; (c) accuracy of second val-set of single-height data; (d) accuracy of second val-set of multi-height data.
Figure 7. The average alarm time of the models: (a) smoldering fire tests; (b) flame fire tests.
Figure 8. The performance of image-based fire detectors in the presence of thin smoke: (a) smoke under the smoldering condition; (b) smoke under the open flame combustion condition.
Figure 9. Average accuracy improvement gain of different models on the first validation set with increasing sensor types and with increasing number of sampled heights.
Figure 10. Gain of average accuracy of different algorithms on the second validation set compared to the first validation set.
Table 1. Discrimination accuracy (%) of the first validation set of single-height sensing data. Columns T, S, P, TS, SP, TP, and TSP denote the sensor combination used (T: temperature; S: smoke; P: plume velocity); the last column is the average accuracy.

12.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 83.72 | 84.36 | 69.79 | 86.28 | 88.4  | 83.19 | 92.34
SVM     | 82.45 | 82.66 | 73.19 | 82.66 | 85.53 | 83.3  | 83.3
LR      | 80.21 | 80.43 | 72.87 | 80.43 | 80.96 | 83.09 | 79.57
FCNN    | 82.23 | 83.51 | 72.55 | 85.43 | 87.55 | 83.4  | 90.32
LSTM    | 80.52 | 74.94 | 84    | 82.57 | 80.36 | 85.5  | 83.52
1D CNN  | 80.25 | 79.15 | 84.12 | 86.27 | 85.4  | 84.9  | 87.5
MLSTM   | 80.52 | 74.94 | 84.13 | 82.54 | 85.38 | 85.57 | 86.49
EIFLSTM | 83.47 | 85.94 | 84.23 | 86.92 | 88.46 | 86.52 | 93.21
Ours    | 84.03 | 84.23 | 84.18 | 89.15 | 91.65 | 89.02 | 94.06

15.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 83.19 | 73.62 | 77.45 | 82.23 | 90.32 | 81.28 | 91.49
SVM     | 82.34 | 72.87 | 75.96 | 81.49 | 80.74 | 82.13 | 82.13
LR      | 80.21 | 72.87 | 72.87 | 80.64 | 72.87 | 82.02 | 82.02
FCNN    | 80.21 | 73.3  | 77.98 | 83.72 | 86.28 | 82.98 | 88.19
LSTM    | 80.52 | 74.94 | 84.04 | 82.47 | 82.38 | 85.6  | 83.49
1D CNN  | 80.25 | 75.23 | 83.71 | 86.72 | 85.04 | 83.3  | 87.36
MLSTM   | 80.52 | 77.87 | 84.2  | 82.54 | 74.94 | 85.66 | 87.2
EIFLSTM | 84.98 | 79.94 | 84.74 | 86.82 | 87.97 | 86.67 | 92.2
Ours    | 84.33 | 80.43 | 84.16 | 89.19 | 90.04 | 89.76 | 93.26

18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP   | Avg.
RF      | 82.77 | 76.38 | 80.11 | 87.55 | 85.21 | 84.04 | 88.51 | 83.44
SVM     | 80.32 | 77.45 | 82.13 | 86.6  | 86.28 | 82.45 | 87.66 | 81.6
LR      | 80.21 | 77.02 | 72.13 | 82.45 | 76.28 | 82.02 | 86.49 | 78.94
FCNN    | 80.21 | 76.17 | 81.6  | 85.53 | 85.11 | 84.68 | 87.55 | 82.79
LSTM    | 80.52 | 74.94 | 83.94 | 82.5  | 81.26 | 85.57 | 84.39 | 81.8
1D CNN  | 80.25 | 78.97 | 83.99 | 86.54 | 85.17 | 85.22 | 87.5  | 83.66
MLSTM   | 80.52 | 78.61 | 84.16 | 82.41 | 85.06 | 85.7  | 86.14 | 82.62
EIFLSTM | 83.34 | 80.94 | 84.49 | 88.21 | 86.87 | 86.28 | 88.92 | 86.24
Ours    | 82.97 | 80.76 | 84.43 | 89.75 | 89.43 | 88.12 | 91.04 | 87.33
Table 2. Discrimination accuracy (%) of the first validation set of multi-height sensing data. Columns T, S, P, TS, SP, TP, and TSP denote the sensor combination used (T: temperature; S: smoke; P: plume velocity); the last column is the average accuracy.

12.5 m and 15.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 83.51 | 87.23 | 82.34 | 91.91 | 96.49 | 85.11 | 97.23
SVM     | 82.34 | 85.64 | 75.96 | 87.34 | 91.38 | 83.62 | 88.83
LR      | 81.91 | 81.38 | 73.62 | 81.06 | 81.49 | 83.09 | 80.64
FCNN    | 82.45 | 87.77 | 82.66 | 88.94 | 93.51 | 85.74 | 94.47
LSTM    | 80.52 | 83.97 | 79.47 | 84.99 | 85.12 | 84.61 | 85.57
1D CNN  | 83.53 | 83.35 | 80.34 | 88.55 | 87.59 | 84.31 | 90.92
MLSTM   | 83.49 | 83.72 | 81.29 | 83.97 | 84.61 | 84.61 | 85.31
EIFLSTM | 83.99 | 89.21 | 83.32 | 92.06 | 96.65 | 85.12 | 97.65
Ours    | 84.56 | 90.78 | 84.57 | 93.29 | 97.47 | 87.43 | 98.14

12.5 m and 18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 83.83 | 87.55 | 82.98 | 91.7  | 96.28 | 87.77 | 97.66
SVM     | 82.34 | 85.64 | 84.15 | 86.81 | 90.64 | 83.62 | 89.04
LR      | 82.45 | 83.72 | 77.34 | 86.81 | 83.62 | 83.3  | 85.74
FCNN    | 82.55 | 85.11 | 85.96 | 86.06 | 92.55 | 86.7  | 94.68
LSTM    | 83.59 | 85.57 | 85.89 | 86.81 | 87.99 | 87.29 | 88.92
1D CNN  | 83.35 | 85.58 | 84.44 | 88.09 | 90.19 | 86.41 | 90.37
MLSTM   | 83.14 | 85.09 | 86.78 | 87.1  | 87.36 | 87.29 | 86.85
EIFLSTM | 84.42 | 86.5  | 87.81 | 92.79 | 97    | 87.79 | 98.52
Ours    | 86.19 | 88.26 | 89.13 | 93.82 | 98.15 | 89.93 | 98.71

15.5 m and 18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP   | Avg.
RF      | 84.26 | 81.81 | 85.53 | 89.68 | 96.49 | 87.23 | 95.96 | 89.17
SVM     | 82.13 | 77.45 | 82.02 | 86.38 | 89.04 | 82.87 | 87.77 | 85
LR      | 81.91 | 76.91 | 76.91 | 86.17 | 81.7  | 82.45 | 87.02 | 81.87
FCNN    | 82.02 | 78.09 | 85.85 | 87.02 | 94.04 | 88.4  | 94.47 | 87.57
LSTM    | 80.52 | 78.26 | 85.34 | 83.05 | 87.87 | 85.66 | 84.48 | 84.55
1D CNN  | 82.89 | 78.83 | 85.17 | 86.59 | 89.6  | 85.36 | 89.32 | 85.94
MLSTM   | 80.52 | 78.74 | 85.47 | 84.2  | 88.86 | 86.24 | 88.7  | 84.92
EIFLSTM | 84.79 | 82.94 | 86.89 | 89.99 | 96.97 | 88.79 | 96.49 | 89.98
Ours    | 86.16 | 83.2  | 88.47 | 92.87 | 97.98 | 91.67 | 98.15 | 91.38
Table 3. Discrimination accuracy (%) of the second validation set of single-height sensing data. Columns T, S, P, TS, SP, TP, and TSP denote the sensor combination used (T: temperature; S: smoke; P: plume velocity); the last column is the average accuracy.

12.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 67.91 | 51.38 | 68.86 | 62.29 | 55.99 | 69.14 | 58.48
SVM     | 67.7  | 50.51 | 68.37 | 67.88 | 49.35 | 68.4  | 68.4
LR      | 71.58 | 51.14 | 68.16 | 63.37 | 51.07 | 68.37 | 60.43
FCNN    | 67.63 | 50.19 | 69.98 | 62.25 | 59.35 | 68.4  | 60.82
LSTM    | 71.72 | 68.05 | 66.76 | 72.81 | 69.70 | 66.93 | 67.6
1D CNN  | 71.72 | 68.93 | 65.29 | 71.13 | 66.41 | 66.31 | 67.81
MLSTM   | 71.72 | 68.89 | 65.64 | 73.19 | 66.9  | 65.36 | 69.87
EIFLSTM | 73.76 | 70.82 | 71.42 | 73.95 | 70.1  | 71.72 | 70.43
Ours    | 76.12 | 74.04 | 74.03 | 76.43 | 75.85 | 75.19 | 74.76

15.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 67.56 | 58.83 | 50.12 | 65.61 | 36.84 | 67.88 | 35.69
SVM     | 67.56 | 68.05 | 67.32 | 67.81 | 42.01 | 68.12 | 68.16
LR      | 71.72 | 68.05 | 68.3  | 71.37 | 68.26 | 68.19 | 68.19
FCNN    | 71.72 | 57.99 | 50.47 | 66.97 | 33.45 | 69.98 | 33.34
LSTM    | 71.72 | 68.05 | 66.45 | 73.23 | 66.45 | 66.86 | 68.93
1D CNN  | 71.72 | 68.05 | 64.7  | 72.14 | 66.76 | 67.84 | 68.93
MLSTM   | 71.72 | 68.16 | 64.17 | 73.05 | 68.05 | 64.63 | 67.35
EIFLSTM | 72.02 | 69.91 | 69.6  | 73.95 | 71.02 | 70.1  | 71.72
Ours    | 73.97 | 72.85 | 73.28 | 76.05 | 74.14 | 75.8  | 74.82

18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP   | Avg.
RF      | 67.56 | 59.38 | 51.17 | 27.54 | 28.77 | 50.47 | 28.28 | 53.8
SVM     | 67.84 | 53.58 | 59.25 | 28.28 | 28.1  | 71.41 | 28.17 | 58.39
LR      | 71.72 | 50.23 | 71.41 | 71.23 | 54.74 | 68.89 | 45.4  | 64.37
FCNN    | 71.72 | 32.96 | 56.52 | 30.86 | 28.7  | 57.92 | 30.48 | 53.89
LSTM    | 71.72 | 68.05 | 65.82 | 73.05 | 70.43 | 66.76 | 68.54 | 69.03
1D CNN  | 71.72 | 68.47 | 65.82 | 71.9  | 66.03 | 66.97 | 67.6  | 68.39
MLSTM   | 71.72 | 68.26 | 66.66 | 72.04 | 65.33 | 62.67 | 71.76 | 68.44
EIFLSTM | 72.2  | 69.74 | 72    | 72.72 | 71.37 | 72.06 | 71.85 | 71.55
Ours    | 75.39 | 73.97 | 75.73 | 75.3  | 75.41 | 76.6  | 75.88 | 75.29
Table 4. Discrimination accuracy (%) of the second validation set of multi-height sensing data. Columns T, S, P, TS, SP, TP, and TSP denote the sensor combination used (T: temperature; S: smoke; P: plume velocity); the last column is the average accuracy.

12.5 m and 15.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 67.63 | 53.48 | 49.11 | 61.45 | 43.31 | 55.54 | 45.3
SVM     | 67.67 | 51    | 67.32 | 67.74 | 41.38 | 68.37 | 66.83
LR      | 67.84 | 52.46 | 67.53 | 66.59 | 53.06 | 68.44 | 63.58
FCNN    | 67.7  | 52.36 | 46.52 | 67.6  | 51.49 | 56.52 | 47.61
LSTM    | 71.72 | 53.55 | 54.74 | 72.77 | 55.19 | 68.33 | 58.9
1D CNN  | 67.6  | 51.87 | 52.74 | 58.86 | 54.14 | 68.47 | 59.31
MLSTM   | 68.33 | 54.25 | 49.95 | 69.49 | 53.62 | 68.47 | 65.54
EIFLSTM | 72.69 | 54.95 | 68.12 | 70.12 | 65.15 | 69.33 | 66.86
Ours    | 76.4  | 61.1  | 71.82 | 74.13 | 69.28 | 73.34 | 70.33

12.5 m and 18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP
RF      | 67.81 | 58.97 | 62.95 | 66.52 | 64.56 | 63.47 | 66.55
SVM     | 67.7  | 51.17 | 67.49 | 67.77 | 65.71 | 69.17 | 69.56
LR      | 66.69 | 60.57 | 66.31 | 66.13 | 56.66 | 69.87 | 64.14
FCNN    | 66.97 | 60.22 | 64.52 | 67.14 | 64.87 | 62.15 | 67.14
LSTM    | 67.32 | 54.42 | 66.1  | 69.21 | 56.9  | 66.52 | 65.78
1D CNN  | 67.42 | 55.51 | 64.94 | 60.54 | 59.73 | 67.84 | 55.02
MLSTM   | 67.49 | 54.56 | 65.01 | 68.12 | 60.54 | 66.66 | 68.86
EIFLSTM | 68.58 | 61.81 | 68.46 | 71.06 | 70.57 | 70.26 | 69.7
Ours    | 72.42 | 65.18 | 72.64 | 75.05 | 74.81 | 73.43 | 74.82

15.5 m and 18.5 m
Model   | T     | S     | P     | TS    | SP    | TP    | TSP   | Avg.
RF      | 67.25 | 53.2  | 58.02 | 60.05 | 39.67 | 62.32 | 51.97 | 58.05
SVM     | 67.67 | 68.23 | 65.78 | 67.74 | 54.39 | 68.23 | 68.37 | 64.25
LR      | 66.38 | 67.21 | 71.93 | 66.45 | 69.98 | 70.01 | 68.47 | 65.25
FCNN    | 66.9  | 56.76 | 62.32 | 60.78 | 46.91 | 63.33 | 49.88 | 59.51
LSTM    | 71.72 | 68.12 | 61.9  | 71.72 | 45.26 | 66.76 | 63.54 | 63.36
1D CNN  | 66.55 | 67.88 | 61.94 | 68.89 | 40.16 | 67.67 | 59.94 | 60.81
MLSTM   | 71.72 | 68.16 | 63.37 | 71.72 | 49.46 | 64.49 | 54.56 | 63.07
EIFLSTM | 73.27 | 69.05 | 72.92 | 72.19 | 71.27 | 70.6  | 71.72 | 68.98
Ours    | 76.41 | 74.43 | 76.58 | 75.74 | 75.41 | 74.55 | 76.94 | 73.36