Review

Acoustic Waves and Their Application in Modern Fire Detection Using Artificial Vision Systems: A Review

by Jacek Lukasz Wilk-Jakubowski 1,*, Valentyna Loboichenko 2,3, Mikhail Divizinyuk 4, Grzegorz Wilk-Jakubowski 5,6, Roman Shevchenko 7, Stefan Ivanov 8 and Viktor Strelets 9

1 Department of Information Systems, Kielce University of Technology, 7 Tysiąclecia Państwa Polskiego Ave., 25-314 Kielce, Poland
2 Departamento de Ingeniería Energética, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Camino de los Descubrimientos s/n., 41092 Sevilla, Spain
3 Department of Civil Security, Lutsk National Technical University, Lvivska St., 75, 43000 Lutsk, Ukraine
4 Center for Information-Analytical and Technical Support of Nuclear Power Facilities Monitoring of the National Academy of Sciences of Ukraine, 34A, Academician Palladin Ave., 03142 Kyiv, Ukraine
5 Institute of Internal Security, Old Polish University of Applied Sciences, 49 Ponurego Piwnika Str., 25-666 Kielce, Poland
6 Institute of Crisis Management and Computer Modelling, 28-100 Busko-Zdrój, Poland
7 Science and Innovation Center, National University of Civil Defence of Ukraine, 94 Chernyshevska Str., 61023 Kharkiv, Ukraine
8 Department of Automation, Information and Control Systems, Technical University of Gabrovo, Hadji Dimitar 4, 5300 Gabrovo, Bulgaria
9 Institute of Public Administration and Research in Civil Protection, Vyshhorodska St., 21, 04074 Kyiv, Ukraine
* Author to whom correspondence should be addressed.
Sensors 2025, 25(3), 935; https://doi.org/10.3390/s25030935
Submission received: 12 January 2025 / Revised: 23 January 2025 / Accepted: 31 January 2025 / Published: 4 February 2025
(This article belongs to the Section Communications)

Abstract:
This paper presents information on the propagation patterns of acoustic waves and their practical application, in particular in modern fire detection methods that use artificial vision systems and video cameras as intelligent sensors. In practice, the use of artificial intelligence allows the detection of flames in places where the use of typical sensors is impossible or severely limited. Such a system can work together with environmentally friendly acoustic flame extinguishing technology as a standalone platform, or it may cooperate with other modules, which is a new approach in the field of fire protection. The analysis shows that the presented eco-friendly methods offer many advantages over alternative approaches. In the future, the acoustic method can be used for the monitoring and early detection of fires in factory buildings or objects of high cultural, religious, and historical value, while an acoustic extinguisher equipped with artificial vision systems can be successfully used to extinguish fires.

1. Introduction

Fires are among the dangerous natural and man-made phenomena that humankind has faced throughout its existence. Large-scale forest fires and the disruption and destruction of urban infrastructure caused by flames lead to economic losses and, often, loss of life [1]. Fires can be primary or secondary in nature, acting as the main source of danger or resulting from another emergency situation, but in any case their outcome is a negative impact on people and the environment [2,3,4]. The known methods of extinguishing fires involve extinguishing agents such as water, gases, and foams of various compositions. It should be noted that the fire class (A, B, C, D, K, or F) [5,6] determines the extinguishing means used to eliminate a fire. In recent years, the orientation of developed countries towards sustainable development goals has driven the search for environmentally safe fire extinguishing agents, such as fluorine-free firefighting foams, silicon-based foaming agents, gels, and environmentally safe water additives [7,8,9,10,11,12]. The search for environmentally safe materials (fire retardants) is underway [13,14] to ensure the safety of people. Robotic firefighting systems are being developed [15], and evacuation methods are being optimized [16]. One important way of minimizing the negative impact of fires is to prevent their development, with subsequent elimination (if necessary) in the early stages. In this direction, the use of acoustic effects in firefighting can be noted. Acoustic flame extinguishment is achieved by placing the sound source close to the flame, by exciting standing waves in the room containing the flame, or by focusing high-amplitude sound at a distance [17]. The possibility of low-frequency acoustic extinguishing of fires of various organic substances, such as ethanol, has been claimed [18,19].
This method may also be applicable to fires involving gasoline, kerosene, thinner, and liquefied petroleum gas [20], as well as wood, plasterboard, paper, motor oil, and paraffin [21]; it may further support orientation in a smoky room [21] and the identification of the type of fire using ultrasound [22]. The influence of a low-frequency sound stream (200–300 Hz) on flame morphology has been shown [23,24,25], and a criterion for the attenuation of a jet diffusion flame has been proposed [26]. Authors have discussed the need for further research into the features of low-frequency acoustic fire extinguishers to ensure safety and extinguishing efficiency [27]. The advantages of using this method in the early stages of a fire [20,27], for dripping molten material (polyethylene) [28], and for small and medium fires [29], as well as its effectiveness for extinguishing fires in space [30], have been observed. The prospect of using the acoustic method to suppress fires in natural ecosystems has also been demonstrated, in particular for extinguishing flying sparks and firebrands [31]. In [32,33], machine learning approaches are used to predict whether acoustic fire extinguishing is possible for given parameters. It has been proposed to combine the early detection of fires by deep neural networks (on an artificial intelligence platform) with acoustic fire extinguishing in a single fire protection system [34], including the use of mobile robots [35,36,37].
This research focuses on the theoretical features of sound wave propagation and on the possibility of using modern imaging techniques to detect flames. This is particularly important in open spaces: traditional sensors have a limited operating range and are therefore not usually used to detect flames outdoors, and they are further constrained by environmental conditions. This is where artificial intelligence can help. When classical sensors are used over wide geographical areas, their efficiency drops sharply. Video-based fire detection is therefore the most suitable option for detecting fires in open spaces outside buildings, e.g., forests, where a fire can be spotted up to 10 km away. It should be emphasized that in this type of application (forests), it is not yet possible to combine computer vision-based fire detection with acoustic extinguishing technology, which is at an early stage of development (the range of acoustic technology is much shorter; at present, it is a few meters) [36,37,38,39,40,41,42,43,44]. However, an undoubted advantage of systems based on computer image processing is that they can be applied not only outdoors over large geographical areas, but also indoors. In contrast to classical sensors, changes in the environment and ambient conditions generally do not affect their operation. In addition, computer vision-based fire detection is economically justified (low-cost sensors). For these reasons, these techniques are increasingly being used.
The analyses carried out in this article include a review of the state of the art in flame detection techniques. For this purpose, the mathematical laws of acoustic wave propagation are considered, an overview of fire detection techniques and the properties they exploit is provided, and modern fire detection techniques using artificial vision systems and video cameras, including fire detection categories, are compared. On this basis, drawing on mathematical laws and the literature review method, the authors present a mathematical model of acoustic temperature control in an isolated room together with the state of the art on this topic, combining a concise review of computer vision flame detection techniques with flame extinguishing after detection. Consequently, the aim of this paper is to theoretically study the laws of acoustic wave propagation in a room and the factors influencing it, as well as to propose an extinguishing system with camera-based fire detection and an acoustic extinguishing module. Practical research in this field is carried out under European cooperation. As indicated above, the possibility of equipping the acoustic extinguisher with an intelligent module is a scientific novelty (such systems can work together). The authors emphasize the development of algorithms for the control of autonomous acoustic fire extinguishers and present the possibilities of flame detection (considering currently known techniques and algorithms) and flame extinguishing using novel environmentally friendly methods (including the acoustic technique) [45,46,47,48]. Acoustic waves also have many other applications [49,50,51].
The main contributions of the authors are the mathematical model of acoustic temperature control in an isolated room, the detection of fires and outbreaks by means of pulsed acoustic probing, and the combination of a high-power acoustic fire extinguisher with modern deep neural network approaches. In this regard, the article also reviews studies by other researchers; the solutions are evaluated and compared with other applied methods in terms of their benefits. The information presented is intended to support the development of an eco-friendly acoustic firefighting concept that is safe for humans. The authors' longer-term goal is to develop a modern autonomous acoustic fire extinguisher that uses artificial intelligence and can take decisions autonomously.

2. Mathematical Laws of Acoustic Wave Propagation

2.1. The Pattern of Decrease in the Intensity of a Propagating Acoustic Wave

In direct measurements, the value of the speed of sound (in continuous media such as liquids and gases [52,53]) is obtained directly during the measurement; in indirect methods, it is calculated using known formulas into which the values of quantities obtained by direct measurement are substituted. These quantities are functionally related to the calculated speed of sound; for the atmosphere, they are air temperature, humidity, and atmospheric pressure. An obvious advantage of direct methods is that the measurement result not only immediately gives the speed value, but also allows for a qualitative assessment of the change in primary parameters, such as air temperature. Regardless of which direct method of measuring the speed of sound is used, its essence lies in emitting an acoustic wave and then receiving its reflection from a reflector installed at the opposite end of a fixed measuring base. The energy characteristic of direct measurements is determined by the laws of acoustic wave propagation using the parameters of the sound emitter and receiver, as well as the reflector [54]. The optimal detection of a reflected signal is based on the excess of the level (intensity) of the received useful signal $I_{sig}$ over the level of the interference $I_{int}$ arriving at the input of the receiving device [55,56]; that is, the following condition must be met:
$$I_{sig} \ge \delta I_{int} = \delta \frac{P_{rd}}{K_{da}}, \tag{1}$$
where δ is the recognition coefficient (a dimensionless value that depends on the signal processing methods in the receiving acoustic device).
The right side of the equation is determined by the sensitivity of the receiving device, $P_{rd}$. This value is reduced by the directional reception of the acoustic receiver by an amount equal to its coefficient of directional action, $K_{da}$. The left side of Equation (1) is determined by the power $P_{em}$ of the acoustic signal emitted into space. This can be the power of the acoustic pulse (in pulse scanning) or the average radiation power in continuous acoustic radiation. The sound wave propagates according to the spherical law: its front expands, and the intensity of the wave decreases in proportion to the square of the current distance, that is, to $\left(4\pi D\right)^2$. Since the acoustic emitter has a directivity characterized by the coefficient of directional action $K_{da}$, the current value of the intensity of the acoustic wave is increased by an amount equal to this coefficient.
As the acoustic wave propagates through the air space, its intensity decreases not only due to the expansion of the wave front, but also due to volume attenuation, which is characterized by the volume attenuation coefficient β. Its value depends on the frequency of the acoustic oscillations propagating in space and is found empirically. Therefore, the current value of the sound intensity is as follows:
$$\frac{P_{em} K_{da}}{\left(4\pi D\right)^2}\, 10^{-0.1\beta D_{km}}. \tag{2}$$
A reflector irradiated by an acoustic wave reflects only a portion of the energy falling on it, determined by the area of the reflecting surface, equal to $2\pi R_r^2$, where $R_r$ is the reflector radius. This is the portion of acoustic energy that is reflected and propagates in the opposite direction.
The reflected wave propagating in the opposite direction is again attenuated by the expansion of the wave front, in proportion to the square of the current distance, $\left(4\pi D\right)^2$, and its intensity is further reduced by volumetric attenuation, by the factor $10^{-0.1\beta D_{km}}$ (with β expressed in dB/km).
Then, the intensity of the acoustic wave reflected from the reflector will take the following form:
$$\frac{P_{em} K_{da}}{\left(4\pi D\right)^2}\, 10^{-0.1\beta D_{km}} \cdot \frac{2\pi R_r^2\, 10^{-0.1\beta D_{km}}}{\left(4\pi D\right)^2}. \tag{3}$$
After performing the transformations, substituting (3) into (1), we obtain the following expression:
$$\frac{P_{em} K_{da}\, 2\pi R_r^2}{\left(4\pi D\right)^4}\, 10^{-0.2\beta D_{km}} \ge \delta\, \frac{P_{rd}}{K_{da}}. \tag{4}$$
Taking the logarithm of inequality (4) and performing the transformations (i.e., converting it to decibel form), we obtain the following:
$$40\lg 4\pi + 10\lg 2\pi + 40\lg D + 2\beta D_{km} \le -10\lg\delta - 10\lg P_{rd} + 20\lg K_{da} + 10\lg P_{em} + 20\lg R_r. \tag{5}$$
Since the number π is a constant, we denote the sum $\left(40\lg 4\pi + 10\lg 2\pi\right)$ by 2K. After transformations, we finally obtain the following:
$$20\lg D + \beta D_{km} + K \le \frac{1}{2}\left(-10\lg\delta - 10\lg P_{rd} + 20\lg K_{da} + 10\lg P_{em} + 20\lg R_r\right), \tag{6}$$
where K ≈ 10.98 dB.
Expression (6) is commonly called the non-strict inequality of the acoustic detection range. Its left-hand side describes the regularity of the decay of the intensity of the propagating acoustic wave and is designated $\Psi(D, f)$. Its right-hand side contains five terms, expressing in decibel form the four main technical parameters of the acoustic device (the recognition coefficient δ, the sensitivity of the receiving device $P_{rd}$, the directional coefficient $K_{da}$, and the radiation power $P_{em}$), together with the radius of the reflector $R_r$. This part of the inequality is denoted $E_{ad}$, the energy potential of the acoustic device. Expression (6) then takes the following form:
$$\Psi(D, f) \le \frac{1}{2} E_{ad}. \tag{7}$$
Replacing, on the left side of expression (6), the current value of the distance D with the length of the acoustic base L (the acoustic base is the distance between the emitter-receiver and the reflector), and, on the right side, expressing the energy potential of the acoustic device as a functional dependence on its pulse power, we obtain (8):
$$\Psi(L, f) \le \frac{1}{2} E_{ad}\left(P_{imp}\right). \tag{8}$$
Consequently, the use of pulse (discrete) scanning in acoustic devices allows the monitoring of the condition of large internal spaces without increasing the electrical power consumption of the facility.
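To make the detection-range inequality (6) concrete, the sketch below scans numerically for the largest distance D at which the propagation loss still stays within half the energy potential $E_{ad}$. The parameter values in the usage below are purely illustrative assumptions, not characteristics of any cited device; Python is used only as a convenience for the numerical check.

```python
import math

K_CONST = 10.98  # dB, the constant K from inequality (6)

def propagation_loss(D_m, beta_db_per_km):
    """Left-hand side of (6): 20 lg D + beta*D_km + K, in dB."""
    return 20.0 * math.log10(D_m) + beta_db_per_km * (D_m / 1000.0) + K_CONST

def energy_potential(delta, P_rd, K_da, P_em, R_r):
    """Right-hand side terms of (6): E_ad in dB (inputs in linear units)."""
    return (-10.0 * math.log10(delta) - 10.0 * math.log10(P_rd)
            + 20.0 * math.log10(K_da) + 10.0 * math.log10(P_em)
            + 20.0 * math.log10(R_r))

def max_detection_range(delta, P_rd, K_da, P_em, R_r, beta_db_per_km,
                        d_max=10_000.0, step=0.1):
    """Largest D (m) still satisfying (6), found by a linear scan
    (the loss is monotonically increasing in D, so the scan is valid)."""
    half_ead = 0.5 * energy_potential(delta, P_rd, K_da, P_em, R_r)
    d, best = step, None
    while d <= d_max:
        if propagation_loss(d, beta_db_per_km) <= half_ead:
            best = d
        d += step
    return best

# Illustrative (assumed) parameters: delta = 10, P_rd = 1e-9, K_da = 10,
# P_em = 100, R_r = 0.5 m, beta = 5 dB/km -> range of roughly 180 m.
d = max_detection_range(10.0, 1e-9, 10.0, 100.0, 0.5, 5.0)
```

Because the left side grows with D while the right side is fixed by the device, raising the pulse power $P_{imp}$ (and hence $E_{ad}$) directly extends the serviceable room size, which is the point made above.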

2.2. The Influence of Temperature Variability in the Indoor Air Environment, as a Result of Season and Day Changes, on the Change in the Speed of Sound

When studying the acoustic properties of the atmosphere in order to monitor the state of the air environment in a local room, dependence (9) [57] was obtained, which determines the change in the speed of sound as a function of the primary parameters of the air environment:
$$C = \sqrt{\frac{\gamma R T}{M}}, \tag{9}$$
where γ is the adiabatic index (1.4 for a normal atmosphere); R is the gas constant (8.314 J/(mol·K)); M is the molar mass; and T is the absolute temperature in kelvin (T [K] = 273.15 + T [°C]).
Assuming that the primary parameters of the air environment, with the exception of temperature, remain unchanged, expression (9) allows us to obtain a functional dependence of the speed of sound on air temperature:
$$C = \phi\left(T_{^\circ C}\right). \tag{10}$$
Under these same assumptions, the second functional dependence (11) is also valid, allowing us to obtain the temperature value from the measured values of the speed of sound in the atmosphere:
$$T_{^\circ C} = f(C). \tag{11}$$
Since, in the measuring acoustic device, the speed of sound is determined from the time $t_i$ of the passage of the acoustic wave along the measuring base L to the reflector and back, the temperature in the room $T_{^\circ C}$ is determined from the set of n measurements performed (12):
$$T_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right). \tag{12}$$
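Dependencies (9)–(12) can be checked numerically with a short sketch. Standard values for air are assumed (γ = 1.4, R = 8.314 J/(mol·K), M ≈ 0.029 kg/mol), and averaging the per-path temperature estimates is one plausible reading of (12):

```python
import math

GAMMA = 1.4          # adiabatic index for a normal atmosphere
R_GAS = 8.314        # gas constant, J/(mol*K)
M_AIR = 0.028964     # molar mass of dry air, kg/mol (assumed value)

def speed_of_sound(temp_c):
    """Eq. (9): C = sqrt(gamma * R * T / M), with T converted to kelvin."""
    return math.sqrt(GAMMA * R_GAS * (temp_c + 273.15) / M_AIR)

def temp_from_round_trip(L, t):
    """Eqs. (10)-(11) inverted: the wave covers the base L twice (out and
    back), so C = 2L/t; then T = C^2 * M / (gamma * R) - 273.15, in deg C."""
    c = 2.0 * L / t
    return c * c * M_AIR / (GAMMA * R_GAS) - 273.15

def mean_room_temp(L, times):
    """Eq. (12) sketch: average the estimates over the n reflector paths."""
    return sum(temp_from_round_trip(L, t) for t in times) / len(times)
```

For a 10 m base at 20 °C, the round-trip time is roughly 58 ms, so the reaction time of such a measurement is on the order of tenths of a second or less.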

2.3. The Mathematical Model of Acoustic Temperature Control in an Isolated Room, and the Detection of Fires and Outbreaks by Means of Pulse Acoustic Probing

It is expected that the mathematical model of acoustic temperature control in an isolated room, and of the detection of fires and outbreaks inside it by means of pulsed acoustic probing, should solve a number of tasks: firstly, pulse probing of the entire volume of the room; secondly, measuring the air temperature throughout the space of the room; thirdly, detecting fires and conflagrations inside the room; and fourthly, taking the dimensions (volume) of the room into account when detecting fires.
The first task is solved by using dependence (8), which is the functional dependence of the length of the acoustic base L on the pulse power of the acoustic device. This dependence allows, depending on the size of the rooms, the calculation and determination of the main technical parameters of the designed acoustic devices. Conversely, having a set of acoustic devices or technical characteristics of electronic elements, it is possible to calculate the size of rooms that can be served by these devices.
The solution to the second task is provided by expression (12), which, based on the time $t_i$ of the passage of the acoustic wave along the measuring base L to the reflector and back, allows the determination of the average air temperature in the room from the totality of the measurements taken. The value n is determined by the number of reflectors located along the perimeter of the room.
In the considered layout, the first and third acoustic paths run along the walls of the room, the fifth path runs diagonally across it, and the second and fourth paths lie between them, ensuring complete coverage of the entire room. Any local increase in temperature increases the speed of sound and thus shortens the signal travel time along the corresponding acoustic path; this change is registered through expression (12).
To solve the third task, it is necessary to establish threshold values for the air temperature in the room, $T_{thr}$ (for example, 50 °C or 60 °C), and for the temperature increase per second (time gradient), $\Delta T_{thr}$ (for example, 0.25 °C/s); exceeding these indicates the beginning of a combustion process. The formalized mathematical record of the solution to this problem is system (13):
$$\begin{cases} \Delta T_{j+1} = T_{j+1} - T_{j} \\ \Delta T_{j+2} = T_{j+2} - T_{j+1} \\ \quad\vdots \\ \Delta T_{j+l} = T_{j+l} - T_{j+l-1} \\ T_{j+l} > T_{thr} \\ \Delta T_{j+l} > \Delta T_{thr} \end{cases} \tag{13}$$
where l is the number of successively emitted probing acoustic pulses for which the received reflected signals show that both the temperature threshold and the threshold rate of temperature growth have been exceeded.
The number l can be chosen in accordance with the likelihood criteria, for example, three excesses out of three, or four out of four, or five out of five, and others.
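One plausible reading of system (13) and the likelihood criterion above (e.g., "three excesses out of three") is sketched below. The per-second sampling and the exact form of the check are assumptions for illustration, not the authors' implementation:

```python
def fire_detected(temps, t_thr=50.0, dt_thr=0.25, l=3):
    """System (13) sketch: flag a fire when the last l readings all exceed
    the temperature threshold t_thr AND every one-step gradient in that
    window exceeds dt_thr (readings assumed to be one second apart)."""
    if len(temps) < l + 1:
        return False                     # l gradients need l+1 samples
    window = temps[-(l + 1):]
    grads = [b - a for a, b in zip(window, window[1:])]
    return all(t > t_thr for t in window[1:]) and all(g > dt_thr for g in grads)
```

Requiring all l excesses (rather than a single reading above threshold) suppresses one-off measurement outliers, which is the purpose of the likelihood criterion.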
Combining dependencies (8), (12), and system (13) into one system, we obtain the mathematical model (14):
$$\begin{cases} \Psi(L, f) \le \frac{1}{2} E_{ad}\left(P_{imp}\right) \\ T_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right) \\ \Delta T_{j+1} = T_{j+1} - T_{j} \\ \Delta T_{j+2} = T_{j+2} - T_{j+1} \\ \quad\vdots \\ \Delta T_{j+l} = T_{j+l} - T_{j+l-1} \\ T_{j+l} > T_{thr} \\ \Delta T_{j+l} > \Delta T_{thr} \end{cases} \tag{14}$$
where Ψ is the regularity of the acoustic field decline; L is the length of the acoustic measurement base; f is the scanning frequency of the acoustic device; $E_{ad}$ is the energy potential of the acoustic device; $P_{imp}$ is the pulse power of the acoustic device radiation; $T_{^\circ C}$ is the room temperature; n is the number of reflectors in the room; $C_i$ is the measured value of the speed of sound; $t_i$ is the signal travel time along the acoustic path; $T_{thr}$ is the threshold value of the air temperature in the room; $\Delta T_{thr}$ is the threshold value of the temperature increase per second; l is the number of successive probings; and j is the number of the measurement cycle by which the temperature difference (temperature gradient) is determined.
Solving the fourth task requires taking into account ascending (and descending) convective movements of air in rooms with high ceilings (production facilities, warehouses, and objects of high cultural, historical, or religious value). In particular, a flash or ignition in such large rooms may not be registered in time because of convective heat transfer, which can allow a fire to develop. Accordingly, for high rooms, it is necessary to place sensors at several levels.
Then the problem of monitoring (controlling) the temperature of the air environment and detecting fires inside the premises at a certain level g takes the following form:
$$\begin{cases} T^{g}_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right) \\ \Delta T^{g}_{j+1} = T^{g}_{j+1} - T^{g}_{j} \\ \Delta T^{g}_{j+2} = T^{g}_{j+2} - T^{g}_{j+1} \\ \quad\vdots \\ \Delta T^{g}_{j+l} = T^{g}_{j+l} - T^{g}_{j+l-1} \\ T^{g}_{j+l} > T^{g}_{thr} \\ \Delta T^{g}_{j+l} > \Delta T^{g}_{thr} \end{cases} \tag{15}$$
where $T^{g}_{^\circ C}$ is the room temperature at a certain level g; $T^{g}_{thr}$ is the threshold value of the air temperature in the room at level g; and $\Delta T^{g}_{thr}$ is the threshold value of the temperature increase per second at level g.
Thus, taking into account the volume (height) of the room according to (15) requires transforming the mathematical model (14) into the following expression:
$$\begin{cases} \Psi(L, f) \le \frac{1}{2} E_{ad}\left(P_{imp}\right) \\ T^{1}_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right),\ \ \Delta T^{1}_{j+1} = T^{1}_{j+1} - T^{1}_{j},\ \ldots,\ \Delta T^{1}_{j+l} = T^{1}_{j+l} - T^{1}_{j+l-1},\ \ T^{1}_{j+l} > T^{1}_{thr},\ \ \Delta T^{1}_{j+l} > \Delta T^{1}_{thr} \\ T^{2}_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right),\ \ \Delta T^{2}_{j+1} = T^{2}_{j+1} - T^{2}_{j},\ \ldots,\ \Delta T^{2}_{j+l} = T^{2}_{j+l} - T^{2}_{j+l-1},\ \ T^{2}_{j+l} > T^{2}_{thr},\ \ \Delta T^{2}_{j+l} > \Delta T^{2}_{thr} \\ \quad\vdots \\ T^{k}_{^\circ C} = \varphi\left(\frac{1}{n}\sum_{i=1}^{n} C_i\left(L, t_i\right)\right),\ \ \Delta T^{k}_{j+1} = T^{k}_{j+1} - T^{k}_{j},\ \ldots,\ \Delta T^{k}_{j+l} = T^{k}_{j+l} - T^{k}_{j+l-1},\ \ T^{k}_{j+l} > T^{k}_{thr},\ \ \Delta T^{k}_{j+l} > \Delta T^{k}_{thr} \end{cases} \tag{16}$$
where $T^{1}_{^\circ C}$, $T^{2}_{^\circ C}$, and $T^{k}_{^\circ C}$ are the room temperatures at levels 1, 2, and k; $T^{1}_{thr}$, $T^{2}_{thr}$, and $T^{k}_{thr}$ are the threshold values of the air temperature in the room at levels 1, 2, and k; $\Delta T^{1}_{thr}$, $\Delta T^{2}_{thr}$, and $\Delta T^{k}_{thr}$ are the threshold values of the temperature increase per second at levels 1, 2, and k; and the number of levels in the room varies from 1 to k.
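Model (16) amounts to running the threshold-and-gradient test of system (13) independently at each level g. The sketch below assumes a simple data layout (one list of successive readings per level and per-level thresholds); this is an illustration, not the authors' implementation:

```python
def level_alarms(temps_by_level, t_thr, dt_thr, l=3):
    """Per-level check from (16): temps_by_level maps a level index g to its
    successive temperature readings; t_thr[g] and dt_thr[g] are the per-level
    thresholds. Returns the set of levels currently in the alarm state."""
    alarms = set()
    for g, temps in temps_by_level.items():
        if len(temps) < l + 1:
            continue                     # not enough history at this level yet
        window = temps[-(l + 1):]
        grads = [b - a for a, b in zip(window, window[1:])]
        if all(t > t_thr[g] for t in window[1:]) and all(d > dt_thr[g] for d in grads):
            alarms.add(g)
    return alarms
```

Keeping the levels independent matches the physical motivation above: convection can raise the temperature at an upper level long before a lower-level path registers any change.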
The lower the temperature, the lower the speed of sound and the longer it takes for the signal to travel from the emitter to the reflector and back; the higher the temperature, the higher the speed of sound and the shorter this travel time. An indication of fire is a temperature gradient in which the temperature rises from about 20 to 50 degrees within minutes rather than hours.
The functioning of the model for level g is illustrated in Figure 1. When the temperature in the room corresponds to normal operating conditions ($T^{g}_{normal}$), the passage time of the probing pulses (for example, five probing pulses, as shown in Figure 1) is greatest. As the air temperature $T^{g}$ in the room increases, the passage time of the probing pulses along the acoustic paths decreases as a result of the temperature-induced increase in the speed of sound. When the threshold temperature $T^{g}_{thr}$ is exceeded, the speed of sound increases further, so the transit time becomes even shorter and the interval between the reflected signals decreases.
The solutions of this mathematical model allow a wide range of problems to be addressed, both in fire extinguishing and in safety in general: monitoring room temperature, detecting fires, monitoring humidity, detecting the movement of objects into (or out of) a room, monitoring the tightness of rooms and access to them, etc.
The presented model can be implemented in various configurations; however, as with most new technologies, the cost of the equipment is a limitation. According to previous studies of the applicability of acoustic technology in fire extinguishing, in which the error of sound speed meters was in the range of 0.005–0.01 m/s [54] and the total error did not exceed 5% [21], high accuracy and reliability can also be expected within the framework of this model. The speed of determination, considered as the reaction (response) time of the proposed system, is on the order of tenths of a second: the response time is the ratio of the acoustic base L to the speed of sound propagation in the room, taking into account the direct dependence of the speed of sound on temperature.
Further improvement of acoustic monitoring systems and of the early detection of fires can be achieved by combining them.

3. The Use of Vision Robots to Serve Humans

3.1. Using State-of-the-Art Techniques for Flame Detection

As the work of many researchers shows, artificial intelligence can be successfully applied to flame detection in the service of humans. Modern artificial vision systems may serve as either a primary or a secondary means of fire detection.
Information on the use of stereovision to build a novel instrumentation system that extracts geometric characteristics of the fire front, on computer vision-based methods [58,59,60], and on flame detection algorithms, classifications, and edge detection techniques [61,62,63] can be found in many research papers [64,65,66,67,68,69,70,71,72,73,74,75,76]. A designed system may detect the presence of humans in addition to fire, with the goal of increasing the effectiveness of rescue operations aimed primarily at rescuing people. One example is the Human-like Visual Attention-based Artificial Vision (HVAAV) system, which uses machine learning to achieve artificial human-like visual attention [77]. By hybridizing models, it becomes possible to tune the parameters of the system to human visual perception, which can help emergency services, as well as flame-extinguishing mobile robots, take appropriate action.
In practice, many fire pixel detection techniques and color models are used. The algorithms often apply color rules in different color spaces (RGB, YCbCr, HSI, HSV, YUV, L*a*b*) or combinations of these. The efficiency of the rules varies with the type of fire pixel, so the selection of appropriate rules is a problematic issue.
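As a minimal illustration of such color rules, a classic RGB rule of the form R > G > B with a minimum red level can be expressed as follows (the specific thresholds are assumptions for demonstration, not values taken from the cited studies):

```python
import numpy as np

def fire_pixel_mask(img_rgb, r_min=180, rb_gap=40):
    """Rule-of-thumb RGB fire-pixel test (illustrative thresholds):
    R > r_min, R > G > B, and an R-B gap large enough to reject
    grey/white regions that merely have a warm tint."""
    r = img_rgb[..., 0].astype(np.int32)
    g = img_rgb[..., 1].astype(np.int32)
    b = img_rgb[..., 2].astype(np.int32)
    return (r > r_min) & (r > g) & (g > b) & ((r - b) > rb_gap)

def fire_pixel_ratio(mask):
    """Share of candidate fire pixels in the frame (fire pixels / all pixels)."""
    return float(mask.mean())
```

In practice, such a mask only yields candidate pixels; they are filtered further by shape, texture, and dynamics features, as discussed in Section 3.2.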
A detailed treatment of this topic, including saliency detection (saliency maps), segmentation, and the extraction of salient objects, is presented in [77], covering a description of the model, visual attention, human-like visual attention through the human 'Eye Fixation' paradigm, a Genetic Algorithm (GA)-like tuning process, data filtering, and, finally, the interpretation of the visual attention map. Fire pixels can be classified based on fire color and the presence of smoke, while non-fire pixels can be classified based on average image intensity. Such robots can be used in crisis management, as outlined in this paper.
It thus becomes possible to design a system based on artificial vision using mobile robots. These can be wheeled robots, as well as robots for special tasks that allow flames to be extinguished. Research [77] presents an example and evaluation of such a system, implemented with a mobile robot from NEXTER Robotics (Wifibot-M). It can be controlled either wirelessly (Wi-Fi) or by wire (Ethernet). The software driver is dedicated to the Windows operating system and allows the robot to be controlled with a joystick or through a virtual simulation. Camera images can be captured and collected using the AXIS Encoder web interface or, for example, the dedicated AXIS Camera Client application.
In practice, using artificial intelligence to detect flames, networks learn from static images or video streams [78,79,80,81,82,83]. This requires a computer module, a camera, and a powerful image processing unit. Many processing systems may be applied for network implementation (e.g., CPU, GPU, VPU). The controller can generate a variety of video signals. Such an intelligent sensor does not need to be expensive. Hardware could be based on a Raspberry Pi board, which allows for the support of multiple operating systems. In software, many libraries, such as OpenCV 4.11, Matplotlib 3.10, TensorFlow 2.0, Imutils 0.5, and NumPy 2.2, may be applied. The system can be standalone, or it may work on other hardware platforms. In addition, it is possible to integrate it with an acoustic extinguisher using relay modules (24 V control signals), which can enable (if flames are identified) a discrete activation of the extinguisher, without the need for additional sensors. For fire sensors and fire detection systems that utilize cameras and machine vision, various machine learning algorithms for fire detection and different types of interface can be applied for data transmission. When sensors have built-in data processing and fire detection capabilities, widely used communication interfaces, such as RS485, Modbus, or building automation protocols such as BACnet or KNX, can be employed. In cases where a larger amount of data needs to be exchanged, interfaces such as Ethernet or Wi-Fi can be applied. If sensors or sensor systems need to operate wirelessly, protocols such as LoRa or NB-IoT can be utilized. A case study of the use of neural networks for flame detection, supported by examples, is provided in the next part of the paper.
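The relay-based coupling described above can be sketched as a small piece of debounce logic (a hypothetical design, not the authors' implementation): the 24 V relay line is asserted only after several consecutive frames with detected flames, and released only after a run of clear frames, so that single-frame false positives do not trigger the extinguisher:

```python
class ExtinguisherRelay:
    """Debounce between per-frame flame detection and the relay output
    (frame counts n_confirm and n_clear are illustrative assumptions)."""

    def __init__(self, n_confirm=5, n_clear=10):
        self.n_confirm = n_confirm  # consecutive fire frames to switch on
        self.n_clear = n_clear      # consecutive clear frames to switch off
        self.fire_run = 0
        self.clear_run = 0
        self.active = False

    def update(self, flame_in_frame):
        """Feed one frame's detection result; return the new relay state."""
        if flame_in_frame:
            self.fire_run += 1
            self.clear_run = 0
            if self.fire_run >= self.n_confirm:
                self.active = True
        else:
            self.clear_run += 1
            self.fire_run = 0
            if self.clear_run >= self.n_clear:
                self.active = False
        return self.active
```

The asymmetric thresholds are a deliberate choice: switching on should be quick but confirmed, while switching off should be conservative so that a briefly occluded flame does not deactivate the extinguisher prematurely.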

3.2. Fire Detection Techniques Using Artificial Vision Systems and Video Cameras

As stated previously, an acoustic fire extinguisher may be equipped with a smart sensor. Research on this topic is ongoing and constitutes a scientific novelty; in this context, existing research and achievements in fire detection are essential.
In practice, intensive research on the feasibility of using computer vision and cameras to detect fires, especially in uninhabited and wildland areas, has been conducted since 2000 (e.g., [60,78]). An overview of the performance of pixel detection algorithms for these cases can be found in [60,84,85]. The results differ depending on whether the environmental conditions are structured or unstructured. For example, yellow, orange, and red typically dominate in the flame regions of outdoor fires, while green, yellow, and brown appear in the surroundings of outdoor experiments (unstructured scenes). Multiple color spaces and their combinations can be applied for detection, which is the subject of the next section [73,74,75,76,84,85,86,87,88,89,90,91].
The resulting set may be used to compute functions and constants for the rules, as well as for pixel training [71]. After training, each pixel can be assigned a probability indicating the extent to which it belongs to the flame. Given an appropriately chosen threshold, this yields candidate pixels for further processing [92]. An important feature is the percentage of fire pixels in a particular image, obtained by dividing the number of fire pixels by the total number of image pixels. The fire pixels detected in this way are further processed based on other features, notably geometric information such as the degree of spread, height, and slope of the flames [71,93,94,95]. Indeed, not only color properties (flame pixels) [66,78,86], but also shape irregularity [96], texture [97], and even dynamics [68,98] are useful for flame detection. To make flame detection more efficient, some features can be combined (e.g., [92,99]), although the pixel detection criterion and methodology are usually tested with isolated parameters [59]. In addition, hybrid flame detection frameworks and probabilistic methods are used to improve detection performance [92,100,101,102]. From the point of view of firefighting action, the rapid extinguishment of flames is crucial [103,104]. The infrared band can also be applied for data analysis [38,105,106]. Table 1 lists the references cited and the properties they consider for fire detection techniques [46].
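Both the fire-pixel percentage and simple geometric descriptors can be read directly off a binary fire mask. The sketch below is a simplified illustration of such features (bounding-box height and width stand in for the spread/height/slope measures discussed above):

```python
import numpy as np

def flame_geometry(mask: np.ndarray) -> dict:
    """Compute simple descriptors of a binary flame mask (H x W, bool)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {"fire_fraction": 0.0, "height": 0, "width": 0}
    return {
        # percentage feature: fire pixels divided by all image pixels
        "fire_fraction": ys.size / mask.size,
        # bounding-box extent of the candidate flame region, in pixels
        "height": int(ys.max() - ys.min() + 1),
        "width": int(xs.max() - xs.min() + 1),
    }
```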
In general, no imaging technique is fully universal: none is adapted to segmenting all fire scenarios, so performance varies between scenarios. A related problem is the lack of a fire imaging data set that is widely accepted as a benchmark for testing different methods; only with such a benchmark could the results obtained with different methods be compared reliably. Such images may be applied to analyze fusion algorithms, the performance of fire segmentation processes, or the use of motion, as shown in [58,59,77]. To estimate the performance of the algorithms, benchmarking can be used. Since color segmentation is generally the first step in the fire segmentation process, the selection of an appropriate algorithm is crucial [60].

3.3. Fire Detection Categories

In the last decade, computer vision has started to be used for effective fire detection [39,40,107,108], early fire extinguishment [109], fire measurement, and prediction of fire behavior [110,111]. The most important step in computer vision analysis is the detection of fire pixels, as this determines the accuracy of further processing.
In practice, there are several categories of fire detection methods. Some use color rules, while others are based on machine learning. In the latter case, learning is performed on a data set containing fire and non-fire pixels. Because it is essential that the database include heterogeneous color images, the images are selected to capture as many different flame contexts as possible. The performance of each method varies, as it depends on many factors, such as the color of the fire, the type of fuel, the presence of smoke, and the environmental conditions [60].
Color-based salience detection can be implemented with binary models and exponential functions [61]. In addition to color and motion detection, edge detection, built on mathematical models and the resulting algorithms, is also relevant in this field. In general, a specific feature can be processed in a separate scheme, an example of which is given in [62]; the results presented there show that it is difficult to select a detector that is equally effective in many cases. Mathematical methods known from the security sciences can be used for forecasting [112,113,114,115]. In practice, the features used, together with machine learning classifiers and artificial intelligence, allow for efficient fire detection (an ongoing scientific effort is to reduce the false detection rate to a minimum).
A scientific innovation of recent years has been the combination of image processing techniques with dedicated expert systems that can support human recognition abilities, but none of these approaches has been fully implemented as an embedded solution [61,63,64,65]. Therefore, the authors decided to try to fill this gap. From a practical point of view, fire segmentation algorithms apply color criteria in different color spaces [116]. To estimate the performance of different segmentation methods, standard metrics can be used to compare segmented images with manually segmented ones [60]. Many performance metrics are helpful for this purpose, such as the Matthews Correlation Coefficient [69], the F-score [70,71], and the Hafiane Quality Index [72]. Machine learning involving logistic regression generally achieves the best performance (the highest robustness against smoke and color changes is reported); accordingly, the results achieved by the logistic regression method are the best for most categories [71]. This includes distinguishing fire pixels that contain smoke from fire pixels without smoke (a support vector machine is useful for this classification) [60]. A popular approach is fire detection using Bayes' theorem, which allows for multiple representations of conditional probabilities; this method may be applied without a prior learning step. In practice, a measurement vector can be formed, for example, from texture (the entropy of the fire pixels indicates whether the fire is textured), combinations of different color spaces, and a multidimensional Gaussian distribution (the ratio of the prior probabilities of the fire and non-fire classes can be adjusted to the given conditions) [60].
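Two of the metrics named above, the F-score and the Matthews Correlation Coefficient, can be computed directly from a predicted mask and a manually segmented reference mask; a minimal sketch (the Hafiane Quality Index is omitted here):

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    """Compare a predicted binary mask with a manually segmented one.
    Returns (F-score, Matthews Correlation Coefficient)."""
    tp = np.sum(pred & truth)        # fire pixels found correctly
    tn = np.sum(~pred & ~truth)      # background found correctly
    fp = np.sum(pred & ~truth)       # false alarms
    fn = np.sum(~pred & truth)       # missed fire pixels
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, mcc
```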

3.4. Pixel Detection Methods

In practice, two basic types of models can be distinguished: empirical models with experimental thresholds [66,68,78], and statistical models [67,68] that use real data for training (see Table 2).
For flame detection, the number of images is important. The information they contain is crucial for the choice of color space and of the rules for detecting fire pixels; in practice, the more images available, the better the color space and fire pixel detection rules that can be chosen [71]. There are many algorithms for fire detection. For the visible spectrum, for example, the algorithms applied often use rules in different color spaces (RGB, YCbCr, HSI, HSV, YUV, L*a*b*) or combinations of these. Application examples of color spaces used in rules for the visible spectrum are listed in Table 3. From a practical point of view, each pixel detection method is a logical combination of rules that contain basic mathematical operations. The paper [71] discusses in detail expert systems, majority voting, and machine learning, specifically logistic regression using rules as features to combine them. It is difficult to compare the performance of the color segmentation techniques used, as it depends on the category of the image: factors such as lighting, dominant color, or the presence of smoke translate into the results achieved. In general, lighting conditions can affect the natural colors of fires, so the choice of algorithm in an operational scenario depends on many factors. Since RGB models are sensitive to luminance changes [68,73,74,75,76,84], it is common to apply chromatic models in YCbCr [66,85], HSV [86,87], or other [88,89,90] spaces. This means that fire pixels are marked with one of the available colors in a certain color space; the dominant color is the one that covers most of the fire pixels. The work [74] describes a smoke-resistant method based on supervised learning in the RGB color space. The segmentation presented in [84] gives good results for daytime fires (without smoke). When the YCbCr color space was used, good results were obtained for orange pixels [66,85].
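A chromatic rule of this kind can be sketched as follows. The conversion uses the ITU-R BT.601 full-range coefficients, and the rule set (Y > Cb, Cr > Cb, plus adaptive mean thresholds) is only in the spirit of the YCbCr methods cited above, not an exact reproduction of them:

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray):
    """ITU-R BT.601 full-range conversion; rgb: H x W x 3 uint8."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_fire_mask(rgb: np.ndarray) -> np.ndarray:
    """Illustrative chromatic rule: flame pixels are bright (Y above its
    image mean) and reddish (Cr above Cb and above its image mean)."""
    y, cb, cr = rgb_to_ycbcr(rgb)
    return (y > cb) & (cr > cb) & (y > y.mean()) & (cr > cr.mean())
```

Because the thresholds are taken from the image means, the rule adapts somewhat to global luminance, which is the motivation for leaving RGB given its sensitivity to lighting changes.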
Similar results were achieved when using the L*a*b* space [89]. In practice, it is possible to use combinations of different color spaces, which allows increased effectiveness of the techniques applied, depending on the color of the pixels (examples for orange and yellow pixels are given in works [78,107], while for fire pixels without smoke, examples are given in [91]). In general, tests performed using real images include flame detection (the most common case). Smoke detection or wildfire detection is also possible [71,105,106,107,117]. The method described in [116] works well when the intensity of the environment changes. In turn, in [71], a technique applying logistic regression was used, which turned out to be robust against changes in smoke and color. There, a combination (majority voting and logistic regression) was described. In practice, Bayesian segmentation is robust against fire color changes [60]. It is worth highlighting that combinations of different color spaces may be applied [91], which is a commonly used method.
As previously mentioned, the crucial first operation is the detection of fire pixels, as it affects the accuracy of further data processing [59]. Fire emits radiation in a wide spectral band (its color and luminance may vary with background and brightness). The environment affects the image, as do physical conditions; a prime example is smoke, which can mask the fire region. In practice, the percentage of fire pixels overlaid with smoke is determined by classifying pixels (into those with and without smoke) using a support vector machine [59,71]. The rules are then analyzed by means of pixel categories, created on the basis of a specific classification covering both fire and non-fire pixels. To build a learning set, fire pixels are classified into several categories depending on color and the presence of smoke [71], while non-fire pixels are classified into several brightness levels. These categories are subjected to additional classification, an example of which is given in Figure 2: the basic categories depend on color (red, orange, yellow-white) and the presence or absence of smoke. Non-fire pixels with high brightness represent the color of the sky and smoke, whereas green most often appears in pixels with low and medium brightness, suggesting that their source is the vegetation seen in the images [59].
These categories reflect the pixel colors used for fire detection. Their mapping to a specific color palette or to particular RGB values is performed within the corresponding vision system, along with the necessary brightness and contrast adjustments of the images obtained from the vision sensor. Within each category, individual pixels are represented by a feature vector built from color features (drawn from different color spaces). The average feature vector is calculated for each category, pixels are sorted according to the distance of their vectors from this average, and they are then sampled uniformly, yielding the desired number of pixels per category. For ease of processing, the learning pixels can be arranged into an image whose individual parts correspond to the pixels. The results depend on the pixel category; from this, it can be deduced that, depending on color, the presence of smoke, or intensity, different rules in different color spaces give different results. For example, the rules proposed by T. Toulouse, L. Rossi, T. Çelik, and M. Akhloufi are more effective for fire pixels without smoke (especially orange pixels) [71]. Their paper provides a comparative analysis of state-of-the-art fire color detection techniques in the context of geometric measurements; the methods described are effective in many categories, and a comparison of the rules and their results can be found there [71].
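The per-category sampling procedure described above (mean vector, distance sort, uniform sampling) can be sketched in a few lines; the function name and argument shapes are illustrative, not taken from [71]:

```python
import numpy as np

def sample_category(features: np.ndarray, n: int) -> np.ndarray:
    """features: (num_pixels, d) color-feature vectors of one pixel category.
    Sort pixels by distance to the category mean, then sample uniformly
    along that ordering to keep n representative pixels."""
    mean = features.mean(axis=0)
    order = np.argsort(np.linalg.norm(features - mean, axis=1))
    idx = np.linspace(0, len(order) - 1, n).round().astype(int)
    return features[order[idx]]
```

Sampling along the distance ordering keeps both typical pixels (near the mean) and outliers (far from it) in the learning set, rather than only the most common colors.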
The paper [92] describes an autonomous method for flame detection in video recordings. For this purpose, a hybrid detection system based on the properties of color, dynamic range, texture, and flame flickering is proposed, realized by analyzing pixel variations in the wavelet domain; the accuracy obtained exceeds 95%, which is a good result [92]. The color distribution is modeled by a Gaussian Mixture Model whose number of components is estimated by a Dirichlet process, thus avoiding the deviations that result from an experimentally chosen, improper number of Gaussians. An incorrect number of components translates into a poor estimate of the means and covariance matrix, which in turn degrades the estimate of the flame color distribution; a hybrid flame detection framework based on the Dirichlet Process–Gaussian Mixture therefore improves detection performance [92,100,101]. The flame color model based on this approach is processed together with saturation analysis and the wavelet transform. The method proposed in [92] can detect most fire pixels, including those behind smoke. Although the model omits some pixels from the inner part of the flame, the successfully detected contour fire pixels better reflect the dynamic properties of the flame [92].
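Once the mixture parameters have been estimated (in [92], via a Dirichlet process; in practice, e.g., with a variational Dirichlet-process mixture fit), classifying a pixel color reduces to evaluating the mixture density. Only that evaluation step is sketched below; the fitted weights, means, and covariances are assumed given:

```python
import numpy as np

def gmm_density(x, weights, means, covs):
    """Evaluate a Gaussian mixture p(x) = sum_k w_k * N(x; mu_k, S_k)
    at a pixel-color vector x (shape (d,)). A pixel is flame-like when
    p(x) under the flame model exceeds a chosen threshold."""
    d = x.shape[0]
    p = 0.0
    for w, mu, s in zip(weights, means, covs):
        diff = x - mu
        inv = np.linalg.inv(s)
        norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(s))
        p += w * np.exp(-0.5 * diff @ inv @ diff) / norm
    return p
```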
On the basis of this analysis, it can be concluded that no technique is universal (none is equally effective for segmenting all fire scenarios). Combining multiple rules across different color spaces yields better results than individual rules. The main difficulty is therefore selecting the appropriate rules to combine and assigning them suitable weights, since rules are created for particular types of fire pixels and are not equally efficient for all of them. Some methods combine rules with equal weights, while others use machine learning with all rules as inputs. The effectiveness of combining individual rules and methods depends on the category (as stated earlier, no single rule performs equally well in all categories), and the image category (e.g., color, smoke) affects the implementation of color segmentation, so different results are obtained for different data set images. Since the images in the data set are characterized in terms of fire, color, brightness, and smoke, it becomes possible to determine the usefulness of a particular algorithm. Information on additive networks can be found in [118]. Some approaches require significant computational power and do not operate in real time, which is critical when rapid flame suppression is needed [102,103,104]. It is important that the scaling time does not exceed 1 s, and that the distance between the camera and the flame location is small [77]. Information on the classification of flame extinction based on acoustic oscillations using artificial intelligence methods can be found in [33].
In practice, acoustic fire extinguishers may provide an additional means of fire protection, and can be permanently installed in areas exposed to fire or be mobile fire protection devices installed, e.g., in robots capable of visual image processing. It is worth noting that emergency management expenditures are increasing, resulting in increased interest in the use of robots for various purposes [119,120]. In the case of adverse events, crisis management plays a key role [121,122], and communication is then often carried out using a variety of wireless technologies, especially satellite telecommunications [123,124,125], whereby the quality of signals, depending on their type, is determined by a variety of factors [126,127,128,129,130,131]. The spectrum of service integration translates into a change in the way in which fire risks are identified and assessed, as well as controlled [132,133,134].

4. Unresolved Problems

For firefighting operations, modern technologies based on artificial vision systems implemented in robots may support the work of firefighters. In general, the behavior of a given system can be adapted to the way a firefighter looks at a scene. The benefit of such a solution is artificial visual accuracy, which may help firefighters assess intervention conditions or rescue operations [77]. An image with one distinct flame is simpler for vision processing than an image containing several flames; owing to the perception of the human eye, complex or ambiguous images may or may not attract the attention of a human observer. This matters because, during a fire, firefighting and rescue operations are needed precisely where they are most urgent. Such solutions are particularly applicable to sites and objects far from human concentrations, such as forests, taiga, and sparsely populated areas; however, such environments limit the application of some flame suppression technologies.
In practice, as previously mentioned, many types of communication can then be used, including free-space optical interconnects. It is also important to take into account the specific propagation conditions of the frequency band used for communication [135,136]. In the long term, further work can be expected to increase the range of intelligent acoustic extinguishers, which, equipped with an intelligent module, do not require human presence [34,35,36,37,38,45]. An example of system hardware based on an edge computing AI HAT built around a Kendryte processor is provided in [137]. It is a powerful and inexpensive artificial intelligence platform that can be used as an independent board, equipped with a neural network processor and a dual-core 64-bit CPU. In addition, the board has several peripherals, including PWM/SPI/I2C GPIO, an LCD screen, and camera interfaces, and it can be applied to implement deep neural networks for flame detection. A detailed description of vision fire sensors can be found in publications [34,35,36,37,45], amongst others. Such systems detect fire outbreaks and could be connected to acoustic fire extinguishers. Figure 3 shows an example of a solution that can be applied to the design of a fire detection system: the left object is the sensor, and the right object is the electronic board inside it. A neural network that can detect fires runs on the processor (Figure 3b).
Moreover, it is important to study the extinguishing capabilities of acoustic waves emitted from sound sources [39,40,41,42,43,44,45,46,47,48,138,139,140,141,142]. Since this technology has not yet been fully explored, research is being conducted on both the use of modulated and unmodulated waves to extinguish flames, depending on the class of fire [21,38,41,45,46,47,48,140,143,144,145,146]. Apart from the work of the authors and a few others, most of which are included in the bibliography of this article, there is a lack of research conducted in this area, especially using waves emitted by sound sources with high and very high acoustic power. Further work aimed at using firefighting drones in the initial stage of a fire can also be expected [147], which is a promising research direction. A separate group is conceptual work and research to learn about new flame-retardant materials [148,149,150,151,152,153,154], from which, e.g., components of fire extinguishers may be built [13].
Since flames visible in the optical band also emit in the infrared, images may be processed in the infrared band, where the intensity of fire pixels is much higher than that of other pixels [107]. In this case, it is crucial to determine an appropriate threshold at which fire pixels can be distinguished from background pixels. However, false alarms during infrared processing can be caused by cloudiness (clouds may appear in the image) [59]; this is important for wildfire detection but does not apply indoors. Non-fire pixels with high brightness correspond to the color of the sky and smoke. Problematically, processed images can contain fire-like areas corresponding to hot gases [59]. Since it is easier to detect a fire pixel in infrared images, modern algorithms are often developed using image fusion, and performance comparisons between algorithms based on image fusion or pixel motion are performed according to commonly known criteria. More research is needed in this area to improve efficiency.
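One standard way to pick the intensity threshold for an infrared frame is Otsu's method, which maximizes the between-class variance of the intensity histogram. The NumPy-only sketch below is an illustration of this thresholding step, not the specific algorithm of the cited works:

```python
import numpy as np

def otsu_threshold(ir: np.ndarray) -> int:
    """Pick an intensity threshold separating hot (fire) pixels from the
    background in an 8-bit infrared image by maximizing the
    between-class variance of the histogram."""
    hist = np.bincount(ir.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * prob[:t]).sum() / w0        # background mean
        m1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # fire mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# fire_mask = ir > otsu_threshold(ir)
```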
The social impact of fire detection using machine vision sensors and systems is immense. These systems not only enable the automated detection of fires in forests, agricultural areas, and populated locations, but also significantly support firefighters in preventing the occurrence and escalation of large, life-threatening fires. Their use ensures public safety, helps protect citizens' property and lives, and provides an added layer of security. The use of acoustic technologies in fire extinguishing is promising, although the issue of their practical implementation remains open. The costs of building and testing complete equipment are quite high; however, with sufficient interest from businesses and implementation in production, these costs can be expected to decrease. Another issue to be addressed at the national and international level is the regulatory framework for the use of acoustic fire extinguishing technologies, including their combination with other systems (artificial intelligence, artificial vision, etc.). The positive aspects of such technology include the absence of additional costs for consumables (unlike water and chemical extinguishing agents), environmental friendliness, the possibility of combining it with other subsystems, and the preservation of cultural, historical, and religious sites and valuables. Although the cost of some fire detection systems that utilize cutting-edge technologies may be high, their deployment prevents fires whose destruction would far exceed the systems' value.

5. Conclusions

The acoustic method can be considered as an alternative to known methods of preventing and extinguishing fires of liquid, solid, and gaseous substances; however, it requires further research. Undoubtedly, the use of artificial intelligence, by integrating vision systems and cameras, makes it possible to detect flames not only in open areas, but also in closed spaces, which is particularly important when the use of classical sensors is severely limited or impossible.
This paper proposes a mathematical model of acoustic temperature control in a local room and of the detection of fires and outbreaks inside it by means of pulsed acoustic probing. The model is a system of three dependencies. The first makes it possible to calculate the main technical parameters of the designed acoustic devices for rooms of known dimensions and, conversely, to calculate the dimensions of rooms that can be serviced by specific devices. The second allows one to determine the average air temperature in the room from the measurements taken. The third ensures the detection of fires and outbreaks inside the room when either the threshold value of the air temperature or the threshold rate of temperature increase per second is exceeded. The volume (height) of large rooms is taken into account by introducing a number of temperature control levels for the second and third dependencies. It is expected that the implementation of the developed model will allow both the creation of a new generation of devices and instruments for preventing fires inside premises and the modernization of existing fire alarm systems, as well as the solving of a wide range of security problems, including the protection of objects of high cultural, historical, and religious value.
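The core of the second and third dependencies can be illustrated with the well-known linear approximation of the speed of sound in air, c ≈ 331.3 + 0.606·T m/s: a probing pulse's round-trip time over a known path gives the average temperature, and threshold logic turns that into an alarm. The formulas and threshold values below are a sketch under that approximation, not the authors' exact model:

```python
def air_temperature_c(path_length_m: float, round_trip_s: float) -> float:
    """Estimate average air temperature (deg C) along an acoustic path from
    the round-trip time of a probing pulse, using c = 331.3 + 0.606*T."""
    c = 2.0 * path_length_m / round_trip_s   # measured speed of sound, m/s
    return (c - 331.3) / 0.606

def fire_alarm(temp_c: float, prev_temp_c: float, dt_s: float,
               t_max: float = 60.0, rate_max: float = 1.0) -> bool:
    """Alarm when the temperature or its per-second rise exceeds a
    threshold (both limits here are illustrative values)."""
    rate = (temp_c - prev_temp_c) / dt_s
    return temp_c > t_max or rate > rate_max
```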
Notably, visual relevance plays an important role in gaining better awareness of the environment under study. Computer vision makes it possible to detect flames and predict the behavior of a fire, and the effective, fast detection of flames reduces the losses caused by fire. For firefighting actions, modern technologies based on artificial vision systems can be used, implemented in robots or as part of a firefighting system, e.g., one using environmentally friendly acoustic technology developed through European cooperation and specially designed for indoor use. Such systems can work as standalone platforms or cooperate with other modules. In practice, artificial intelligence may be applied in industrial settings, such as production lines, where acoustic flame extinguishing technology can be used in a simple way. Fire detection sensors can be integrated into smart city infrastructures by connecting them to Internet of Things networks for real-time monitoring and data sharing, enabling faster fire response. They can also be connected with the building automation systems of smart buildings to coordinate emergency actions, and fire detection systems can be integrated with emergency response systems to provide precise location data and improve coordination during fire emergencies. Additionally, artificial intelligence algorithms can enhance fire detection and optimize evacuation plans, while public notification systems can alert citizens in real time. Therefore, further research related to eco-friendly acoustic firefighting and flame detection using artificial intelligence is expected, including new work aimed at improving flame detection with visible- and infrared-band cameras and raising the effectiveness of the algorithms applied above 90–95%.
The development and implementation of acoustic fire extinguishing systems using artificial intelligence and computer vision is expected to contribute to the prevention (early detection) of fires, which in turn will improve the environmental, economic, and social development of individual industries, regions, and countries.

Author Contributions

Conceptualization, V.L., G.W.-J., J.L.W.-J. and S.I.; methodology, V.L., G.W.-J., J.L.W.-J. and S.I.; software, S.I.; validation, V.L. and J.L.W.-J.; formal analysis, J.L.W.-J.; investigation, J.L.W.-J., V.L., M.D., G.W.-J., R.S., S.I. and V.S.; resources, V.L., J.L.W.-J., R.S. and S.I.; data curation, V.L., S.I. and V.S.; writing—original draft preparation, J.L.W.-J., V.L., M.D., G.W.-J. and S.I.; final writing—review and editing, J.L.W.-J., V.L. and M.D.; visualization, V.L., J.L.W.-J., R.S. and S.I.; supervision, J.L.W.-J., V.L. and G.W.-J.; project administration, J.L.W.-J. and V.L.; funding acquisition, J.L.W.-J. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Ministry of Science and Higher Education under the “Inkubator Innowacyjności+” program (grant no 3/2017).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their sincere gratitude to the companies “Ekohigiena Aparatura Ryszard Putyra” and “PHT SUPON” for supporting the development of the research project within the scope of the “Inkubator Innowacyjności+” program.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Fire Statistics. Report No 29. Center for Fire Statistics of CTIF 2024. Available online: https://www.ctif.org/sites/default/files/2024-06/CTIF_Report29_ERG.pdf (accessed on 1 November 2024).
  2. Tryhuba, A.; Ratushnyi, A.; Lub, P.; Rudynets, M.; Visyn, O. The model of the formation of values and the information system of their determination in the projects of the creation of territorial emergency and rescue structures. CEUR Workshop Proc. 2023, 3453, 59–70. Available online: https://ceur-ws.org/Vol-3453/ (accessed on 1 November 2024).
  3. Myroshnychenko, A.; Loboichenko, V.; Divizinyuk, M.; Levterov, A.; Rashkevich, N.; Shevchenko, O.; Shevchenko, R. Application of Up-to-Date Technologies for Monitoring the State of Surface Water in Populated Areas Affected by Hostilities. Bull. Georgian Natl. Acad. Sci. 2022, 16, 50–59.
  4. Kochmar, I.M.; Karabyn, V.V.; Kordan, V.M. Ecological and geochemical aspects of thermal effects on argillites of the Lviv-Volyn coal basin spoil tips. Nauk. Visnyk Natsionalnoho Hirnychoho Univ. 2024, 3, 100–107.
  5. NFPA. NFPA 10: Standard for Portable Fire Extinguishers. Available online: https://www.nfpa.org/codes-and-standards/all-codes-and-standards/list-of-codes-and-standards/detail?code=10 (accessed on 1 November 2024).
  6. ISO 3941:2007; Classification of Fires, 2nd ed. International Organization for Standardization: Geneva, Switzerland, 2007.
  7. Jahura, F.T.; Mazumder, N.-U.-S.; Hossain, M.T.; Kasebi, A.; Girase, A.; Ormond, R.B. Exploring the Prospects and Challenges of Fluorine-Free Firefighting Foams (F3) as Alternatives to Aqueous Film-Forming Foams (AFFF): A Review. ACS Omega 2024, 9, 37430–37444.
  8. Sheng, Y.; Zhang, S.; Hu, D.; Ma, L.; Li, Y. Investigation on Thermal Stability of Highly Stable Fluorine-Free Foam Co-Stabilized by Silica Nanoparticles and Multi-Component Surfactants. J. Mol. Liq. 2023, 382, 122039.
  9. Dadashov, I.F.; Loboichenko, V.M.; Strelets, V.M. About the environmental characteristics of fire extinguishing substances used in extinguishing oil and petroleum products. SOCAR Proc. 2020, 1, 79–84.
  10. Gurbanova, M.; Loboichenko, V.; Leonova, N.; Strelets, V.; Shevchenko, R. Comparative Assessment of the Ecological Characteristics of Auxiliary Organic Compounds in the Composition of Foaming Agents Used for Fire Fighting. Bull. Georgian Natl. Acad. Sci. 2020, 14, 58–66.
  11. Zhao, J.; Yang, J.; Hu, Z.; Kang, R.; Zhang, J. Development of an Environmentally Friendly Gel Foam and Assessment of Its Thermal Stability and Fire Suppression Properties in Liquid Pool Fires. Colloids Surf. A Physicochem. Eng. Asp. 2024, 692, 133990.
  12. Shcherbak, O.; Loboichenko, V.; Skorobahatko, T.; Shevchenko, R.; Levterov, A.; Pruskyi, A.; Khrystych, V.; Khmyrova, A.; Fedorchuk-Moroz, V.; Bondarenko, S. Study of Organic Carbon-Containing Additives to Water Used in Fire Fighting, in Terms of Their Environmental Friendliness. Fire Technol. 2024, 60, 3739–3765.
  13. Atay, G.Y.; Loboichenko, V.; Wilk-Jakubowski, J.Ł. Investigation of calcite and huntite/hydromagnesite mineral in co-presence regarding flame retardant and mechanical properties of wood composites. Cem. Wapno Beton. 2024, 29, 40–53.
  14. Nabipour, H.; Shi, H.; Wang, X.; Hu, X.; Song, L.; Hu, Y. Flame retardant Cellulose-Based hybrid hydrogels for firefighting and fire prevention. Fire Technol. 2022, 58, 2077–2091.
  15. Zhao, J.; Zhang, Z.; Liu, S.; Tao, Y.; Liu, Y. Design and research of an articulated tracked firefighting robot. Sensors 2022, 22, 5086.
  16. Li, N.; Shi, Z.; Jin, J.; Feng, J.; Zhang, A.; Xie, M.; Min, L.; Zhao, Y.; Lei, Y. Design of Intelligent Firefighting and Smart Escape Route Planning System Based on Improved Ant Colony Algorithm. Sensors 2024, 24, 6438.
  17. Cliftmann, J.M.; Anderson, B.E. Remotely extinguishing flames through transient acoustic streaming using time reversal focusing of sound. Sci. Rep. 2024, 14, 30049.
  18. Shi, X.; Zhang, J.; Zhang, Y.; Zhang, Y.; Zhao, Y.; Sun, K.; Li, S.; Yu, Y.; Jiao, F.; Cao, W. Combustion and Extinction Characteristics of an Ethanol Pool Fire Perturbed by Low-Frequency Acoustic Waves. Case Stud. Therm. Eng. 2024, 60, 104829.
  19. Shi, X.; Zhang, Y.; Chen, X.; Zhang, Y.; Ma, Q.; Lin, G. The Response of an Ethanol Pool Fire to Transverse Acoustic Waves. Fire Saf. J. 2021, 125, 103416.
  20. Taspinar, Y.S.; Koklu, M.; Altin, M. Acoustic-Driven Airflow Flame Extinguishing System Design and Analysis of Capabilities of Low Frequency in Different Fuels. Fire Technol. 2022, 58, 1579–1597.
  21. Loboichenko, V.; Wilk-Jakubowski, J.L.; Levterov, A.; Wilk-Jakubowski, G.; Statyvka, Y.; Shevchenko, O. Using the burning of polymer compounds to determine the applicability of the acoustic method in fire extinguishing. Polymers 2024, 16, 3413.
  22. Wang, Z.; Wang, Y.; Liao, M.; Sun, Y.; Wang, S.; Sun, X.; Shi, X.; Kang, Y.; Tian, M.; Bao, T.; et al. FireSonic: Design and Implementation of an Ultrasound Sensing-Based Fire Type Identification System. Sensors 2024, 24, 4360. [Google Scholar] [CrossRef] [PubMed]
  23. Su, Z.; Qi, D.; Yu, R.; Yang, K.; Chen, M.; Zhao, X.; Zhang, G.; Ying, Y.; Liu, D. Response Behavior of Inverse Diffusion Flame to Low Frequency Acoustic Field. Combust. Sci. Technol. 2024, 1–29. [Google Scholar] [CrossRef]
  24. Xiong, C.-Y.; Liu, Y.-H.; Xu, C.-S.; Huang, X.-Y. Response of Buoyant Diffusion Flame to the Low-frequency Sound. Kung Cheng Je Wu Li Hsueh Pao J. Eng. Thermophys. 2022, 43, 553–558. [Google Scholar]
  25. Xiong, C.; Liu, Y.; Fan, H.; Huang, X.; Nakamura, Y. Fluctuation and Extinction of Laminar Diffusion Flame Induced by External Acoustic Wave and Source. Sci. Rep. 2021, 11, 14402. [Google Scholar] [CrossRef]
  26. Zhang, Y.-J.; Jamil, H.; Wei, Y.-J.; Yang, Y.-J. Displacement and Extinction of Jet Diffusion Flame Exposed to Speaker-Generated Traveling Sound Waves. Appl. Sci. 2022, 12, 12978. [Google Scholar] [CrossRef]
  27. Rai, S.K.; Mahajan, K.A.; Roundal, V.B.; Gorane, P.S.; Patil, S.A.; Gadhave, S.L.; Javanjal, V.K.; Ingle, P. IOT Based Portable Fire Extinguisher Using Acoustic Setup. Panam. Math. J. 2024, 33, 15–29. [Google Scholar] [CrossRef]
  28. Xiong, C.; Liu, Y.; Xu, C.; Huang, X. Extinguishing the Dripping Flame by Acoustic Wave. Fire Saf. J. 2021, 120, 103109. [Google Scholar] [CrossRef]
  29. Jain, S.; Ranjan, A.; Fatima, M.; Siddharth. Performance Evaluation of Sonic Fire Fighting System. In Proceedings of the 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021. [Google Scholar] [CrossRef]
  30. Beisner, E.; Wiggins, N.D.; Yue, K.-B.; Rosales, M.; Penny, J.; Lockridge, J.; Page, R.; Smith, A.; Guerrero, L. Acoustic Flame Suppression Mechanics in a Microgravity Environment. Microgravity Sci. Technol. 2015, 27, 141–144. [Google Scholar] [CrossRef]
  31. Xiong, C.; Liu, Y.; Xu, C.; Huang, X. Acoustical Extinction of Flame on Moving Firebrand for the Fire Protection in Wildland–Urban Interface. Fire Technol. 2020, 57, 1365–1380. [Google Scholar] [CrossRef]
  32. De Luna, R.G.; Baylon, Z.A.P.; Garcia, C.A.D.; Huevos, J.R.G.; Ilagan, J.L.S.; Rocha, M.J.T. A Comparative Analysis of Machine Learning Approaches for Sound Wave Flame Extinction System Towards Environmental Friendly Fire Suppression. In Proceedings of the IEEE Region 10 Conference (TENCON 2023), Chiang Mai, Thailand, 31 October–3 November 2023. [Google Scholar] [CrossRef]
  33. Taspinar, Y.S.; Koklu, M.; Altin, M. Classification of Flame Extinction Based on Acoustic Oscillations Using Artificial Intelligence Methods. Case Stud. Therm. Eng. 2021, 28, 101561. [Google Scholar] [CrossRef]
  34. Ivanov, S.; Stankov, S. Acoustic Extinguishing of Flames Detected by Deep Neural Networks in Embedded Systems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 46, 307–312. [Google Scholar] [CrossRef]
  35. Ivanov, S.; Stankov, S. The Artificial Intelligence Platform with the Use of DNN to Detect Flames: A Case of Acoustic Extinguisher. In Proceedings of the International Conference on Intelligent Computing & Optimization 2021, Hua Hin, Thailand, 30–31 December 2021; Volume 371, pp. 24–34. [Google Scholar] [CrossRef]
  36. Wilk-Jakubowski, J.; Stawczyk, P.; Ivanov, S.; Stankov, S. High-power acoustic fire extinguisher with artificial intelligence platform. Int. J. Comput. Vis. Robot. 2022, 12, 236–249. [Google Scholar] [CrossRef]
  37. Wilk-Jakubowski, J.; Stawczyk, P.; Ivanov, S.; Stankov, S. The using of Deep Neural Networks and natural mechanisms of acoustic waves propagation for extinguishing flames. Int. J. Comput. Vis. Robot. 2022, 12, 101–119. [Google Scholar] [CrossRef]
  38. Wilk-Jakubowski, J. Analysis of Flame Suppression Capabilities Using Low-Frequency Acoustic Waves and Frequency Sweeping Techniques. Symmetry 2021, 13, 1299. [Google Scholar] [CrossRef]
  39. Niegodajew, P.; Gruszka, K.; Gnatowska, R.; Šofer, M. Application of acoustic oscillations in flame extinction in a presence of obstacle. In Proceedings of the XXIII Fluid Mechanics Conference (KKMP 2018), Zawiercie, Poland, 9–12 September 2018. [Google Scholar] [CrossRef]
  40. Niegodajew, P.; Łukasiak, K.; Radomiak, H.; Musiał, D.; Zajemska, M.; Poskart, A.; Gruszka, K. Application of acoustic oscillations in quenching of gas burner flame. Combust. Flame 2018, 194, 245–249. [Google Scholar] [CrossRef]
  41. Stawczyk, P.; Wilk-Jakubowski, J. Non-invasive attempts to extinguish flames with the use of high-power acoustic extinguisher. Open Eng. 2021, 11, 349–355. [Google Scholar] [CrossRef]
  42. McKinney, D.J.; Dunn-Rankin, D. Acoustically driven extinction in a droplet stream flame. Combust. Sci. Technol. 2007, 161, 27–48. [Google Scholar] [CrossRef]
  43. Węsierski, T.; Wilczkowski, S.; Radomiak, H. Wygaszanie procesu spalania przy pomocy fal akustycznych. Bezpieczeństwo Tech. Pożarnicza 2013, 30, 59–64. (In Polish). Available online: https://sft.cnbop.pl/pl/bi-tp-vol-2-issue-30-2013-wygaszanie-procesu-spalania-przy-pomocy-fal-akustycznych (accessed on 20 November 2024).
  44. Sai, R.T.; Sharma, G. Sonic Fire Extinguisher. Pramana Res. J. 2017, 8, 337–346. Available online: https://www.pramanaresearch.org/gallery/prj-p334_1.pdf (accessed on 20 November 2024).
  45. Wilk-Jakubowski, J.; Stawczyk, P.; Ivanov, S.; Stankov, S. Control of acoustic extinguisher with Deep Neural Networks for fire detection. Elektron. Electrotech. 2022, 28, 52–59. [Google Scholar] [CrossRef]
  46. Loboichenko, V.; Wilk-Jakubowski, J.; Wilk-Jakubowski, G.; Harabin, R.; Shevchenko, R.; Strelets, V.; Levterov, A.; Soshinskiy, A.; Tregub, N.; Antoshkin, O. The Use of Acoustic Effects for the Prevention and Elimination of Fires as an Element of Modern Environmental Technologies. Environ. Clim. Technol. 2022, 26, 319–330. [Google Scholar] [CrossRef]
  47. Yılmaz-Atay, H.; Wilk-Jakubowski, J.L. A Review of Environmentally Friendly Approaches in Fire Extinguishing: From Chemical Sciences to Innovations in Electrical Engineering. Polymers 2022, 14, 1224. [Google Scholar] [CrossRef] [PubMed]
  48. Vovchuk, T.S.; Wilk-Jakubowski, J.L.; Telelim, V.M.; Loboichenko, V.M.; Shevchenko, R.I.; Shevchenko, O.S.; Tregub, N.S. Investigation of the use of the acoustic effect in extinguishing fires of oil and petroleum products. SOCAR Proc. 2021, 2, 24–31. [Google Scholar] [CrossRef]
  49. Pronobis, M. Modernizacja Kotłów Energetycznych; Wydawnictwo Naukowo-Techniczne: Warszawa, Poland, 2002. (In Polish) [Google Scholar]
  50. Jędrusyna, A.; Noga, A. Wykorzystanie generatora fal infradźwiękowych dużej mocy do oczyszczania z osadów powierzchni grzewczych kotłów energetycznych. Piece Przem. Kotły 2012, 11, 30–37. (In Polish) [Google Scholar]
  51. Noga, A. Przegląd obecnego stanu wiedzy z zakresu techniki infradźwiękowej i możliwości wykorzystania fal akustycznych do oczyszczania urządzeń energetycznych. Zesz. Energetyczne 2014, 1, 225–234. (In Polish) [Google Scholar]
  52. Doronin, Yu.P. Physics of the Ocean; Gidrometeoizdat: Leningrad, Russia, 1978. (In Russian) [Google Scholar]
  53. Korovin, V.P.; Chvertkin, E.I. Marine Hydrometry; Gidrometeoizdat: Leningrad, Russia, 1988. (In Russian) [Google Scholar]
  54. Azarenko, O.; Goncharenko, Y.; Divizinyuk, M.; Loboichenko, V.; Farrakhov, O.; Polyakov, S. Mathematical model of acoustic temperature control in a local room, detection of ignitions and fire inside it by pulse acoustic probing. Grail Sci. 2024, 35, 122–135. (In Ukrainian) [Google Scholar] [CrossRef]
  55. Azarenko, O.; Honcharenko, Y.; Divizinyuk, M.; Mirnenko, V.; Strilets, V. The influence of technical and geographical parameters on the range of recording language information when solving applied problems. J. Sci. Pap. Soc. Dev. Secur. 2021, 11, 15–30. (In Ukrainian) [Google Scholar] [CrossRef]
  56. Divizinyuk, M.; Lutsyk, I.; Rak, V.; Kasatkina, N.; Franko, Y. Mathematical model of identification of radar targets for security of objects of critical infrastructure. In Proceedings of the 12th International Conference on Advanced Computer Information Technologies (ACIT), Deggendorf, Germany, 15–17 September 2021. [Google Scholar] [CrossRef]
  57. Azarenko, O.; Honcharenko, Y.; Divizinyuk, M.; Myroshnyk, O.; Polyakov, S.; Farrakhov, O. Patterns of sound propagation in the atmosphere as a means of monitoring the condition of local premises of a critical infrastructure facility. Inter. Conf. 2023, 40, 585–599. (In Ukrainian) [Google Scholar] [CrossRef]
  58. Rossi, L.; Akhloufi, M.; Tison, Y. On the use of stereovision to develop a novel instrumentation system to extract geometric fire fronts characteristics. Fire Saf. J. 2011, 46, 9–20. [Google Scholar] [CrossRef]
  59. Toulouse, T.; Rossi, L.; Campana, A.; Çelik, T.; Akhloufi, M. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [Google Scholar] [CrossRef]
  60. Toulouse, T.; Rossi, L.; Akhloufi, M.; Çelik, T.; Maldague, X. Benchmarking of wildland fire colour segmentation algorithms. IET Image Process. 2015, 9, 1064–1072. [Google Scholar] [CrossRef]
  61. Liu, Z.-G.; Yang, Y.; Ji, X.-H. Flame detection algorithm based on a saliency detection technique and the uniform local binary pattern in the YCbCr color space. Signal Image Video Process. 2016, 10, 277–284. [Google Scholar] [CrossRef]
  62. Wilk, J.Ł. Techniki Cyfrowego Rozpoznawania Krawędzi Obrazów; Wydawnictwo Stowarzyszenia Współpracy Polska-Wschód, Oddział Świętokrzyski: Kielce, Poland, 2009. (In Polish) [Google Scholar]
  63. Thokale, A.; Sonar, P. Hybrid approach to detect a fire based on motion color and edge. Digit. Image Process. 2015, 7, 273–277. Available online: http://www.ciitresearch.org/dl/index.php/dip/article/view/DIP102015003 (accessed on 1 December 2024).
  64. Kong, S.G.; Jin, D.; Li, S.; Kim, H. Fast fire flame detection in surveillance video using logistic regression and temporal smoothing. Fire Saf. J. 2016, 79, 37–43. [Google Scholar] [CrossRef]
  65. Chen, X.; Zhang, X.; Zhang, Q. Fire Alarm Using Multi-rules Detection and Texture Features Classification in Video Surveillance. In Proceedings of the 7th International Conference on Intelligent Computation Technology and Automation, Changsha, China, 25–26 October 2014. [Google Scholar] [CrossRef]
  66. Çelik, T.; Demirel, H. Fire detection in video sequences using a generic color model. Fire Saf. J. 2009, 44, 147–158. [Google Scholar] [CrossRef]
  67. Töreyin, B.U. Fire Detection Algorithms Using Multimodal Signal and Image Analysis. Ph.D. Thesis, Bilkent University, Ankara, Turkey, 2009. [Google Scholar]
  68. Töreyin, B.U.; Dedeoğlu, Y.; Güdükbay, U.; Çetin, A.E. Computer vision based method for real-time fire and flame detection. Pattern Recogn. Lett. 2006, 27, 49–58. [Google Scholar] [CrossRef]
  69. Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta BBA Protein Struct. 1975, 405, 442–451. [Google Scholar] [CrossRef]
  70. Blair, D.C. Information Retrieval, 2nd ed.; Van Rijsbergen, C.J., Ed.; Butterworths: London, UK, 1979. [Google Scholar] [CrossRef]
  71. Toulouse, T.; Rossi, L.; Çelik, T.; Akhloufi, M. Automatic fire pixel detection using image processing: A comparative analysis of rule-based and machine learning-based methods. Signal Image Video Process. 2016, 10, 647–654. [Google Scholar] [CrossRef]
  72. Hafiane, A.; Chabrier, S.; Rosenberger, C.; Laurent, H. A New Supervised Evaluation Criterion for Region Based Segmentation Methods. In Proceedings of the 9th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2007), Delft, The Netherlands, 28–31 August 2007. [Google Scholar] [CrossRef]
  73. Celen, V.B.; Demirci, M.F. Fire Detection in Different Color Models. In Proceedings of the WorldComp 2012 Proceedings, Las Vegas, NV, USA, 16–19 July 2012; pp. 1–7. Available online: http://worldcomp-proceedings.com/proc/p2012/IPC8008.pdf (accessed on 8 December 2024).
  74. Phillips III, W.; Shah, M.; da Vitoria Lobo, N. Flame recognition in video. Pattern Recogn. Lett. 2000, 23, 319–327. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.4615 (accessed on 8 December 2024). [CrossRef]
  75. Çelik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire detection using statistical color model in video sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185. [Google Scholar] [CrossRef]
  76. Ko, B.C.; Cheong, K.-H.; Nam, J.-Y. Fire detection based on vision sensor and support vector machines. Fire Saf. J. 2009, 44, 322–329. [Google Scholar] [CrossRef]
  77. Madani, K.; Kachurka, V.; Sabourin, C.; Amarger, V.; Golovko, V.; Rossi, L. A human-like visual-attention-based artificial vision system for wildland firefighting assistance. Appl. Intell. 2017, 48, 2157–2179. [Google Scholar] [CrossRef]
  78. Chen, T.; Wu, P.; Chiou, Y. An early fire-detection method based on image processing. In Proceedings of the IEEE International Conference on Image Processing (ICIP’04), Singapore, 24–27 October 2004. [Google Scholar] [CrossRef]
  79. Szegedy, C.; Toshev, A.; Erhan, D. Deep Neural Networks for Object Detection. Adv. Neural Inf. Process. Syst. 2013, 26, 1–9. [Google Scholar]
  80. Janků, P.; Komínková Oplatková, Z.; Dulík, T. Fire detection in video stream by using simple artificial neural network. Mendel 2018, 24, 55–60. [Google Scholar] [CrossRef]
  81. Foley, D.; O’Reilly, R. An Evaluation of Convolutional Neural Network Models for Object Detection in Images on Low-End Devices. In Proceedings of the 26th Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, 6–7 December 2018; Available online: http://ceur-ws.org/Vol-2259/aics_32.pdf (accessed on 12 December 2024).
  82. Simple Understanding of Mask RCNN. Available online: https://medium.com/@alittlepain833/simple-understanding-of-mask-rcnn-134b5b330e95 (accessed on 12 December 2024).
  83. Kurup, R. Vision-Based Fire Flame Detection System Using Optical flow Features and Artificial Neural Network. Int. J. Sci. Res. 2014, 3, 2161–2168. [Google Scholar]
  84. Collumeau, J.-F.; Laurent, H.; Hafiane, A.; Chetehouna, K. Fire scene segmentations for forest fire characterization: A comparative study. In Proceedings of the 18th International Conference on Image Processing (ICIP), Brussels, Belgium, 11–14 September 2011. [Google Scholar] [CrossRef]
  85. Rudz, S.; Chetehouna, K.; Hafiane, A.; Laurent, H.; Séro-Guillaume, O. Investigation of a novel image segmentation method dedicated to forest fire applications. Meas. Sci. Technol. 2013, 24, 075403. [Google Scholar] [CrossRef]
  86. Yamagishi, H.; Yamaguchi, J. A contour fluctuation data processing method for fire flame detection using a color camera. In Proceedings of the 26th Annual Conference of the IEEE Industrial Electronics Society (IECON 2000), Nagoya, Japan, 22–28 October 2000. [Google Scholar] [CrossRef]
  87. Liu, C.-B.; Ahuja, N. Vision based fire detection. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Cambridge, UK, 26 August 2004. [Google Scholar] [CrossRef]
  88. Marbach, G.; Loepfe, M.; Brupbacher, T. An image processing technique for fire detection in video images. Fire Saf. J. 2006, 41, 285–289. [Google Scholar] [CrossRef]
  89. Çelik, T. Fast and Efficient Method for Fire Detection Using Image Processing. ETRI J. 2010, 32, 881–890. [Google Scholar] [CrossRef]
  90. Horng, W.-B.; Peng, J.-W.; Chen, C.-Y. A New Image-Based Real-Time Flame Detection Method Using Color Analysis. In Proceedings of the IEEE International Conference on Networking, Sensing and Control, Tucson, AZ, USA, 19–22 March 2005. [Google Scholar] [CrossRef]
  91. Rossi, L.; Akhloufi, M. Dynamic Fire 3D Modeling Using a Real-Time Stereovision System. In Technological Developments in Education and Automation; Iskander, M., Kapila, V., Karim, M., Eds.; Springer: Dordrecht, The Netherlands, 2010; pp. 33–38. [Google Scholar] [CrossRef]
  92. Li, Z.; Mihaylova, L.S.; Isupova, O.; Rossi, L. Autonomous Flame Detection in Videos with a Dirichlet Process Gaussian Mixture Color Model. IEEE Trans. Ind. Inform. 2017, 14, 1146–1154. [Google Scholar] [CrossRef]
  93. Rothermel, R.C.; Anderson, H.E. Fire Spread Characteristics Determined in the Laboratory; US Department of Agriculture: Ogden, UT, USA, 1966. Available online: https://www.fs.usda.gov/rm/pubs_int/int_rp030.pdf (accessed on 15 December 2024).
  94. Grishin, A.M. Mathematical Modeling of Forest Fires and New Methods of Fighting Them; Publishing House of the Tomsk State University: Tomsk, Russia, 1997. [Google Scholar]
  95. Rossi, J.-L.; Chetehouna, K.; Collin, A.; Moretti, B.; Balbi, J.-H. Simplified Flame Models and Prediction of the Thermal Radiation Emitted by a Flame Front in an Outdoor Fire. Combust. Sci. Technol. 2010, 182, 1457–1477. [Google Scholar] [CrossRef]
  96. Foggia, P.; Saggese, A.; Vento, M. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  97. Habiboğlu, Y.H.; Günay, O.; Çetin, A.E. Covariance matrix-based fire and flame detection method in video. Mach. Vis. Appl. 2012, 23, 1103–1113. [Google Scholar] [CrossRef]
  98. Mueller, M.; Karasev, P.; Kolesov, I.; Tannenbaum, A. Optical Flow Estimation for Flame Detection in Videos. IEEE Trans. Image Process. 2013, 22, 2786–2797. [Google Scholar] [CrossRef] [PubMed]
  99. Chi, R.; Lu, Z.-M.; Ji, Q.-G. Real-time multi-feature based fire flame detection in video. IET Image Process. 2017, 11, 31–37. [Google Scholar] [CrossRef]
  100. Görür, D.; Rasmussen, C.E. Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution. J. Comput. Sci. Technol. 2010, 25, 653–664. [Google Scholar] [CrossRef]
  101. Teh, Y.W.; Jordan, M.I.; Beal, M.J.; Blei, D.M. Hierarchical Dirichlet processes. J. Am. Stat. Assoc. 2006, 101, 1566–1581. Available online: https://www.jstor.org/stable/27639773 (accessed on 18 December 2024). [CrossRef]
  102. Borges, P.V.K.; Izquierdo, E. A Probabilistic Approach for Vision-Based Fire Detection in Videos. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 721–731. [Google Scholar] [CrossRef]
  103. Wang, D.-C.; Cui, X.; Park, E.; Jin, C.; Kim, H. Adaptive flame detection using randomness testing and robust features. Fire Saf. J. 2013, 55, 116–125. [Google Scholar] [CrossRef]
  104. Wald, A.; Wolfowitz, J. An Exact Test for Randomness in the Non-Parametric Case Based on Serial Correlation. Ann. Math. Stat. 1943, 14, 378–388. [Google Scholar] [CrossRef]
  105. Martínez-de Dios, J.R.; Merino, L.; Caballero, F.; Ollero, A. Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems. Sensors 2011, 11, 6328–6353. [Google Scholar] [CrossRef]
  106. Plucinski, M.P. A Review of Wildfire Occurrence Research; Bushfire Cooperative Research Centre: Melbourne, Australia, 2012. [Google Scholar]
  107. Pérez, Y.; Pastor, E.; Planas, E.; Plucinski, M.; Gould, J. Computing forest fires aerial suppression effectiveness by IR monitoring. Fire Saf. J. 2011, 46, 2–8. [Google Scholar] [CrossRef]
  108. Karimi, N. Response of a conical, laminar premixed flame to low amplitude acoustic forcing—A comparison between experiment and kinematic theories. Energy 2014, 78, 490–500. [Google Scholar] [CrossRef]
  109. DARPA. DARPA Demos Acoustics Suppression of Flame. Available online: https://www.youtube.com/watch?v=DanOeC2EpeA&t=9s (accessed on 20 December 2024).
  110. Im, H.G.; Law, C.K.; Axelbaum, R.L. Opening of the Burke-Schumann Flame Tip and the Effects of Curvature on Diffusion Flame Extinction. Proc. Combust. Inst. 1991, 23, 551–558. [Google Scholar] [CrossRef]
  111. Radomiak, H.; Mazur, M.; Zajemska, M.; Musiał, D. Gaszenie płomienia dyfuzyjnego przy pomocy fal akustycznych. Bezpieczeństwo Technol. Pożarnicza 2015, 40, 29–38. (In Polish) [Google Scholar] [CrossRef]
  112. Marek, M. Bayesian Regression Model Estimation: A Road Safety Aspect. In Proceedings of the International Conference on Smart City Applications SCA 2022, Castelo Branco, Portugal, 19–21 October 2022; Volume 5, pp. 163–175. [Google Scholar] [CrossRef]
  113. Marek, M. Wykorzystanie ekonometrycznego modelu klasycznej funkcji regresji liniowej do przeprowadzenia analiz ilościowych w naukach ekonomicznych. In Rola Informatyki w Naukach Ekonomicznych i Społecznych. Innowacje i Implikacje Interdyscyplinarne; Wydawnictwo Wyższej Szkoły Handlowej im. B. Markowskiego w Kielcach: Kielce, Poland, 2013. (In Polish) [Google Scholar]
  114. Wilk-Jakubowski, G.; Harabin, R.; Skoczek, T.; Wilk-Jakubowski, J. Preparation of the Police in the Field of Counter-terrorism in Opinions of the Independent Counter-terrorist Sub-division of the Regional Police Headquarters in Cracow. Slovak. J. Political Sci. 2022, 22, 174–208. [Google Scholar] [CrossRef]
  115. Marek, M. Aspects of Road Safety: A Case of Education by Research—Analysis of Parameters Affecting Accident. In Proceedings of the Education and Research in the Information Society Conference (ERIS), Plovdiv, Bulgaria, 27–28 September 2021; Available online: https://ceur-ws.org/Vol-3061/ERIS_2021-art07(reg).pdf (accessed on 20 December 2024).
  116. Chitade, A.Z.; Katiyar, S.K. Colour based image segmentation using k-means clustering. Int. J. Eng. Sci. Technol. 2010, 2, 5319–5325. Available online: https://www.oalib.com/research/2110040 (accessed on 22 December 2024).
  117. Chen, J.; He, Y.; Wang, J. Multi-feature fusion based fast video flame detection. Build. Environ. 2010, 45, 1113–1122. [Google Scholar] [CrossRef]
  118. Pan, H.; Badawi, D.; Zhang, X.; Çetin, A.E. Additive neural network for forest fire detection. Signal Image Video Process 2020, 14, 675–682. [Google Scholar] [CrossRef]
  119. Wilk-Jakubowski, G.; Harabin, R.; Ivanov, S. Robotics in crisis management: A review. Technol. Soc. 2022, 68, 101935. [Google Scholar] [CrossRef]
  120. Wilk-Jakubowski, G. Normative Dimension of Crisis Management System in the Third Republic of Poland in an International Context. Organizational and Economic Aspects; Wydawnictwo Społecznej Akademii Nauk: Łódź-Warszawa, Poland, 2019. [Google Scholar]
  121. San-Miguel-Ayanz, J.; Ravail, N. Active Fire Detection for Fire Emergency Management: Potential and Limitations for the Operational Use of Remote Sensing. Nat. Hazards 2005, 35, 361–376. [Google Scholar] [CrossRef]
  122. Scott, R.; Nowell, B. Networks and Crisis Management. Oxford Research Encyclopedia of Politics. 2020. Available online: https://oxfordre.com/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-1650 (accessed on 22 December 2024).
  123. Wilk-Jakubowski, J. Broadband satellite data networks in the context of available protocols and digital platforms. Inform. Autom. Pomiary Gospod. Ochr. Sr. 2021, 11, 56–60. [Google Scholar] [CrossRef]
  124. Wilk-Jakubowski, J. Overview of broadband information systems architecture for crisis management. Inform. Autom. Pomiary Gospod. Ochr. Sr. 2020, 10, 20–23. [Google Scholar] [CrossRef]
  125. Suematsu, N.; Oguma, H.; Eguchi, S.; Kameda, S.; Sasanuma, M.; Kuroda, K. Multi-mode SDR VSAT against big disasters. In Proceedings of the European Microwave Conference’13, Nuremberg, Germany, 6–10 October 2013; Available online: https://ieeexplore.ieee.org/document/6686788 (accessed on 28 December 2024).
  126. Azarenko, O.; Honcharenko, Y.; Divizinyuk, M.; Mirnenko, V.; Strilets, V.; Wilk-Jakubowski, J.L. Influence of anthropogenic factors on the solution of applied problems of recording language information in the open area. Soc. Dev. Secur. 2022, 12, 135–143. [Google Scholar] [CrossRef]
  127. Šerić, L.; Stipanicev, D.; Krstinić, D. ML/AI in Intelligent Forest Fire Observer Network. In Proceedings of the 3rd EAI International Conference on Management of Manufacturing Systems, Dubrovnik, Croatia, 6–8 November 2018. [Google Scholar] [CrossRef]
  128. Wilk-Jakubowski, J. Information systems engineering using VSAT networks. Yugosl. J. Oper. Res. 2021, 31, 409–428. [Google Scholar] [CrossRef]
  129. Azarenko, O.; Honcharenko, Y.; Divizinyuk, M.; Mirnenko, V.; Strilets, V.; Wilk-Jakubowski, J.L. The influence of air environment properties on the solution of applied problems of capturing speech information in the open terrain. Soc. Dev. Secur. 2022, 12, 64–77. [Google Scholar] [CrossRef]
  130. Zeng, L.; Zhang, C.; Qin, P.; Zhou, Y.; Cai, Y. One Method for Predicting Satellite Communication Terminal Service Demands Based on Artificial Intelligence Algorithms. Appl. Sci. 2024, 14, 6019. [Google Scholar] [CrossRef]
  131. Wilk-Jakubowski, J. Measuring Rain Rates Exceeding the Polish Average by 0.01%. Pol. J. Environ. Stud. 2018, 27, 383–390. [Google Scholar] [CrossRef] [PubMed]
  132. Negi, P.; Pathani, A.; Bhatt, B.C.; Swami, S.; Singh, R.; Gehlot, A.; Thakur, A.K.; Gupta, L.R.; Priyadarshi, N.; Twala, B.; et al. Integration of Industry 4.0 Technologies in Fire and Safety Management. Fire 2024, 7, 335. [Google Scholar] [CrossRef]
  133. Chen, Y.; Morton, D.C.; Randerson, J.T. Remote sensing for wildfire monitoring: Insights into burned area, emissions, and fire dynamics. One Earth 2024, 7, 1022–1028. [Google Scholar] [CrossRef]
  134. NASA. Earth Science—Applied Sciences. Monitoring Fires with Fast-Acting Data. Available online: https://appliedsciences.nasa.gov/our-impact/story/monitoring-fires-fast-acting-data (accessed on 28 December 2024).
  135. Wilk-Jakubowski, J. Total Signal Degradation of Polish 26-50 GHz Satellite Systems Due to Rain. Pol. J. Environ. Stud. 2018, 27, 397–402. [Google Scholar] [CrossRef]
  136. Wilk-Jakubowski, J. Predicting Satellite System Signal Degradation due to Rain in the Frequency Range of 1 to 25 GHz. Pol. J. Environ. Stud. 2018, 27, 391–396. [Google Scholar] [CrossRef]
  137. Stankov, S.; Ivanov, S. Intelligent Sensor For Fire Detection With Deep Neural Networks. J. Inf. Inn. Technol. JIIT 2020, 1, 25–28. Available online: https://journal.iiit.bg/wp-content/uploads/2020/08/4_INTELLIGENT-SENSOR-FOR-FIRE-DETECTION.pdf (accessed on 29 December 2024).
  138. Dirik, M. Fire extinguishers based on acoustic oscillations in airflow using fuzzy classification. J. Fuzzy Ext. Appl. 2023, 4, 217–234. Available online: https://www.journal-fea.com/article_175269_e5257a225a450ed0d7840f6368c55f60.pdf (accessed on 29 December 2024).
  139. Yadav, R.; Shirazi, R.; Choudhary, A.; Yadav, S.; Raghuvanshi, R. Designing of Fire Extinguisher Based on Sound Waves. Int. J. Eng. Adv. Technol. 2020, 9, 927–930. Available online: https://www.ijeat.org/wp-content/uploads/papers/v9i4/D7301049420.pdf (accessed on 29 December 2024). [CrossRef]
  140. Fegade, R.; Rai, K.; Dalvi, S. Extinguishing Fire Using Low Frequency Sound from Subwoofer. Gradiva Rev. J. 2022, 8, 708–713. [Google Scholar]
  141. Wilk-Jakubowski, J.L. Acoustic firefighting method on the basis of European research: A review. Akustika 2023, 46, 31–45. [Google Scholar] [CrossRef]
  142. Koklu, M.; Taspinar, Y.S. Determining the extinguishing status of fuel flames with sound wave by machine learning methods. IEEE Access 2021, 9, 207–216. [Google Scholar] [CrossRef]
  143. Wilk-Jakubowski, J. Experimental Investigation of Amplitude-Modulated Waves for Flame Extinguishing: A Case of Acoustic Environmentally Friendly Technology. Environ. Clim. Technol. 2023, 27, 627–638. [Google Scholar] [CrossRef]
  144. Loboichenko, V.; Wilk-Jakubowski, G.; Wilk-Jakubowski, J.L.; Ciosmak, J. Application of Low-Frequency Acoustic Waves to Extinguish Flames on the Basis of Selected Experimental Attempts. Appl. Sci. 2024, 14, 8872. [Google Scholar] [CrossRef]
  145. Wilk-Jakubowski, J.; Wilk-Jakubowski, G.; Loboichenko, V. Experimental Attempts of Using Modulated and Unmodulated Waves in Low-Frequency Acoustic Wave Flame Extinguishing Technology: A Review of Selected Cases. Stroj. Vestn. J. Mech. Eng. 2024, 70, 270–281. [Google Scholar] [CrossRef]
  146. Wilk-Jakubowski, J.L. Experimental Study of the Influence of Even Harmonics on Flame Extinguishing by Low-Frequency Acoustic Waves with the Use of High-Power Extinguisher. Appl. Sci. 2024, 14, 11809. [Google Scholar] [CrossRef]
  147. Jin, J.; Kim, S.; Moon, J. Development of a Firefighting Drone for Constructing Fire-breaks to Suppress Nascent Low-Intensity Fires. Appl. Sci. 2024, 14, 1652. [Google Scholar] [CrossRef]
  148. Future Content. Fire Retardant Material—A History. Available online: https://specialistworkclothing.wordpress.com/2014/03/05/fire-retardant-material-a-history (accessed on 30 December 2024).
  149. LeVan, S.L. Chemistry of Fire Retardancy; U.S. Department of Agriculture, Forest Service, Forest Products Laboratory: Madison, WI, USA, 1984; pp. 531–574.
  150. Salasinska, K.; Mizera, K.; Celiński, M.; Kozikowski, P.; Mirowski, J.; Gajek, A. Thermal properties and fire behavior of a flexible poly(vinyl chloride) modified with complex of 3-aminotriazole with zinc phosphate. Fire Saf. J. 2021, 122, 103326. [Google Scholar] [CrossRef]
  151. Rabajczyk, A.; Zielecka, M.; Popielarczyk, T.; Sowa, T. Nanotechnology in Fire Protection—Application and Requirements. Materials 2021, 14, 7849. [Google Scholar] [CrossRef] [PubMed]
  152. Bras, M.L.; Wilkie, C.A.; Bourbigot, S. Fire Retardancy of Polymers-New Applications of Mineral Fillers; The Royal Society of Chemistry: Sawston, UK, 2005; pp. 4–6. [Google Scholar]
  153. Rabajczyk, A.; Zielecka, M.; Gniazdowska, J. Application of Nanotechnology in Extinguishing Agents. Materials 2022, 15, 8876. [Google Scholar] [CrossRef] [PubMed]
  154. Wicklein, B.; Kocjan, D.; Carosio, F.; Camino, G.; Bergström, L. Tuning the nanocellulose–borate interaction to achieve highly flame retardant hybrid materials. Chem. Mater. 2016, 28, 1985–1989. [Google Scholar] [CrossRef]
Figure 1. (a) A functional scheme of the proposed model (for the g-level), where t is time, I is the intensity of the acoustic wave, and 1, 2, 3, 4, 5 are the numbers of probing pulses; and (b) an illustration of the levels at which sensors can be placed in the model (based on Equation (16)).
Figure 2. Examples of pixels used for pixel learning. The following categories may be listed: 1—red with smoke; 2—red with no smoke; 3—orange with smoke; 4—orange with no smoke; 5—yellow-white with smoke; 6—yellow-white with no smoke; and pixels: 7—low brightness; 8—medium brightness; and 9—high brightness. The RGB values of the pixels corresponding to the colors of fires can vary across different systems.
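One simple way to realize the pixel-category labelling illustrated in Figure 2 is nearest-centroid assignment in RGB space. The sketch below is illustrative only; the centroid values are hypothetical examples, since, as the caption notes, the RGB values corresponding to fire colors vary across systems.

```python
# Illustrative sketch: assign a pixel to the nearest of several RGB category
# centroids (Euclidean distance). The centroid values are hypothetical
# examples, not values taken from the reviewed publications.
import math

# Hypothetical centroids for three of the nine categories in Figure 2.
CENTROIDS = {
    "red":          (200, 40, 30),
    "orange":       (230, 140, 40),
    "yellow-white": (250, 240, 180),
}

def nearest_category(r: int, g: int, b: int) -> str:
    """Return the category whose centroid is closest to the pixel in RGB space."""
    return min(CENTROIDS, key=lambda c: math.dist(CENTROIDS[c], (r, g, b)))

print(nearest_category(210, 50, 35))  # "red"
```

In a trained system, these centroids would instead be learned from labelled pixels, and smoke/no-smoke and brightness sub-categories would add further dimensions or separate classifiers.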
Figure 3. (a) An example of the hardware of the system (overview); and (b) an experimental sensor for fire detection developed by the authors.
Table 1. Properties for fire detection techniques.

Properties              List of Publications
Color                   [66,78,86]
Texture                 [97,99]
Shape                   [87,96,103]
Flickering properties   [68,92]
Dynamics                [68,86,97,98]
Combined features       [61,62,63,64,99]
Processing techniques
Table 2. Two basic types of models.

Type of Model                 Description
Empirical inequality models   Based on empirical inequalities with experimentally determined thresholds. These models work well in detecting real flame pixels, but poorly in filtering out noise.
Statistical models            Based on models trained with real data. Effectiveness is higher when an appropriate model is trained with sufficient data (the number of mixture components is not known in advance).
Table 3. Application examples of color spaces used in rules (color space: authors [publications]).

RGB: W. Phillips III, M. Shah, and N. da Vitoria Lobo; B. U. Töreyin, Y. Dedeoğlu, U. Güdükbay, and A. E. Çetin; T. Çelik, H. Demirel, H. Ozkaramanli, and M. Uyguroglu; B. C. Ko, K.-H. Cheong, and J.-Y. Nam; J.-F. Collumeau, H. Laurent, A. Hafiane, and K. Chetehouna [68,74,75,76,84]
YCbCr: T. Toulouse, L. Rossi, M. Akhloufi, T. Çelik, and X. Maldague; T. Çelik and H. Demirel [60,66]
HSI: W.-B. Horng, J.-W. Peng, and C.-Y. Chen [90]
HSV: C.-B. Liu and N. Ahuja [87]
YUV: G. Marbach, M. Loepfe, and T. Brupbacher [88]
L*a*b*: T. Çelik; A. Z. Chitade and S. K. Katiyar [89,116]
Other (combinations of color spaces, rules): T. Chen, P. Wu, and Y. Chiou; J. Chen, Y. He, and J. Wang; L. Rossi, M. Akhloufi, and Y. Tison; S. Rudz, K. Chetehouna, A. Hafiane, H. Laurent, and O. Séro-Guillaume [58,78,85,91,117]
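The appeal of luminance–chrominance spaces such as YCbCr in Table 3 is that the fire rule separates brightness from color. The sketch below converts an RGB pixel to YCbCr using the standard ITU-R BT.601 full-range approximation and applies a rule of the general form used in YCbCr-based detection (Y > Cb and Cr > Cb); it is a generic restatement for illustration, not the exact rule of any cited publication.

```python
# Illustrative sketch: BT.601 full-range RGB -> YCbCr conversion followed by a
# generic chrominance-based flame rule (Y > Cb and Cr > Cb).

def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert full-range RGB to YCbCr using ITU-R BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_flame_pixel_ycbcr(r: float, g: float, b: float) -> bool:
    """Generic YCbCr-form flame rule: bright pixel with strong red chrominance."""
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    return y > cb and cr > cb

print(is_flame_pixel_ycbcr(220, 120, 40))  # fire-like orange: True
print(is_flame_pixel_ycbcr(60, 60, 60))    # dark grey: False
```

For achromatic pixels Cb = Cr = 128, so the Cr > Cb test immediately rejects greys regardless of brightness, which is harder to express compactly in pure RGB.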