Article

Performance Evaluation of Real-Time Image-Based Heat Release Rate Prediction Model Using Deep Learning and Image Processing Methods

1 Department of Equipment and Fire Protection Engineering, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
2 Department of Mechanical Engineering, Major of Equipment and Fire Protection Engineering, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
Fire 2025, 8(7), 283; https://doi.org/10.3390/fire8070283
Submission received: 11 June 2025 / Revised: 10 July 2025 / Accepted: 16 July 2025 / Published: 18 July 2025

Abstract

Heat release rate (HRR) is a key indicator for characterizing fire behavior, and it is conventionally measured under laboratory conditions. However, this measurement approach is difficult to apply widely across fire conditions because of its high cost, operational complexity, and lack of real-time predictive capability. Therefore, this study proposes an image-based HRR prediction model that uses deep learning and image processing techniques. The flame region in a fire video was segmented using the YOLO-YCbCr model, which integrates YCbCr color-space-based segmentation with YOLO object detection. For comparative analysis, the YOLO segmentation model was used. Furthermore, the fire diameter and flame height were determined from the spatial information of the segmented flame, and the HRR was predicted based on the correlation between flame size and HRR. The proposed models were applied to various experimental fire videos, and their prediction performances were quantitatively assessed. The results indicated that the proposed models accurately captured the HRR variations over time, and applying the average flame height calculation enhanced the prediction performance by reducing fluctuations in the predicted HRR. These findings demonstrate that the image-based HRR prediction model can be used to estimate real-time HRR values in diverse fire environments.

1. Introduction

Heat release rate (HRR) is one of the most critical parameters for analyzing fire behavior, and it is a key indicator of a fire’s scale and growth [1,2,3,4]. Furthermore, HRR is a crucial factor in fire risk assessment and in the development of firefighting and rescue strategies, directly affecting the effectiveness of fire suppression and rescue operations [5,6,7]. The HRR of various combustible materials is typically measured in laboratory settings using cone calorimeters, and several studies have focused on improving the reliability of these measurements [8,9,10]. Although this approach can provide precise HRR measurements, the significant temporal, spatial, and technical resource requirements limit its widespread use in various fire scenarios. Therefore, many studies on HRR prediction using various approaches have been conducted to overcome these limitations and facilitate more accessible HRR predictions [11,12].
Previous research has proposed various empirical correlations to predict HRR by analyzing relationships between combustion characteristics and HRR. Heskestad [13] developed a general correlation to predict the mean flame height of buoyant turbulent flames based on the relationship between flame characteristics and heat release parameters. This correlation was validated across a wide range of fire conditions and has since been widely used to estimate flame heights in various fire scenarios. Based on various fire experiments, Zukoski [14] developed a correlation expressing the ratio of flame height to fire diameter as a function of the non-dimensional heat release rate. This formulation has proven robust across a broad range of fire conditions. Furthermore, Ma et al. [15] developed a modified empirical correlation to estimate the HRR in pool fires under various pressure conditions. This study focused on n-heptane pool fires in a ventilated altitude chamber and highlighted the significant effect of ambient pressure on both the mass burning rate and the HRR. By incorporating a pressure correction factor into the oxygen-consumption-based HRR calculation, the results revealed that the HRR markedly increases with pressure. Huang et al. [16] developed an empirical correlation to estimate the HRR in large-scale cable fires based on the relationship between cable composition and combustion characteristics. This study demonstrated a significant improvement in HRR prediction accuracy in complex fire scenarios by considering the physical and chemical properties of cables and their burning behavior. Tan et al. [17] proposed an HRR prediction correlation for designing fire sources to evaluate the effects of design parameters on the tunnel fire temperature field. Fire experiments were conducted using oil-filled square burners of various sizes, and an empirical correlation between HRR and burner area was derived.
Despite advancements in empirical correlations for HRR prediction, these methods require experimentally measured information such as fuel properties, fire conditions, and flame characteristics, which limits their practical application. Consequently, artificial intelligence (AI) models have attracted increasing attention for predicting fire-related characteristics. However, most AI-based studies have focused on fire detection and segmentation for smart fire protection [18,19,20,21]. Yan et al. [22] proposed a You Only Look Once model (YOLOv5) with coordinate attention, Swin Transformers, and feature fusion (YOLOv5-CSF); this is an improved fire detection model based on the YOLOv5 architecture. YOLOv5-CSF integrates coordinate attention blocks, Swin Transformer modules, and adaptive spatial feature fusion, enhancing the detection performance of the model for small and complex flames. This model outperformed traditional models in terms of the mean average precision. Li et al. [23] proposed a fire and smoke detection model based on a detection transformer framework enhanced by a convolutional neural network backbone. They incorporated a normalization-based attention module during feature extraction and adopted a multiscale deformable attention mechanism in the encoder–decoder structure to improve the detection of small-scale flames and smoke. Their proposed model achieved average precisions of 81.7% and 76.0% for flames and smoke, respectively. Song et al. [24] developed a lightweight fire segmentation model by integrating depth-wise separable convolutions and a novel confusion block architecture into a modified FusionNet. Their model achieved high segmentation accuracy and reduced computational costs; the results indicated that the intersection over union (IoU) scores were 0.79 and 0.91 on the FiSmo and Corsican fire datasets, respectively. Choi et al. [25] developed a semantic fire segmentation model for outdoor images by improving the FusionNet architecture with input and output convolution layers and middle skip connections, resulting in more precise pixel-level fire detection. Carmignani [26] developed a “flame tracker” model using Python; this model automatically analyzes flame characteristics such as size, color, and temperature by extracting hue variations from flame images and videos in the RGB color space. This method highlights the potential of flame imagery as a versatile tool for HRR prediction in diverse fire scenarios.
Recently, the prediction of transient HRR using AI models has gained increasing attention, and a limited number of studies have begun to explore this area [27,28]. Wang et al. [29] proposed a deep learning-based HRR prediction model by utilizing the VGG16 architecture designed for use in real fire environments, where conventional HRR measurement methods cannot be easily applied. The model was validated using both laboratory and real-world fire data, achieving a coefficient of determination R2 of 0.83 for low-HRR fires and an R2 of 0.68 for high-HRR fires. Xu et al. [30] introduced a deep learning model to predict transient HRR using flame imagery and a hybrid Att-BiLSTM network. This model utilized attention mechanisms and bidirectional LSTM layers for capturing key temporal features from fire image sequences, which enabled accurate HRR forecasting, with an R2 of 0.997 for the validation datasets. Prasad [31] introduced a recurrent neural network (RNN)-based HRR prediction model using fire video data, demonstrating an improved predictive performance compared to that of traditional correlations, such as the Heskestad equation. The RNN model learned the temporal dynamics of flame patterns, capturing the evolving nature of fires and improving HRR estimation. The model was evaluated using fire video data from wood pallet and cardboard box fire experiments; the model yielded a mean absolute error (MAE) of around 115 kW, indicating robust prediction performance across diverse fire environments.
A review of the existing literature indicates that many studies have focused on accurately measuring the HRR of various combustibles in laboratory environments using experimental equipment to improve measurement precision and reliability. Prior research on AI models for predicting fire characteristics has primarily focused on detecting and segmenting flames and smoke from fire images. However, a limited number of studies have explored HRR prediction using sequence modeling approaches. While such models demonstrate promising real-time prediction performance, they typically require large-scale paired datasets of fire image sequences and corresponding HRR values for training. Moreover, purely data-driven models are limited in identifying the underlying physical behavior of flames. Thus, this study proposes a real-time HRR prediction model that predicts the HRR from fire images by extracting flame features such as flame height and fire diameter using a deep learning- and image processing-based segmentation technique. Without the need for paired HRR datasets, the proposed model enables HRR estimation by segmenting the flame regions, extracting the fire diameter and flame height, and applying the Zukoski correlation to estimate HRR. For the flame segmentation component, a novel YOLO-YCbCr model [32] was introduced, which integrates YCbCr color-space-based rule segmentation with deep learning-based object detection for enhanced efficiency and robustness. For comparative analysis, a deep learning-based segmentation model (YOLOv8 segmentation model) [33] with the original architecture was adopted and fine-tuned for flame segmentation using the authors’ fire image dataset. The subsequent HRR prediction steps, flame size extraction and HRR estimation, were conducted using the same process as in the proposed model. Furthermore, real-time HRR prediction was conducted using six fire experiment videos provided by the National Institute of Standards and Technology (NIST), and the prediction performance was quantitatively assessed. In addition, the effect of incorporating average flame height calculations into the prediction model was analyzed to evaluate its impact on HRR prediction performance. Finally, the performance of the proposed models was compared with that of an existing sequence modeling-based AI-HRR prediction model.

2. Materials and Methods

2.1. Image-Based Fire Heat Release Rate Prediction Methods

This study proposes a real-time image-based HRR prediction method that uses deep learning and image processing techniques to estimate HRR values from identified flame characteristics. The proposed method involves (1) detecting and segmenting flame regions from fire images or videos, (2) extracting fire diameter and flame height information, and (3) predicting the HRR in real time based on the established correlation between flame size and HRR. As the core component of the proposed method, the YOLO-YCbCr model is proposed for flame segmentation, which integrates YCbCr color-space image processing with deep learning-based object detection (YOLOv8) to enable lightweight training and efficient flame segmentation. For comparison purposes, the YOLOv8 segmentation (YOLO-seg) model was selected as a representative deep learning segmentation model, which typically offers fast inference and high prediction accuracy [33].

2.1.1. YOLO-YCbCr Segmentation-Model-Based HRR Prediction

The core component of the proposed HRR prediction method is accurate segmentation of flame regions from fire imagery. While deep learning-based segmentation models can achieve high performance, they typically require pixel- or polygon-level annotations and incur high computational costs. To address these limitations, our research team previously proposed the YOLO-YCbCr segmentation model [32], which combines efficient YOLO object detection with lightweight rule-based segmentation in the YCbCr color space. In this model, segmentation is applied only to flame-containing regions identified through object detection. This segmentation strategy reduces training costs by eliminating the need for complex mask annotations, while also reducing background noise and lowering the computational load. In this study, an improved YOLO-YCbCr model was used, in which the segmentation rules were refined and the YOLO detector was retrained with an expanded fire image dataset.
Figure 1 illustrates the architecture of the YOLO-YCbCr segmentation model employed in this study. The model consists of two main stages: object detection and rule-based color segmentation. First, YOLOv8 is used to detect bounding boxes surrounding flame regions, effectively acting as a spatial filter. This allows the subsequent color-based segmentation to be performed only within the detected bounding boxes. The cropped regions are then converted into the YCbCr color space, which is suitable for flame segmentation [34], and the flame region is segmented using the YCbCr rules. Based on an analysis of flame image characteristics observed under various environments and the segmentation results of the YCbCr model proposed by Celik and Demirel [35], it was found that the flame segmentation performance varies significantly depending on the presence of objects with flame-like colors and the color characteristics of flames as influenced by fire conditions and image acquisition environments. Therefore, the YCbCr rules were proposed to account for such variations, enabling the detection of flame regions by categorizing them into red-dominant and yellow-dominant flames.
As shown in Figure 1, the proposed YCbCr rules consist of the following components: Rule 1, which classifies the dominant flame color (red or yellow); Rule 2, which detects red-dominant flames; and Rule 3, which detects yellow-dominant flames. In Rule 1, bounding boxes obtained from YOLO detection are analyzed to classify the enclosed flame regions as either red-dominant or yellow-dominant based on their average luminance and overall image brightness. This classification is important, as each flame type exhibits different color distribution patterns that necessitate different segmentation strategies. When the average luminance within the bounding box exceeds a predefined threshold, the flame is classified as yellow-dominant, which typically exhibits high luminance. However, in environments with strong ambient lighting, such as outdoor scenes or artificial illumination, even red flames can exhibit elevated luminance, potentially leading to misclassification. To enhance robustness under such lighting variations, the global average luminance of the image is also considered. The thresholds were empirically determined via statistical performance analysis on a fire dataset consisting of 1000 images, resulting in $\tau_{white} = 130$ and $\tau_{day} = 90$. For flames classified as red-dominant, Rule 2 is applied. This stage consists of two sub-rules: Rule 2-1 builds upon the classical YCbCr method proposed by Celik and Demirel [35], identifying flame pixels as those with high Cr, low Cb, and high Y values. The threshold $\tau = 35$ ensures sufficient color contrast to differentiate flames from the background. Rule 2-2 serves as a complementary condition to detect yellow-white or white flame regions that often appear near the flame boundaries or in high-temperature zones within red flames. This rule comprises two distinct conditions: the first targets desaturated but high-luminance white flames, and $\tau_{R.E} = 2$ is a narrow chrominance difference threshold tailored for detecting nearly achromatic flames. The second condition focuses on the segmentation of yellow-white flames with low Cr intensity, because these light-toned boundary flames are often undetectable by Cr-centric approaches. For flames classified as yellow-dominant in Rule 1, Rule 3 is employed. Yellow-white or nearly white flames under low-light conditions often appear with high luminance but low chrominance contrast due to camera auto-exposure or light reflections. Rule 3-1 identifies regions with a low Cr–Cb difference, suitable for detecting visually flat but bright flames. Rule 3-2 targets low-Cr but high-luminance pixels, often corresponding to yellow-white flames in bright areas.
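To make the rule structure concrete, the following sketch (Python with OpenCV and NumPy) shows one plausible implementation of the classification-then-segmentation flow inside a detected bounding box. It is a simplified illustration, not the full rule set of Figure 1: Rule 1 is reduced to the two luminance thresholds quoted above, and Rules 2 and 3 are condensed to representative chrominance conditions in the spirit of Celik and Demirel [35].

```python
import cv2
import numpy as np

TAU_WHITE, TAU_DAY, TAU, TAU_RE = 130, 90, 35, 2  # thresholds quoted in the text

def segment_flame_ycbcr(image_bgr, box):
    """Simplified sketch of the YCbCr rules applied inside one YOLO bounding box.
    Returns a binary flame mask (uint8, 0/255) for the cropped region."""
    x1, y1, x2, y2 = box
    roi = image_bgr[y1:y2, x1:x2]
    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
    Y = ycrcb[..., 0].astype(int)
    Cr = ycrcb[..., 1].astype(int)
    Cb = ycrcb[..., 2].astype(int)

    # Rule 1: classify the box as red- or yellow-dominant using the box-local
    # luminance (tau_white) and the global image luminance (tau_day)
    global_Y = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)[..., 0].mean()
    yellow_dominant = (Y.mean() > TAU_WHITE) and (global_Y > TAU_DAY)

    if not yellow_dominant:
        # Rule 2 (red-dominant): high Cr, low Cb, high Y with sufficient
        # chrominance contrast [35], plus a near-achromatic white-core condition
        rule_2_1 = (Cr > Cb) & (Y > Y.mean()) & (np.abs(Cr - Cb) >= TAU)
        rule_2_2 = (np.abs(Cr - Cb) <= TAU_RE) & (Y > TAU_WHITE)
        mask = rule_2_1 | rule_2_2
    else:
        # Rule 3 (yellow-dominant): bright but chromatically flat flame pixels
        rule_3_1 = (np.abs(Cr - Cb) <= TAU) & (Y > Y.mean())
        rule_3_2 = (Cr < Cr.mean()) & (Y > TAU_WHITE)
        mask = rule_3_1 | rule_3_2
    return mask.astype(np.uint8) * 255
```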
As shown in Figure 2, the proposed HRR prediction procedure consists of two key phases: (1) flame segmentation and (2) HRR estimation. In the segmentation phase, the YOLO-YCbCr segmentation model is applied to mask the flame regions from fire images or video frames. In the HRR estimation phase, the spatial information from the segmented flame is obtained to calculate the flame height and fire diameter. The flame height is calculated as the pixel distance between the topmost and bottommost positions along the vertical (y-) axis within the segmented flame region, where each position is defined as a horizontal line (y-value) containing at least five flame pixels. This criterion is applied to reduce the influence of segmentation noise and enhance robustness in flame height estimation. This method reflects the maximum visible flame extension in each frame and is suitable for detecting transient flame fluctuations. The fire diameter is defined as the horizontal distance between the leftmost and rightmost x-coordinates of the outermost flame pixels. To improve robustness against irregular flame shapes at the top, this calculation is conducted only within the bottom 30% of the flame region. This region typically reflects the stable combustion zone and provides more consistent diameter estimates. The pixel-based distances are converted to physical units by using a scaling factor derived from the known physical size of the combustible material in each experiment. Finally, the calculated flame sizes are used to estimate the HRR values by applying the Zukoski correlation [14], which establishes a quantitative relationship between flame size and HRR, as follows:
$$\frac{z_f}{D} = 3.3\, Q_D^{2/3}$$
where $z_f$ and $D$ are the flame height and fire diameter, respectively. $Q_D$ is the non-dimensional heat release rate and is defined as follows:
$$Q_D = \frac{\dot{Q}}{\rho\, c_p\, T \sqrt{g D}\, D^2}$$
where $\rho$, $c_p$, $T$, $g$, and $\dot{Q}$ represent the ambient density, specific heat of air at constant pressure, ambient temperature, gravitational acceleration, and heat release rate, respectively.
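The estimation phase then reduces to inverting the correlation for $\dot{Q}$, i.e., $\dot{Q} = \left(z_f/(3.3 D)\right)^{3/2} \rho\, c_p\, T \sqrt{gD}\, D^2$. The sketch below implements the size-extraction criteria described above (at least five flame pixels per row, diameter taken from the bottom 30% of the flame region) together with this inversion; the ambient constants and the synthetic mask in the usage example are illustrative assumptions.

```python
import numpy as np

RHO, CP, T_AMB, G = 1.2, 1.0, 293.0, 9.81  # kg/m^3, kJ/(kg K), K, m/s^2 (assumed ambient values)

def flame_size(mask, m_per_px):
    """Flame height and fire diameter (in meters) from a binary flame mask."""
    rows_with_flame = np.where(mask.sum(axis=1) >= 5)[0]  # rows with >= 5 flame pixels
    if rows_with_flame.size == 0:
        return 0.0, 0.0
    height_px = rows_with_flame.max() - rows_with_flame.min()
    # Diameter from the bottom 30% of the flame region (stable combustion zone)
    base_start = int(rows_with_flame.max() - 0.3 * height_px)
    cols = np.where(mask[base_start:rows_with_flame.max() + 1].any(axis=0))[0]
    diam_px = cols.max() - cols.min() if cols.size else 0
    return height_px * m_per_px, diam_px * m_per_px

def zukoski_hrr(z_f, D):
    """Invert the Zukoski correlation z_f/D = 3.3 * Q_D^(2/3) for the HRR in kW."""
    if D <= 0:
        return 0.0
    q_star = (z_f / (3.3 * D)) ** 1.5
    return q_star * RHO * CP * T_AMB * np.sqrt(G * D) * D ** 2  # kJ/s = kW

# Illustrative usage with a synthetic 200x100 mask (flame occupying a central column)
mask = np.zeros((200, 100), dtype=bool)
mask[50:190, 35:65] = True
z_f, D = flame_size(mask, m_per_px=0.01)  # assume 1 px = 1 cm
print(f"height={z_f:.2f} m, diameter={D:.2f} m, HRR={zukoski_hrr(z_f, D):.1f} kW")
```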

2.1.2. YOLO Segmentation-Model-Based HRR Prediction

To provide a baseline for comparison with the proposed YOLO-YCbCr-based HRR prediction method, a deep learning-based approach using the YOLOv8 segmentation model (YOLO-seg) [33] was also implemented. Except for the segmentation component, the overall HRR prediction process (flame size extraction and HRR estimation) follows the same procedure as described for the YOLO-YCbCr-based HRR prediction model. As illustrated in Figure 3, the architecture of the YOLO-seg model includes three primary components, namely, backbone, neck, and output, each designed to optimize real-time object segmentation performance. The backbone is responsible for extracting hierarchical features using 3 × 3 convolution layers, C2f modules, and a spatial pyramid pooling fast (SPPF) module. This architecture replaces the C3 module used in previous YOLO architectures with a C2f structure, improving gradient propagation and reducing computational overhead, thereby enhancing the training efficiency. The SPPF module maintains its role in aggregating multiscale spatial information, enabling the model to effectively capture objects of varying sizes. The extracted features are further processed in the neck through concatenation operations and upsampling, reinforcing multiscale feature fusion. Unlike earlier YOLO models, the YOLO-seg model reduces the number of 1 × 1 convolution layers in the feature fusion process, enabling a more direct integration of multiscale features. This approach not only reduces unnecessary computations but also better preserves spatial information. Finally, in the output stage, the model introduces a mask head to predict object boundaries at the pixel level, enabling detailed segmentation. Through these architectural enhancements, the YOLO-seg model maintains high segmentation accuracy with the high computational efficiency of YOLO-based models, which makes it suitable for real-time video analysis applications that require simultaneous object detection and segmentation.
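For reference, instance masks from a fine-tuned YOLOv8 segmentation model can be obtained in a few lines with the Ultralytics API; the weights filename below is a hypothetical placeholder for the fine-tuned flame model.

```python
from ultralytics import YOLO

model = YOLO("fire_yolov8n-seg.pt")  # hypothetical fine-tuned flame weights
results = model.predict("fire_frame.jpg", conf=0.25, verbose=False)

for r in results:
    if r.masks is not None:
        flame_masks = r.masks.data.cpu().numpy()  # (num_instances, H, W) binary masks
        print(f"{flame_masks.shape[0]} flame instance(s) segmented")
```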

2.2. Performance Evaluation Method

The performance of the proposed HRR prediction method was evaluated using experimental fire data released from the NIST fire calorimetry database [37]. Six different fire experiment datasets were selected based on fuel type and fire scale to ensure a diverse evaluation. Figure 4 shows that the selected datasets include two box fire experiments (one and four boxes filled with paper), two wood-crib fire experiments (one and four wood cribs), and two corn oil pool fire experiments (corn oil in 20 cm Calphalon® and aluminum pans). Each video of the fire experiments was converted into a frame-based image dataset. The final dataset included 191, 463, 539, 713, 35, and 37 frames for the one box, four boxes, one wood crib, four wood cribs, corn oil (Calphalon® pan), and corn oil (aluminum pan) fires, respectively. Although the NIST fire dataset used in this study provides experimentally validated HRR data and is widely adopted in fire research, the limited number of image frames may constrain the evaluation of the model’s performance in predicting transient fire HRR behavior. In specific cases with few frames, such as the corn oil fire experiments, the evaluation of the overall HRR trend prediction remains feasible; however, the reduced temporal resolution may hinder accurate assessment of the model’s performance in capturing transient HRR behavior.
The proposed method estimates the HRR by detecting flame regions in the images and extracting key frame parameters such as fire diameter and flame height. Therefore, two aspects were evaluated: (1) flame segmentation performance, which is essential for accurately masking the flame region, and (2) HRR prediction accuracy based on the extracted flame dimensions. The IoU metric, which measures the similarity between the predicted and actual flame regions, was used to assess the flame segmentation performance. The IoU is defined as follows:
$$\mathrm{IoU}\,[\%] = \frac{\left| P_i \cap G_i \right|}{\left| P_i \cup G_i \right|} \times 100$$
where $P_i$ and $G_i$ represent the predicted flame region in the $i$th image and the actual (ground truth) flame region in the image, respectively.
The MAE, which provides an intuitive measure of the difference between the predicted and actual HRR values, is used to evaluate the HRR prediction performance. A lower MAE indicates higher accuracy. The MAE is calculated as follows:
$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$$
where $y_i$, $\hat{y}_i$, and $N$ represent the actual HRR, predicted HRR, and total number of image frames, respectively.
In addition, R2 is used to assess the correlation between the predicted and actual HRR values. It is determined as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2}$$
where $\bar{y}$ represents the mean of the actual HRR values.
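All three metrics can be computed directly from the per-frame predictions; a minimal NumPy sketch under the definitions above:

```python
import numpy as np

def iou_percent(pred_mask, gt_mask):
    """IoU between binary flame masks, in percent."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return 100.0 * inter / union if union else 0.0

def mae(y_true, y_pred):
    """Mean absolute error between actual and predicted HRR series."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def r_squared(y_true, y_pred):
    """Coefficient of determination between actual and predicted HRR series."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```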

3. Results and Discussion

3.1. Training and Test Results for Deep Learning Models

The deep learning models YOLOv8 and YOLOv8-seg were used to effectively detect the flame objects and segment the flame regions, respectively. A dataset with 30,500 images was constructed for training and validation; representative images from the dataset are shown in Figure 5a. The dataset included fire images from various scenarios (such as building, forest, outdoor, and indoor fires) and non-fire images (such as various normal scenes, sunsets, night scenes, and artificial lighting) to enhance the robustness of the model. As illustrated in Figure 5b, annotation of flame objects was required for all training data. Therefore, bounding-box annotations were applied for the object detection model (YOLOv8), and polygon-based pixel-level annotations were applied for the segmentation model (YOLOv8-seg). Transfer learning was conducted to effectively optimize the deep learning models for fire applications. Furthermore, hyperparameter tuning was performed, resulting in optimized YOLOv8 and YOLOv8-seg models with the best prediction accuracy.
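In the Ultralytics framework, this kind of transfer learning is a single training call per model; the dataset descriptor files and hyperparameter values below are illustrative placeholders, since the tuned values are not reported here.

```python
from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on the annotated fire dataset
det_model = YOLO("yolov8n.pt")       # detection model (used inside YOLO-YCbCr)
det_model.train(data="fire.yaml", epochs=100, imgsz=640)

seg_model = YOLO("yolov8n-seg.pt")   # segmentation model (YOLO-seg baseline)
seg_model.train(data="fire_seg.yaml", epochs=100, imgsz=640)
```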
A test dataset comprising 1000 images was constructed to evaluate the flame detection performance of YOLOv8, which served as the base object detection model used in the YOLO-YCbCr model. This dataset included both fire and non-fire scenes from various environments. The evaluation results showed that the model achieved a precision (ratio of correctly detected flames to all detected flames) of 90.7% and a recall (ratio of correctly detected flames to all actual flames) of 90.2%. The mean average precision (mAP) was 95.6%, indicating a high level of accuracy in flame detection. Before applying the HRR prediction model, the segmentation accuracy and robustness of the YOLO-YCbCr and YOLO-seg models were evaluated using the same test dataset. As shown in Figure 6, the mean IoU values were 64.4% and 72.5% for the YOLO-YCbCr and YOLO-seg models, respectively, indicating that the YOLO-seg model achieved approximately 8.1 percentage points higher segmentation accuracy. Analyzing the flame segmentation results from the two models revealed differences in their segmentation characteristics and performance based on the size and shape of the flame. For large and clear flames, the IoU value was greater for the YOLO-seg model than for the YOLO-YCbCr model. This can be attributed to the YOLO-seg model’s characteristic of segmenting irregular and complex object shapes in a simple and blunt manner. Consequently, the YOLO-seg model overestimated the actual flame area (ground truth), which led to fewer undetected flame regions and, therefore, relatively higher IoU values. In contrast, for small and complex-shaped flames, the YOLO-YCbCr model exhibited excellent segmentation performance due to its pixel-level segmentation of flame regions using color-based rules. Therefore, in fire images with highly irregular and complex flame shapes, the YOLO-YCbCr model provided more precise flame segmentation than the YOLO-seg model. However, there are limitations to perfectly segmenting flame regions that have a wide range of colors using color-based rules. These limitations result in certain flame pixels being inadequately segmented, thereby leading to relatively low IoU values for the YOLO-YCbCr model.

3.2. Evaluation of Flame Segmentation Performance

The flame segmentation performance of the models used in this study was evaluated using six fire experiment videos provided by the NIST before predicting the HRR from the fire videos. Flame characteristics affect the segmentation performance; therefore, three fire periods in each fire video were defined: growth period (0–75% of peak HRR), fully developed period (≥75% of peak HRR), and decay period (post-peak HRR, <75% of peak HRR). The segmentation performance of each model was quantitatively assessed during these fire periods using the IoU metric. As representative segmentation results of the box fire experiments, Figure 7a shows the flame segmentation results from the one-box fire video, and Figure 7b,c show those for the YOLO-YCbCr and YOLO-seg models, respectively. The IoU values for the YOLO-YCbCr model during the growth, fully developed, and decay periods were 32.6%, 54.4%, and 39.7%, respectively, whereas those for the YOLO-seg model were 28.8%, 58.8%, and 35.6%, respectively. These results confirm that the YOLO-YCbCr model outperformed the YOLO-seg model during the growth and decay phases. This greater performance can be attributed to the YOLO-YCbCr model’s ability to differentiate flames more effectively by utilizing pixel-based color information in fire images, including small and complex flames. In contrast, during the fully developed period, the YOLO-seg model had better segmentation performance than the YOLO-YCbCr model, due to its strong segmentation capability for larger flames with clearer boundaries. In the four-boxes fire video, the IoU values for the YOLO-YCbCr model during the growth, fully developed, and decay periods were 35.2%, 66.1%, and 43.0%, respectively, whereas those for the YOLO-seg model were 30.2%, 69.4%, and 33.6%, respectively.
Figure 8 shows the flame segmentation results from the one-wood-crib fire video, selected as a representative segmentation result for the wood-crib fire experiments. The IoU values for the YOLO-YCbCr model across the growth, fully developed, and decay periods were 59.6%, 56.7%, and 42.4%, respectively, whereas the YOLO-seg model achieved 48.9%, 66.1%, and 33.3%, respectively. These results follow a similar trend to the box fire segmentation results, with the YOLO-YCbCr model performing better in the growth and decay periods and the YOLO-seg model showing relatively higher accuracy in the fully developed period. In the four-wood-cribs fire video, the IoU values for the YOLO-YCbCr model during the growth, fully developed, and decay periods were 57.1%, 71.8%, and 48.7%, respectively, whereas the YOLO-seg model achieved 54.6%, 73.7%, and 47.3%, respectively.
As a representative segmentation result of the corn oil pool fire experiments, the segmentation results for the corn oil pool fire in the Calphalon® pan, illustrated in Figure 9, indicate that the YOLO-YCbCr model achieved mean IoU values of 67.5%, 68.5%, and 72.8% during the growth, fully developed, and decay periods, respectively. In comparison, the YOLO-seg model achieved values of 48.6%, 68.1%, and 59.4% for the corresponding periods. The relatively more consistent flame color in the corn oil pool fire enhanced the flame segmentation performance throughout the entire fire period compared to that for the solid-fuel fires. In the corn oil pool fire in the aluminum pan, the IoU values for the YOLO-YCbCr model during the growth, fully developed, and decay periods were 58.0%, 56.8%, and 59.7%, respectively, whereas those for the YOLO-seg model were 53.4%, 68.2%, and 61.6%, respectively. The overall flame segmentation results revealed that the YOLO-YCbCr model was more effective at detecting irregular and complex-shaped flames, whereas the YOLO-seg model was better suited for detecting large and structured flame patterns.
The YOLO-YCbCr model uses a pixel-based segmentation approach that leverages flame color data, enabling it to accurately segment small and complex-shaped flames. However, this model is limited in accurately segmenting flames that exhibit a wide range of colors. In contrast, the YOLO-seg model is based on the fundamental structure of YOLO, which is optimized for real-time detection; however, it tends to focus on capturing the overall outline of the flame rather than performing pixel-level precise object segmentation, simplifying complex flame patterns and predicting them more broadly.

3.3. Evaluation of Fire Heat Release Rate Prediction Performance

A real-time HRR prediction was conducted using the six fire experiment videos provided by the NIST. The predicted HRR values were compared with experimental values to evaluate the performance of the HRR prediction models. Figure 10 shows the HRR prediction results of the box fire experiments. In the one-box fire experiment shown in Figure 10a, the MAE values for the YOLO-YCbCr and YOLO-seg models were 37.99 kW and 47.27 kW, respectively. Both models exhibited good tracking of the experimental HRR variations over time. Furthermore, the R2 values were 0.83 and 0.84 for the YOLO-YCbCr and YOLO-seg models, respectively, indicating good agreement with the experimental data. However, the real-time HRR values predicted by both models fluctuated considerably compared to the experimental results. This is because the fire experiment video provided discontinuous and limited frames, which included intermittent flames, thereby causing large fluctuations in the HRR values. Figure 10b presents the results of the four-boxes fire experiment. The MAE values for the YOLO-YCbCr and YOLO-seg models were 80.33 kW and 87.38 kW, respectively. Both models underpredicted the HRR values near the peak HRR, which could be attributed to unburned parts outside the combustibles obscuring the flames and leading to an underestimated flame size. While the YOLO-seg model achieved an R2 value of 0.89, indicating high predictive accuracy, the YOLO-YCbCr model exhibited a lower R2 value of 0.74. This performance degradation was attributed to the occurrence of various colored flames, which led to reduced segmentation performance of the color-based YCbCr rules and, consequently, resulted in more variations in flame height estimation.
Figure 11 presents the HRR prediction results of the wood-fire experiments. In the one-wood-crib fire experiment shown in Figure 11a, the MAE values for the YOLO-YCbCr and YOLO-seg models were 65.92 kW and 60.13 kW, respectively. The HRR values predicted by both models were underestimated during the decay phase compared to the experimental HRR values. This discrepancy was attributed to the attenuation of the visible flame during this phase, caused by combustion occurring near or within the fuel surface. Accurate flame region prediction was more challenging for the models under these conditions. The R2 values for the YOLO-YCbCr and YOLO-seg models were 0.77 and 0.87, respectively. Figure 11b shows the HRR prediction results for the four-wood-cribs fire experiment. The MAE values for the YOLO-YCbCr and YOLO-seg models were 147.98 kW and 132.25 kW, respectively. The MAE values remained high due to the proposed frame-wise HRR prediction approach, with a limited number of frames provided in the fire videos, which led to the capture of intermittent flames and, consequently, resulted in fluctuating HRR predictions. Both models accurately predicted the HRR values in real time, with the YOLO-YCbCr and YOLO-seg models achieving R2 values of 0.86 and 0.90, respectively.
Figure 12 presents the HRR prediction results from the corn oil pool fire experiments. As shown in Figure 12a, the MAE values for the YOLO-YCbCr and YOLO-seg models for the fire experiment involving corn oil in an aluminum pan were 4.10 kW and 5.14 kW, respectively; the R2 values were 0.82 and 0.70, respectively. Although both models effectively captured the overall trend of the experimental HRR curves, the HRR values were underpredicted. This underestimation could be attributed to the partial obstruction of the flame by the pan’s sidewalls, which hindered accurate flame height prediction and, consequently, resulted in a lower HRR prediction. Figure 12b shows the results of the fire experiment involving corn oil in a Calphalon® pan. The MAE values for the YOLO-YCbCr and YOLO-seg models were 5.39 kW and 5.71 kW, respectively. The R2 values for the YOLO-YCbCr and YOLO-seg models were 0.65 and 0.61, respectively. Both models effectively followed the HRR variations; however, high fluctuations in the HRR values were observed in the fully developed stage at around 30 s. This inaccuracy was attributed to the light reflected from the pan handle in the dark experimental environment, which was mistakenly detected as flames, thereby resulting in an overestimated flame size. Compared with the solid-fuel fire experiments, the prediction performance was lower because of the smaller number of original video frames and the presence of light reflections.
From the performance evaluation results, it was confirmed that the proposed method effectively predicted the HRR variations measured over time in the experiments, demonstrating the feasibility of image-based HRR prediction models. Despite differences in the flame segmentation characteristics of the YOLO-YCbCr and YOLO-seg models, their HRR prediction performances remained similar. The YOLO-YCbCr model, which has advantages such as relatively low learning cost, small model size, and fast inference time, is expected to have potential applications in fire-image-based real-time HRR prediction. However, the HRR values predicted by the two models fluctuated significantly compared to the experimental values. This could be attributed to the limitations of the NIST fire experiment videos, which provided frames at 5 s intervals for solid-fuel fire experiments and at 3 s intervals for liquid-fuel fire experiments. A limited number of fire image frames could contain continuous and intermittent flames, randomly causing large fluctuations in the HRR. Thus, such a low frame rate could make it difficult to apply the average flame height, defined as the height at which the flame occurrence probability exceeds 50%, to the Zukoski correlation [38,39]. To confirm this, the HRR prediction was performed by applying the average flame height (AFH) definition specifically to the region near the peak HRR, where the NIST fire experiments provide a higher density of video frames (30 frames per second), and the magnitude of the HRR fluctuations and the prediction performance were evaluated. Figure 13 shows the prediction results of the HRR prediction models with the AFH definition for three representative fire videos: the one-box, one-wood-crib, and corn oil in aluminum pan fire experiments. All results showed that applying the AFH calculation reduced the variation in the predicted HRR values, and the reduction in the coefficient of variation ranged from 3.6% to 26.1%. Moreover, applying the AFH calculation enhanced the HRR prediction performance, and the reduction rates in the MAE ranged from 9.1% to 70.2%. Although the evaluation was conducted over a limited time period, it was verified that applying the AFH calculation improved the prediction performance of fire-image-based HRR prediction models.
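The AFH used here can be approximated as a sliding-window mean of the per-frame flame heights, computed before the Zukoski correlation is applied. A short sketch, with an illustrative window length of 30 frames (about 1 s of 30 fps video):

```python
import numpy as np

def average_flame_height(heights, window=30):
    """Sliding-window mean of per-frame flame heights (AFH approximation).
    A 30-frame window spans roughly 1 s of 30 fps video."""
    heights = np.asarray(heights, dtype=float)
    kernel = np.ones(window) / window
    # mode="same" keeps one averaged height per frame for real-time plotting
    return np.convolve(heights, kernel, mode="same")

# Usage: smooth the heights first, then estimate HRR frame by frame, e.g.,
# hrr = [zukoski_hrr(z, D) for z, D in zip(average_flame_height(raw_heights), diameters)]
```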
Additional HRR prediction analysis using a propane pool fire experiment video from the NIST database was conducted to analyze the effect of applying the AFH. This fire video dataset features a three-step HRR increase, with each quasi-steady stage providing a relatively sufficient number of image frames. Figure 14 shows the HRR prediction results for the propane pool fire experiment. As shown in Figure 14a, the HRR values predicted by the YOLO-YCbCr- and YOLO-seg-based HRR prediction models fluctuated strongly, with MAEs of 11.9 kW and 10.1 kW, respectively. In contrast, as shown in Figure 14b, the HRR values predicted by the models with the AFH closely followed the experimental results, and the HRR fluctuations were significantly reduced, resulting in an MAE of 4.7 kW for the YOLO-YCbCr model and an MAE of 4.0 kW for the YOLO-seg model. Thus, it was inferred that HRR prediction models incorporating the AFH calculation are more suitable for real-time HRR prediction in real fire videos that provide a sufficient number of frames. Future studies should focus on improving HRR prediction models and validating them using high-frame-rate fire videos to further improve their applicability. In addition, since the proposed approach lacks structural consideration of temporal dynamics, future research incorporating temporal modeling techniques will be conducted to better predict transient fire HRR behavior.
To quantitatively evaluate the real-time applicability of the proposed models, key indicators such as frames per second (FPS), number of model parameters, and floating-point operations (FLOPs) were measured. The proposed models were operated using a 13th Generation Intel® Core™ i9-13900KS CPU @ 3.20 GHz (24 cores), an NVIDIA GeForce RTX 4090 GPU (24 GB VRAM), 128 GB of DDR5 memory, and the Windows 11 operating system. The YOLO-YCbCr model achieved an inference speed of 107.9 FPS, while the YOLO-seg model recorded 100.2 FPS, indicating that the YOLO-YCbCr model is slightly faster. Both models are suitable for real-time HRR prediction, as the commonly accepted threshold for real-time performance is above 30 FPS [40]. In terms of model complexity, the YOLO-YCbCr model contains 3.0 million parameters, whereas the YOLO-seg model has 3.3 million. Furthermore, the computational cost, measured in floating-point operations, was lower for the YOLO-YCbCr model (8.2 GFLOPs) than for the YOLO-seg model (12.1 GFLOPs). These results demonstrate that the YOLO-YCbCr-based HRR prediction model provides more efficient operation for real-time HRR prediction.
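Throughput figures of this kind can be reproduced with a simple wall-clock loop over the test frames; in the sketch below, predict_frame is a hypothetical stand-in for either end-to-end pipeline.

```python
import time

def measure_fps(frames, predict_frame, warmup=10):
    """Average end-to-end inference FPS over a list of frames."""
    assert len(frames) > warmup, "need more frames than the warm-up count"
    for f in frames[:warmup]:          # warm-up to exclude one-off initialization cost
        predict_frame(f)
    t0 = time.perf_counter()
    for f in frames[warmup:]:
        predict_frame(f)
    elapsed = time.perf_counter() - t0
    return (len(frames) - warmup) / elapsed
```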

3.4. Performance Comparison with Sequence Modeling-Based HRR Prediction Model

A quantitative performance comparison between our proposed HRR prediction model and a prior sequence modeling-based model was conducted. The selected prior model proposed by Wang et al. [29] utilizes a VGG16-based architecture trained on paired flame images and HRR data to predict real-time HRR through temporal sequence modeling. Two experimental videos for trash-can fire and wood-pallet fire experiments, originally provided by the NIST and also utilized in the prior study, were selected for performance comparison. Figure 15a shows the HRR prediction results of the trash-can fire experiments. The MAE values for the YOLO-YCbCr and YOLO-seg models were 25.0 kW and 20.7 kW, respectively, both greater than the prior study’s MAE of 8.5 kW. This was attributed to fluctuations in the HRR values caused by frame-wise flame-segmentation-based HRR prediction and relatively poor segmentation performance during the peak HRR period. The proposed models exhibited good tracking of the experimental HRR variations during the growth and decay periods, but not during the fully developed period. Furthermore, the R2 values were 0.73 and 0.83 for the YOLO-YCbCr and YOLO-seg models, respectively, lower than the prior study’s R2 of 0.97. Figure 15b shows the HRR prediction results of the wood-pallet fire experiments. The MAE values for the YOLO-YCbCr and YOLO-seg models were 315.6 kW and 328.0 kW, respectively, greater than the prior study’s MAE of 225 kW due to the high fluctuations in the HRR values. However, both proposed models exhibited good tracking of the experimental HRR variations, and the R2 values were 0.80 and 0.78 for the YOLO-YCbCr and YOLO-seg models, respectively. The R2 values of the proposed models were greater than that of the prior study, which reported an R2 of 0.68 due to the underestimated HRR prediction of the prior model during the fully developed period. This was mainly because of the smaller number of paired image-HRR training samples for large-HRR fires [29].
In summary, the proposed model can predict HRR values by extracting flame features. The model also benefits from simpler data requirements and lower training costs, as it is trained solely on flame images rather than large-scale HRR-labeled time-series data. Compared to the sequence-based model, which underperforms in out-of-distribution scenarios due to high data dependency, the proposed approach demonstrates more consistent HRR prediction even in complex or large-scale fire events. However, since the model does not account for temporal dynamics, the predicted HRR tends to exhibit significant fluctuations, and the real-time prediction performance is relatively lower than that of the sequence modeling-based HRR prediction model. Although applying the AFH calculation helps to reduce this fluctuation, further enhancements are needed to address this issue. Thus, future studies will incorporate temporal modeling techniques and enhance the AFH’s applicability to better predict transient fire HRR behavior.

4. Conclusions

A fire-image-based HRR prediction method was proposed using the YOLO-YCbCr flame segmentation model. For comparison purposes, the YOLO segmentation model was used as a representative deep learning-based segmentation model. The HRR values were predicted as a function of time for various fire experiment videos provided by the NIST, and the prediction performance was evaluated. The findings of this study are listed below:
  • A novel, lightweight, image-based HRR prediction model was proposed by combining deep learning and image processing to extract physically meaningful flame features.
  • The proposed fire-image-based HRR prediction models achieved R2 values ranging from 0.61 to 0.90, effectively capturing transient HRR trends. While frame-wise predictions caused fluctuations due to the limited video frames, applying the AFH significantly reduced these variations and improved the prediction performance.
  • The YOLO-YCbCr-based model demonstrated high efficiency and applicability for transient HRR prediction. Further improvements are needed to incorporate temporal dynamics and refine the AFH method to enhance HRR prediction performance under diverse fire conditions.
In future studies, further improvements in flame segmentation, average flame height applicability, and spatial information extraction will be conducted to enhance the performance of fire-image-based HRR prediction models. Additionally, the integration of sequence modeling techniques will be explored to more accurately predict transient fire HRR behavior. Furthermore, the applicability and robustness of the proposed approach will be evaluated through large-scale HRR measurement experiments involving various combustibles and fire sizes under realistic fire conditions.

Author Contributions

Conceptualization and methodology, J.R. and M.K.; formal analysis and investigation, J.R.; writing—original draft preparation, J.R.; writing—review and editing, M.K. and S.M.; supervision, M.K.; project administration, M.K. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant, funded by the Ministry of Land, Infrastructure, and Transport (Grant RS-2022-00156237).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Babrauskas, V.; Peacock, R.D. Heat release rate: The single most important variable in fire hazard. Fire Saf. J. 1992, 18, 255–272. [Google Scholar] [CrossRef]
  2. Quintiere, J.G. Principles of Fire Behavior, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  3. Hirschler, M.M. Use of heat release rate to predict whether individual furnishings would cause self-propagating fires. Fire Saf. J. 1999, 32, 273–296. [Google Scholar] [CrossRef]
  4. Babrauskas, V.; Grayson, S.J. Heat Release in Fires; Taylor & Francis: Oxford, UK, 1990. [Google Scholar]
  5. Johansson, N.; Svensson, S. Review of the use of fire dynamics theory in fire service activities. Fire Technol. 2019, 55, 81–103. [Google Scholar] [CrossRef]
  6. Ntzeremes, P.; Kirytopoulos, K. Evaluating the role of risk assessment for road tunnel fire safety: A comparative review within the EU. J. Traffic Transp. Eng. 2019, 6, 282–296. [Google Scholar] [CrossRef]
  7. Danzi, E.; Marmo, L.; Fiorentini, L. FLAME: A parametric fire risk assessment method supporting performance-based approaches. Fire Technol. 2021, 57, 721–765. [Google Scholar] [CrossRef]
  8. Babrauskas, V. Development of the cone calorimeter—A bench-scale heat release rate apparatus based on oxygen consumption. Fire Mater. 1984, 8, 81–95. [Google Scholar] [CrossRef]
  9. Parker, W.J. Calculations of the heat release rate by oxygen consumption for various applications. J. Fire Sci. 1984, 2, 380–395. [Google Scholar] [CrossRef]
  10. Thornton, W.M. XV. The relation of oxygen to the heat of combustion of organic compounds. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1917, 33, 196–203. [Google Scholar] [CrossRef]
  11. Mohd Tohir, M.Z.; Martín-Gómez, C. Evaluating fire severity in electric vehicles and internal combustion engine vehicles: A statistical approach to heat release rates. Fire Technol. 2025. [Google Scholar] [CrossRef]
  12. Yang, Y.; Zhang, G.; Zhu, G.; Yuan, D.; He, M. Prediction of fire source heat release rate based on machine learning method. Case Stud. Therm. Eng. 2024, 54, 104088. [Google Scholar] [CrossRef]
  13. Heskestad, G. Luminous heights of turbulent diffusion flame. Fire Saf. J. 1983, 5, 103–108. [Google Scholar] [CrossRef]
  14. Zukoski, E.E. Properties of Fire Plume. In SFPE Handbook of Fire Protection Engineering, 2nd ed.; National Fire Protection Association: Quincy, MA, USA, 1995. [Google Scholar]
  15. Ma, Q.; Chen, J.; Zhang, H. Heat release rate determination of pool fire at different pressure conditions. Fire Mater. 2018, 42, 620–626. [Google Scholar] [CrossRef]
  16. Huang, X.; Zhu, H.; Peng, L.; Zheng, Z.; Zeng, W.; Cheng, C.; Chow, W. An improved model for estimating heat release rate in horizontal cable tray fires in open space. J. Fire Sci. 2018, 36, 275–290. [Google Scholar] [CrossRef]
  17. Tan, Y.; Li, J.; Li, H.; Li, Z. Experimental study on fire temperature field of extra-long highway tunnel under the effect of auxiliary channel design parameters. Case Stud. Therm. Eng. 2025, 67, 105812. [Google Scholar] [CrossRef]
  18. Bochkov, V.S.; Kataeva, L.Y. wUUNet: Advanced Fully Convolutional Neural Network for Multiclass Fire Segmentation. Symmetry 2021, 13, 98. [Google Scholar] [CrossRef]
  19. Wu, X.; Zhang, X.; Jiang, Y.; Huang, X.; Huang, G.G.Q.; Usmani, A. An intelligent tunnel firefighting system and small-scale demonstration. Tunn. Undergr. Space Technol. 2022, 120, 104301. [Google Scholar] [CrossRef]
  20. Ghosh, R.; Kumar, A. A hybrid deep learning model by combining convolutional neural network and recurrent neural network to detect forest fire. Multimed. Tools Appl. 2022, 81, 38643–38660. [Google Scholar] [CrossRef]
  21. Jin, C.; Wang, T.; Alhusaini, N.; Zhao, S.; Liu, H.; Xu, K.; Zhang, H. Video fire detection methods based on deep learning: Datasets, methods, and future directions. Fire 2023, 6, 315. [Google Scholar] [CrossRef]
  22. Yan, C.; Wang, Q.; Zhao, Y.; Zhang, X. YOLOv5-CSF: An improved deep convolutional neural network for flame detection. Soft Comput. 2023, 27, 19013–19023. [Google Scholar] [CrossRef]
  23. Li, Y.; Zhang, W.; Liu, Y.; Jing, R.; Liu, C. An efficient fire and smoke detection algorithm based on an end-to-end structured network. Eng. Appl. Artif. Intell. 2022, 116, 105492. [Google Scholar] [CrossRef]
  24. Song, K.; Choi, H.; Kang, M. Squeezed fire binary segmentation model using convolutional neural network for outdoor images on embedded device. Mach. Vis. Appl. 2021, 32, 120. [Google Scholar] [CrossRef]
  25. Choi, H.; Jeon, M.; Song, K.; Kang, M. Semantic fire segmentation model based on convolutional neural network for outdoor image. Fire Technol. 2021, 57, 3005–3019. [Google Scholar] [CrossRef]
  26. Carmignani, L. Flame Tracker: An image analysis program to measure flame characteristics. SoftwareX 2021, 15, 100791. [Google Scholar] [CrossRef]
  27. Wang, Z.; Zhang, T.; Wu, X.; Huang, X. Predicting transient building fire based on external smoke images and deep learning. J. Build. Eng. 2022, 47, 103823. [Google Scholar] [CrossRef]
  28. Hu, L.; Lin, X. Research on the prediction method of tunnel fire heat release rate based on informer network. In Proceedings of the 3rd International Conference on Green Building, Civil Engineering and Smart City (GBCESC 2024), Kunming, China, 22–25 July 2024; pp. 856–867. [Google Scholar] [CrossRef]
  29. Wang, Z.; Zhang, T.; Huang, X. Predicting real-time fire heat release rate by flame images and deep learning. Proc. Combust. Inst. 2023, 39, 4115–4123. [Google Scholar] [CrossRef]
  30. Xu, L.; Dong, J.; Zou, D. Predict future transient fire heat release rates based on fire imagery and deep learning. Fire 2024, 7, 200. [Google Scholar] [CrossRef]
  31. Prasad, K. Predicting Heat Release Rate from Fire Video Data Part 1: Application of Deep Learning Techniques; NIST IR 8521; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2024. [CrossRef]
  32. Roh, J.; Min, S.; Kong, M. Flame segmentation characteristics of YCbCr color model using object detection technique. Fire Sci. Eng. 2023, 37, 54–61. [Google Scholar] [CrossRef]
  33. Ultralytics. YOLOv8 Segmentation. Available online: https://yolov8.org/yolov8-segmentation (accessed on 4 August 2024).
  34. Lee, H.; Kim, W. The flame color analysis of color models for fire detection. J. Satell. Inf. Commun. 2013, 8, 52–57. [Google Scholar]
  35. Celik, T.; Demirel, H. Fire detection in video sequences using a generic color model. Fire Saf. J. 2009, 44, 147–158. [Google Scholar] [CrossRef]
  36. Bai, R.; Wang, M.; Zhang, Z.; Lu, J.; Shen, F. Automated construction site monitoring based on improved YOLOv8-seg instance segmentation algorithm. IEEE Access 2023, 11, 139082–139096. [Google Scholar] [CrossRef]
  37. NIST. Fire Calorimetry Database (FCD). Available online: https://nist.gov/el/fcd (accessed on 1 September 2024).
  38. Guo, Y.; Xiao, G.; Chen, J.; Wang, L.; Deng, H.; Liu, X.; Sun, Q.; Xiong, X. Investigation of ambient temperature effects on the characteristics of turbulent diffusion flames: An experimental approach. Process Saf. Environ. Prot. 2023, 175, 88–98. [Google Scholar] [CrossRef]
  39. Song, X.; Wang, Z.; Ge, S.; Li, W.; Lu, J.; An, W. Study on flame height and temperature distribution of double-deck bridge fire based on large-scale fire experiments. Therm. Sci. Eng. Prog. 2024, 47, 102319. [Google Scholar] [CrossRef]
  40. Ramesh, B.; George, A.D.; Lam, H. Real-time, low-latency image processing with high throughput on a multi-core SoC. In Proceedings of the 2016 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 13–15 September 2016; pp. 1–7. [Google Scholar] [CrossRef]
Figure 1. Architecture of YOLO-YCbCr.
Figure 2. Procedure of image-based fire heat release rate estimation.
Figure 3. Architecture of YOLOv8 segmentation [36].
Figure 4. Photographs for fire experiments with various combustibles: (a) one box, (b) four boxes, (c) one wood crib, (d) four wood cribs, (e) corn oil in Calphalon® pan, and (f) corn oil in aluminum pan [37].
Figure 5. Representative images for the (a) training image dataset and (b) annotations.
Figure 6. Flame segmentation results: (a) forest fire, (b) building fire, (c) outdoor fire, and (d) indoor fire.
Figure 7. One-box fire experiment flame segmentation results: (a) original images, (b) YOLO-YCbCr model, and (c) YOLO-seg model.
Figure 8. One-wood-crib fire experiment flame segmentation results: (a) original images, (b) YOLO-YCbCr model, and (c) YOLO-seg model.
Figure 9. Corn oil pool fire in Calphalon® pan experiment flame segmentation results: (a) original images, (b) YOLO-YCbCr model, and (c) YOLO-seg model.
Figure 10. HRR prediction results for the box fire experiments: (a) one box and (b) four boxes.
Figure 11. HRR prediction results for the wood-crib fire experiments: (a) one wood crib and (b) four wood cribs.
Figure 12. HRR prediction results for the corn oil pool fire experiments: (a) aluminum pan and (b) Calphalon® pan.
Figure 13. HRR prediction results during the peak HRR period: (a) one box, (b) one wood crib, and (c) corn oil in aluminum pan.
Figure 14. HRR prediction results of the proposed models: (a) without the AFH and (b) with the AFH.
Figure 15. HRR prediction results: (a) trash-can fire experiment and (b) wood-pallet fire experiment [29].