Article

Dehazing Algorithm Integration with YOLO-v10 for Ship Fire Detection

by Farkhod Akhmedov 1, Rashid Nasimov 2 and Akmalbek Abdusalomov 1,*

1 Department of Computer Engineering, Gachon University Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
2 Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
* Author to whom correspondence should be addressed.
Fire 2024, 7(9), 332; https://doi.org/10.3390/fire7090332
Submission received: 17 July 2024 / Revised: 25 August 2024 / Accepted: 19 September 2024 / Published: 23 September 2024
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

Abstract

Ship fire detection presents significant challenges for computer vision-based approaches due to factors such as the considerable distances from which ships must be detected and the unique conditions of the maritime environment. The presence of water vapor and high humidity further complicates detection and classification for deep learning models, as these factors can obscure visual clarity and introduce noise into the data. In this research, we describe the development of a custom ship fire dataset and a fine-tuned YOLO (You Only Look Once)-v10 model combined with dehazing algorithms. Our approach integrates the power of deep learning with sophisticated image processing to deliver a comprehensive solution for ship fire detection. The results demonstrate the efficacy of using YOLO-v10 in conjunction with a dehazing algorithm, highlighting significant improvements in detection accuracy and reliability. Experimental results show that the developed YOLO-v10-based ship fire detection model outperforms several YOLO and other detection models in precision (97.7%), recall (98%), and mAP@0.5 score (89.7%). However, the model achieved a slightly lower F1 score than the YOLO-v8 and ship-fire-net models. In addition, the dehazing approach significantly improves the model’s detection performance in hazy environments.

1. Introduction

Maritime transportation is a critical component of global trade and logistics, with millions of ships navigating the world’s oceans and waterways annually. Despite the robustness of modern maritime operations, fires onboard ships remain a significant threat to vessel safety, cargo integrity, and human lives. Ship fires can originate from various sources, including engine malfunctions, electrical faults, flammable cargo, and human error. Fires on ships continue to be one of the most severe safety issues in the maritime environment, even though shipping losses have decreased by roughly 50% over the past ten years. In 2022, 200 fire incidents were recorded, the highest annual figure in the past ten years. Notably, 43 of these incidents were specifically linked to ships carrying cargo or containers, highlighting a growing vulnerability within this segment of the maritime sector. According to the report of the World Shipping Council (WSC) [1], there has been a troubling increase in fires aboard containerships, many of which have led to fatalities and total losses. Statistics suggest that a serious ship fire occurs roughly every 60 days. The financial and human toll of ship fires is substantial. Allianz Global Corporate & Specialty’s (AGCS) Safety and Shipping Review [2] reports that fire and explosion incidents were the third most common cause of loss for the global shipping industry between 2015 and 2019, accounting for approximately 7% of total losses. In terms of expenses, the average cost of a ship fire can range from tens of thousands to millions of dollars, depending on the severity and the value of the cargo. In 2018, the total cost of losses from ship fires was estimated at over USD 1 billion. Furthermore, ship fires have caused numerous fatalities and injuries, highlighting the urgent need for effective detection and prevention measures. Possible measures to prevent ship fires include strict safety regulations, regular maintenance, crew training, and advanced fire suppression systems covering both onboard and external situations. Several researchers [3,4] have actively studied image-based fire detection approaches using artificial intelligence. Convolutional neural networks (CNNs) are incorporated into the domain of image-based fire detection, leading to the advancement of self-learning algorithms that autonomously extract and analyze fire-related features from images [5,6]. This integration of CNNs has revolutionized the traditional methods of fire detection by enabling systems to automatically learn complex patterns and characteristics associated with fire, such as flame color, shape, and dynamic behavior, without relying on manually crafted features. The use of CNNs in this context enhances the accuracy and efficiency of fire detection, particularly in diverse and challenging environments. Building on these advancements, this research proposes the application of computer vision (CV) algorithms for the detection of fires aboard ships. In particular, the proposed method aims to improve the detection of ship fires by addressing the critical need for reliable and efficient detection in the maritime domain.
These algorithms can examine video streams not only from onboard cameras but also from long-range views, learning to identify the particular visual patterns linked to ship fires. This enables them to quickly and accurately detect the presence of smoke in various compartments of the ship. The main idea is early detection of fire as it starts. Early detection is crucial in fire prevention and mitigation. By identifying smoke at its initial stages, before the situation escalates into a full-fledged fire, these CV systems can notify the crew or initiate automated responses. This early intervention can make a critical difference in preventing extensive financial and physical damage or potential loss of life.
Furthermore, in terms of CV applications, integrating smoke detection algorithms with other fire prevention measures can help build more comprehensive safety systems. For instance, these algorithms can work in conjunction with temperature sensors and gas detectors to provide a multi-faceted approach to fire detection. This synergy enhances the reliability and effectiveness of the overall fire prevention strategy, ensuring that even the smallest signs of a potential fire are promptly addressed.
In addition to detection, CV algorithms can also be used for continuous monitoring of fire-prone areas. By constantly analyzing the video feeds, the system can provide real-time updates on the status of these areas, allowing for rapid and clear responses to any changes. This continuous monitoring is particularly important in high-risk zones such as engine rooms, cargo holds, and kitchens, where fires are more likely to occur. Real-time video analysis of vessels from outside the ship is also crucial, for example in port fires, where a fire originating on one ship can spread to adjacent vessels and infrastructure, posing a significant risk to maritime operations and coastal safety. For example, in 2020, a fire at the port of Savannah spread from one container to the others [7]. This incident prompted a review of fire safety protocols and the adoption of improved fire detection systems. Overall, early detection, continuous monitoring, and coordinated response capabilities play a vital role in preventing fires and mitigating their impact. This integration of advanced technology into traditional safety measures is essential for ensuring the well-being of the crew and the protection of valuable assets at sea.
Recently, deep learning (DL) algorithms have been increasingly integrated with visual data analysis, including the field of fire identification. The integration of these algorithms with CV, image processing technologies, hardware computing capabilities, and the proliferation of video surveillance networks has catalyzed a significant shift towards more sophisticated fire detection technologies. CNNs, in particular, have shown exceptional progress in extracting intricate image features. The development of video surveillance systems and the growing intelligence and automation of contemporary ships present a great opportunity to combine monitoring and DL technologies for fire detection.
In this paper, we aim to contribute to the use of the YOLO algorithm with the integration of a dehazing technique for ship fire detection. YOLO’s ability to perform real-time object detection with high accuracy and speed makes it an ideal candidate for this application. Implementing the YOLO algorithm can be beneficial by ensuring swift and accurate identification of fire incidents, thereby improving overall safety and response efficiency. To improve visual clarity, we combine the ship fire detection process with dehazing algorithms. Dehazing algorithms are essential in improving image clarity and visibility in the presence of atmospheric haze, fog, and water vapor. These algorithms are useful in maritime environments where visibility can be significantly reduced due to such conditions. We developed our ship fire detection model by fine-tuning the YOLO-v10m [8] model, which, according to its authors at Tsinghua University, outperforms the previous releases of the series. The model is described in Section 3.
This study makes three important contributions to marine safety by effectively detecting ship fires:
  • First, we created a large-scale dataset for ship fire detection.
  • Second, we constructed a state-of-the-art (SOTA) YOLO-v10-based ship fire detection model.
  • Third, we integrated dehazing algorithm-based optimization for ship fire image classification.
This research provides a detailed account of the dataset preparation, model training, and evaluation processes. The dataset is divided into two classes: “Fire” and “Non-Fire”. Fine-tuning the YOLO-v10 model with the custom dataset involved extensive training and validation to ensure the model’s robustness and accuracy in diverse marine environments. We further enhanced the detection capabilities of our model by incorporating advanced image processing techniques. These optimizations mainly focused on improving the model’s performance by suppressing noisy input data, such as different lighting scenarios and varying sun reflections on waves, ensuring high and reliable detection accuracy in real-world applications.
The remainder of this paper is structured as follows:
In Section 2, this paper describes the related research works, focusing on various methods, datasets, and algorithms previously employed for detecting fire and ship fire images in marine environments. Specifically, we examine the strengths and limitations of existing approaches, as well as the advancements in technology and methodologies related to our study. In Section 3, we describe the contributions of this work, which include a detailed review of our data collection process. We also describe data processing, augmentation, model training, and dehazing model implementations in detail. Section 4 highlights the experimental results and analysis of our study. We conduct a comparative analysis of our proposed method with other SOTA models to evaluate the performance of our models; this section shows how our proposed method outperforms them. In addition, this section includes quantitative metrics and visual examples to demonstrate the robustness of our approach. Finally, in Section 5, we summarize our research findings, key contributions, and potential directions for further improvements and expansions of ship fire detection systems.

2. Related Work

As we know, object detection and recognition algorithms mostly rely on particular types of deep neural networks (DNNs) and CNNs. These neural networks consist of multiple layers, each serving a distinct function; in marine safety, these include ocean environment analysis, feature extraction, data identification, image enhancement, dehazing, and anomaly detection to achieve accurate and precise object detection. Previously, traditional fire detection methods faced challenges related to speed, accuracy, and performance degradation. Conventionally, fire detection relied on identifying and extracting both dynamic and static flame attributes, including color features and shapes, and then employing machine learning (ML) algorithms to recognize these features. Notable early progress in this area includes the fire early-warning mechanism presented by Chen et al. [9]. This approach utilizes video analysis to detect fire together with smoke pixels based on chromatic and disorder measurements within the RGB model. A common fire detection system on a ship integrates sensors for fire, smoke, and heat-related factors, which are all connected to an alarm panel. These systems provide visual and audible alerts, indicating the precise location of a fire on the vessel. Flame texture feature extraction is a prominent technique utilized in fire detection and identification [10,11,12,13,14]. For instance, Cui et al. [15] developed a method to analyze the texture of fire smoke by integrating two advanced texture analysis tools, wavelet analysis and gray-level co-occurrence matrices (GLCMs). This method allowed for the effective extraction of texture features specific to fire smoke. Similarly, Ye et al. [16] presented an innovative dynamic texture descriptor based on the Surfacelet transform and a Hidden Markov Tree (HMT) model. Another study [17] introduced a real-time fire smoke detection approach based on GLCMs, which focused on classifying texture features to distinguish smoke and non-smoke textures efficiently. Chino et al. [18] developed a novel method for fire detection in static images. Their approach combined color feature classification with texture classification within superpixel regions, enhancing the accuracy of fire detection. With these texture analysis techniques, researchers have significantly improved the accuracy and reliability of fire detection systems.
Recent implementations in fire detection methodologies are exemplified by Foggia et al. [19], who proposed a method of analyzing surveillance camera videos by integrating color, shape, and motion information through multiple expert systems. Similarly, Premal et al. [20] conducted research on forest fire detection using the YCbCr color model. Furthermore, Wu et al. [21] introduced a dynamic fire detection algorithm for surveillance videos that incorporates radiation domain feature models. In the realm of fire detection through image segmentation, the main task involves categorizing individual pixels in an image into distinct categories, such as fire regions or backgrounds. This task is typically addressed using semantic segmentation networks, which are trained end-to-end to generate segmentation masks directly from the original image. Frameworks like U-Net [22] are commonly employed for this purpose. U-Net, originally designed for biomedical image segmentation, has proven effective in segmenting and classifying fire-related elements at the pixel level. An instance segmentation-based approach not only classifies pixels into specific categories but also distinguishes individual instances of those categories. For example, Guan et al. [23] proposed a forest fire segmentation method that is specifically designed for the early detection and segmentation of forest fires, demonstrating the application of instance segmentation in fire detection.
Research addressing critical issues in CV related to fire detection using UAV-captured video frames from the FLAME dataset has proposed innovative solutions for binary image classification (fire vs. no fire) and fire instance segmentation. Segmentation methods perform well at detecting fire smoke regions within images by leveraging global information together with the U-Net network. By combining global contextual information with U-Net architectures, models can capture meaningful details and spatial relationships, which is important for effective segmentation. Zheng et al. [24] introduced a sophisticated approach to the semantic segmentation of fire smoke, integrating Multi-Scale Residual Group Attention (MRGA) with the U-Net architecture. This method adeptly captures multi-scale smoke features, enhancing the model’s ability to discern subtle nuances in small-scale smoke instances. The combination of MRGA and the U-Net framework significantly improves the model’s perceptual acuity, particularly in detecting and segmenting small-scale smoke regions.
Over the past decade, there has been a significant transformation in fire detection technology. Specifically, the YOLO algorithms have emerged as powerful tools for object detection, addressing many previously existing challenges. The evolution from YOLO-v1 to the more advanced YOLO-v10 underscores substantial innovations that have significantly improved detection capabilities. This development marks a broader paradigm shift in fire detection technologies towards DL.

Color Attributes for Object Detection

Object detection remains one of the most challenging tasks in computer vision due to the substantial variability observed among images within the same object category. This variability is influenced by numerous factors, including differences in perspective, scale, and occlusion, which complicate the accurate identification and classification of objects. SOTA methodologies for object detection predominantly rely on intensity-based features, often excluding color information. This exclusion is primarily due to the significant variation in color that can arise from changes in illumination, compression artifacts, shadows, and highlights. Such variations introduce considerable complexity in achieving robust color descriptions, thereby posing additional challenges to the object detection process.
Within the domain of image classification, integrating color information with shape features has demonstrated remarkable effectiveness. Research indicates that combining these features can substantially enhance classification performance, as color provides additional information that aids in distinguishing objects with similar shapes but different colors [25,26,27,28,29,30,31]. A parallel concept in CV, often utilized in object detection, is the segmentation of the target object. This technique involves classifying each pixel in an image into a specific category, thereby enabling precise identification of various elements such as ships, smoke, fire, clouds, and the sea with ship-surrounding objects. Semantic segmentation offers a holistic understanding of the entire image, significantly improving data interpretation and analysis capabilities. This approach is particularly advantageous for processing remote sensing images, which we will consider in future work. According to Chen et al. [32], for remote sensing purposes, this capability is important for environmental monitoring, where precise mapping of features is essential. However, due to the close similarity of fire and smoke color intensities, we decided to train the model on three main objects: ship, fire, and smoke.
Earlier SOTA techniques in object recognition relied on exhaustive search strategies, which are computationally intensive and often inefficient. Selective search algorithms focus on generating a smaller set of high-quality region proposals, reducing the computational burden while maintaining or improving detection accuracy. These methods apply a hierarchical grouping of similar regions based on color, texture, size, and shape compatibility to generate fewer and more relevant proposals, leading to more efficient and effective object detection. Advancements in DL, especially CNNs, have significantly improved the field of object detection. Networks such as Faster R-CNN, YOLO, and SSD have set new benchmarks in detection accuracy and speed by leveraging end-to-end training pipelines and innovative architectural designs. Integration of these advanced techniques and methodologies reflects progress in applications of remote sensing, environmental monitoring, and disaster response, where precise and reliable detection is critical. Color remains a crucial feature in the accurate classification of fire pixels and is employed in nearly all detection methods. Traditionally, various color spaces such as RGB (red, green, and blue), HSV (hue, saturation, and value), HSI (hue, saturation, and intensity), and YCbCr (luminance, chrominance blue, and chrominance red) have been used for fire detection [33]. The RGB color space is widely used due to its simple representation of images, where fire pixels typically exhibit high red and green values, distinguishing them from other objects. However, RGB can be sensitive to illumination changes, which can affect detection accuracy. HSV and HSI color spaces, which separate chromatic content from intensity information, provide a more robust approach for detecting fire. These spaces reduce the impact of lighting variations, making it easier to distinguish fire regions based on hue and saturation characteristics. For example, fire pixels usually exhibit high hue values and moderate saturation levels. YCbCr, another commonly used color space, separates luminance from chrominance components to enhance the ability to detect fire in complex environments. This separation allows for more effective identification of fire regions based on chrominance values while minimizing the effects of lighting and shadow variations in color-changing environments. In addition to traditional color spaces, modern fire detection methods increasingly incorporate advanced techniques such as ML and DL to enhance color-based fire detection. By leveraging CNNs, these approaches can automatically learn and extract relevant color features from large datasets, improving the robustness and accuracy of fire detection systems. Moreover, multi-spectral imaging and infrared (IR) sensors are being integrated into fire detection systems to complement visible color information. Multi-spectral imaging captures data across various wavelengths, providing additional features for fire detection, while IR sensors detect thermal signatures, which are indicative of fire even in low-visibility conditions. The combination of these technologies with traditional color spaces can offer a comprehensive approach to fire detection, ensuring higher reliability and accuracy. For example, the YCbCr color space can serve as a generic model in which various rules are applied for accurate detection. Celik et al. [34] used the YCbCr color space to separate luminance from chrominance and obtained strong results.
Similarly, Vipin et al. [35] proposed an algorithm where they used RGB and YCbCr color spaces to test two sets of images. Khalil et al. [36] presented a novel method to detect fire based on combinations of RGB and CIE L*a*b color models, combining color analysis with motion detection of burning objects.
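To make the chrominance-rule idea concrete, the following is a minimal Python/OpenCV sketch of the kind of YCbCr test these works describe; the mean-based conditions and the file names are illustrative assumptions rather than the exact rules of [34,35,36].

```python
import cv2
import numpy as np

# Hypothetical input frame; the rule below is a simplified illustration of a
# YCbCr chrominance test, not an exact reproduction of the cited methods.
img = cv2.imread("ship_frame.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
Y, Cr, Cb = cv2.split(ycrcb)

# Candidate fire pixels: brighter than the average luminance, with chrominance
# skewed towards red (Cr above Cb and above its own image mean).
fire_mask = (Y > Y.mean()) & (Cr > Cb) & (Cr > Cr.mean())
fire_mask = fire_mask.astype(np.uint8) * 255

cv2.imwrite("fire_candidates.png", fire_mask)
```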

3. Proposed Methods and Model Architecture

This study focused on detecting hazy ship images and improving a fine-tuned ship fire detection model by applying a hazy-image-enhancing algorithm. To achieve this, we fine-tuned the latest YOLO algorithm, YOLO-v10, which has been reported to outperform earlier versions and other SOTA object detection models, especially in real-time applications. Other approaches also focus on detecting ship fires; however, our trained model shows higher detection accuracy, and to our knowledge no previous model has been fine-tuned with YOLO-v10 to detect ship fires in hazy conditions.

3.1. YOLO Architecture

YOLO-v1, proposed in 2016, marked the first advancement in real-time object detection using a CNN architecture comprising 24 layers. The YOLO-v1 architecture takes input images of a fixed size (e.g., 448 × 448 pixels). The input image is then divided into an A × A grid, where each grid cell is responsible for predicting objects that fall within it. Each grid cell predicts B bounding boxes, together with confidence scores for those boxes. The final output is a tensor of dimensions (A, A, B × 5 + C), where B is the number of bounding boxes predicted per grid cell, 5 corresponds to the four bounding box coordinates plus a confidence score, and C is the number of prediction classes. YOLO-v1 employed stochastic gradient descent as its optimizer, alongside specific loss functions for localization and classification. The loss function was developed to penalize errors in both localization and classification tasks. The coefficients λ_coord and λ_noobj (set to 5 and 0.5, respectively, in the original formulation) regulate the relative weight of the localization and no-object confidence terms, as shown in the equation below.
$$
\lambda_{coord}\sum_{i=0}^{A^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(x_i-\hat{x}_i\right)^{2}+\left(y_i-\hat{y}_i\right)^{2}\right]
+\lambda_{coord}\sum_{i=0}^{A^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^{2}+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^{2}\right]
+\sum_{i=0}^{A^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^{2}
+\lambda_{noobj}\sum_{i=0}^{A^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^{2}
+\sum_{i=0}^{A^{2}}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^{2} \tag{1}
$$
Here, $\mathbb{1}_{i}^{obj}$ denotes whether an object appears in cell $i$, and $\mathbb{1}_{ij}^{obj}$ denotes that the $j$-th bounding box predictor in cell $i$ is responsible for that prediction.
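As a worked illustration of the loss terms in Equation (1), the following sketch evaluates the contribution of a single grid cell whose responsible predictor detects an object, plus one empty cell; all numerical values are illustrative, and only λ_coord = 5 and λ_noobj = 0.5 follow the original formulation.

```python
import numpy as np

lambda_coord, lambda_noobj = 5.0, 0.5

# Illustrative ground truth and prediction for one "responsible" predictor.
x, y, w, h, C = 0.45, 0.60, 0.30, 0.20, 1.0          # ground-truth box and objectness
p = np.array([1.0, 0.0])                              # one-hot class probabilities
xh, yh, wh, hh, Ch = 0.40, 0.55, 0.25, 0.22, 0.80     # predicted box and confidence
ph = np.array([0.70, 0.30])                           # predicted class probabilities

# Localization terms: center coordinates and square-rooted width/height.
loc = lambda_coord * ((x - xh) ** 2 + (y - yh) ** 2)
size = lambda_coord * ((np.sqrt(w) - np.sqrt(wh)) ** 2 + (np.sqrt(h) - np.sqrt(hh)) ** 2)

# Confidence terms: responsible predictor plus one empty cell predicting 0.1.
conf_obj = (C - Ch) ** 2
conf_noobj = lambda_noobj * (0.0 - 0.1) ** 2

# Class-probability term for the cell containing the object.
cls = np.sum((p - ph) ** 2)

print(f"loss contribution: {loc + size + conf_obj + conf_noobj + cls:.4f}")
```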
For instance, Shen et al. [37] utilized the YOLO-v1 model to detect flames, yet there remains substantial potential for enhancing this approach. Qian et al. [38] introduced channel-wise pruning technology to optimize YOLO-v3, effectively reducing the number of parameters and making it more suitable for fire monitoring systems. Wang et al. [39] developed Light-YOLOv4, a lightweight detector designed to balance performance and efficiency, demonstrating strong detection capabilities and speed in embedded scenarios. Wu et al. [40] enhanced YOLO-v5 by improving the Spatial Pyramid Pooling (SPP) module and the activation function, thereby increasing the robustness and reliability of fire detection. Xue et al. [41] incorporated the Convolutional Block Attention Module (CBAM) and Bidirectional Feature Pyramid Network (BiFPN) into YOLO-v5, which significantly improved the detection of small targets in forest fire scenarios. YOLO-based fire detection algorithms have been widely proposed for fire detection problems [42,43,44,45,46,47,48]. Wang et al. [49] proposed a video-based method for detecting flames and smoke on ships, addressing the limitations of traditional fire detection equipment. This method highlights the potential for integrating advanced DL techniques with maritime fire detection systems to enhance early warning capabilities and overall safety.
The objective of this study is to effectively detect fire on ships by training a model. To this end, we created a two-class dataset, fine-tuned the YOLO-v10 model, and combined the developed model with a dehazing algorithm. The developed model detects ships, smoke, and fire, focusing mainly on distinguishing ships that are on fire, or emitting smoke without visible fire, from unaffected vessels. We created a custom dataset covering various sea transports, including burning vessels with flames or smoke and vessels with no fire or smoke. Variations in object sizes and aspect ratios, as well as inference speed and noise occurrences, make real-time object detection very difficult. Specifically, in the maritime environment, excessive humidity can cause haziness and decrease atmospheric visibility. Objects in real-world scenarios often exhibit diverse aspect ratios and weather conditions; they can appear elongated or compressed under rain, snow, wind, waves, etc., which diminishes object detection model performance. Object detection algorithms primarily rely on well-defined features and patterns, and traditional detection methods often struggle under these conditions, necessitating more advanced approaches. To address these issues, we developed a custom dataset that includes a wide range of ship images taken under different environmental conditions. This dataset is essential for training and fine-tuning the YOLO-v10 model, ensuring that it can accurately detect fires on ships despite the presence of noise and visual distortions. YOLO-v10 was selected for its superior detection capabilities. This model builds upon the strengths of its predecessors, introducing enhancements that improve its precision and speed. Fine-tuning YOLO-v10 on our dataset enables it to learn the specific features and patterns associated with ship fires. Below, we describe the YOLO-v10 architecture and its significant advancements in real-time object detection. A critical component of our methodology is the application of a dehazing algorithm. Dehazing is a preprocessing step that aims to remove the effects of haze caused by water vapor and other atmospheric particles. By enhancing image clarity, dehazing allows the YOLO-v10 model to focus on the relevant features of the image, such as smoke and flames. Through dehazing, we reduce the noise in the image before feeding it into the fine-tuned YOLO-v10 pipeline for further processing. In the sections below, as the main contributions of our study, we describe the dataset, model training, and dehazing in more detail. YOLO-v10 also improves inference efficiency by eliminating non-maximum suppression (NMS).
$$
m = s \cdot p^{\alpha} \cdot \mathrm{IoU}(\hat{b}, b)^{\beta} \tag{2}
$$
In the consistent matching metric (2), the one-to-one and one-to-many approaches leverage the same metric to quantitatively assess the level of concordance between predictions and instances [8]. The employment of dual label assignment strategies during training enriches supervision and improves efficiency in end-to-end deployment. According to the conclusions of the YOLO-v10 researchers, this model achieves SOTA performance and latency, showing superiority compared with other YOLO algorithms and advanced detectors. The architecture of YOLO-v10 (Figure 1 and Figure 2) advances upon its predecessors by introducing several significant innovations. For example, an enhanced Cross Stage Partial Network (CSPNet) serves as the backbone, improving gradient flow and reducing computational redundancy during feature extraction. The neck integrates features across different scales using Path Aggregation Network (PAN) layers for effective multi-scale feature fusion. During the training process, the one-to-many head generates multiple predictions per object, which enhances the learning process through richer supervisory signals. During inference, the one-to-one head produces a single, optimal prediction per object, eliminating the need for non-maximum suppression (NMS) and thereby reducing latency. As shown in Figure 2, YOLO-v10 also incorporates large kernel convolutions and partial self-attention modules, enhancing performance without a significant increase in computational cost.
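To illustrate how metric (2) scores a candidate prediction against a ground-truth box, the following is a brief sketch; the spatial prior s, the α and β exponents, and the box coordinates are illustrative values, not the exact settings used in YOLO-v10 training.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def matching_score(s, p, pred_box, gt_box, alpha=0.5, beta=6.0):
    """Consistent matching metric m = s * p^alpha * IoU(pred, gt)^beta."""
    return s * (p ** alpha) * (iou(pred_box, gt_box) ** beta)

# Illustrative prediction vs. ground-truth ship-fire box.
print(matching_score(s=1.0, p=0.85,
                     pred_box=(40, 60, 200, 220),
                     gt_box=(50, 70, 210, 230)))
```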
Traditional YOLO models often use a uniform block structure across all stages, which can lead to inefficiencies and bottlenecks. To address this, YOLO-v10 introduced a rank-guided block design, as shown in Figure 2. This design calculates the intrinsic rank of each stage, especially of the last convolution in the final basic block. A Compact Inverted Block (CIB) structure utilizes depth-wise convolutions for spatial processing and point-wise convolutions for channel processing. This approach addresses the inefficiencies found in prior YOLO versions, resulting in improved performance without unnecessary computational complexity.
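The following is a rough PyTorch sketch of the idea behind a CIB-style block (depth-wise convolutions for spatial mixing, point-wise convolutions for channel mixing, wrapped in a residual connection); the layer ordering, expansion factor, and activation are illustrative and do not reproduce the official YOLO-v10 implementation.

```python
import torch
import torch.nn as nn

class CompactInvertedBlock(nn.Module):
    """Illustrative CIB-style block: depth-wise convs for spatial processing,
    point-wise convs for channel processing (a sketch, not official code)."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),  # depth-wise
            nn.Conv2d(channels, hidden, 1, bias=False),                                # point-wise expand
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),        # depth-wise
            nn.Conv2d(hidden, channels, 1, bias=False),                                # point-wise project
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # residual connection keeps shape unchanged

feat = torch.randn(1, 64, 80, 80)            # e.g., a feature map from the backbone
print(CompactInvertedBlock(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```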

3.2. Dataset Collection

In maritime safety and emergency response, the prompt and precise detection of ship fires is essential for preventing catastrophic incidents. By harnessing the latest advancements in CV and DL, this approach delineates a comprehensive methodology to develop a model that explicitly focuses on ship fire detection challenges. Initially, we started with data collection. Figure 3 shows data collection instances for the two ship classes, gathered from internet sources.
The success of ML models, particularly in CV tasks, is significantly influenced by the quality and quantity of the training data. It is well established that larger datasets generally lead to enhanced performance of DL models. However, a prevalent issue with smaller datasets in CV is that models trained on them often struggle to generalize effectively to data from validation and test sets. Extensive research has demonstrated the effectiveness of data augmentation techniques in leveraging well-known academic image datasets to improve model performance [50,51], and several advanced techniques have been developed to address the limitations of small datasets when developing DL models [52,53,54], since models trained on small datasets commonly struggle to generalize to validation and test data [55].
We also employed data augmentation techniques on our dataset to enhance its diversity and improve model robustness. Specifically, we utilized methods such as rotation and random cropping, as illustrated in Figure 4. The analysis in Table 1 shows that applying data augmentation to the CIFAR-10, CIFAR-100, and SVHN datasets increased image classification accuracy for the DenseNet, Wide-ResNet, and Shake-ResNet models.
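As an illustration, the augmentations described above could be expressed with torchvision transforms roughly as follows; the rotation angle, crop scale, and file names are illustrative assumptions rather than our exact augmentation settings.

```python
import torchvision.transforms as T
from PIL import Image

# Illustrative augmentation pipeline mirroring the rotation and random
# cropping described above; parameter values are assumptions.
augment = T.Compose([
    T.RandomRotation(degrees=15),                     # small random rotations
    T.RandomResizedCrop(size=640, scale=(0.7, 1.0)),  # random crop resized to the training resolution
])

image = Image.open("ship_fire_example.jpg").convert("RGB")  # hypothetical file
augmented = augment(image)
augmented.save("ship_fire_example_aug.jpg")
```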
Table 2 provides a breakdown of the data distribution used for training and validating the ship fire detection model. The dataset is divided into two main classes: “Fire” and “Non-Fire”. The data for each category are allocated based on an 80-20 split for training and validation purposes. For the “Fire” class, images depict fire incidents on ships. The total number of images in this category is 9235. For model training, 80% of these images, amounting to 7388 images, were used. The remaining 20%, equivalent to 1847 images, were used for validation. This distribution allows the model to learn a wide variety of fire scenarios across different conditions. For the “Non-Fire” category, images do not depict any fire, representing normal and safe ship conditions; only hazy images were excluded. The total number of images in this class is 3372. Similar to the “Fire” class, an 80-20 split ratio is applied, resulting in 2698 images for training and 674 images reserved for validation.
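A minimal sketch of how such a two-class split could be wired into YOLO-v10m fine-tuning with the Ultralytics package is shown below; the file names, directory layout, and hyperparameters are assumptions for illustration, not the exact configuration used in this study.

```python
# Sketch only: assumes a recent Ultralytics release that ships YOLO-v10 weights
# under the name "yolov10m.pt" and a dataset laid out as described in data.yaml.
from ultralytics import YOLO

DATA_YAML = """
path: datasets/ship_fire
train: images/train      # 7388 Fire + 2698 Non-Fire images
val: images/val          # 1847 Fire + 674 Non-Fire images
names:
  0: Fire
  1: Non-Fire
"""
with open("ship_fire.yaml", "w") as f:
    f.write(DATA_YAML)

model = YOLO("yolov10m.pt")                                   # pretrained YOLO-v10m weights
model.train(data="ship_fire.yaml", epochs=100, imgsz=640, batch=16)
metrics = model.val()                                         # precision, recall, mAP@0.5, ...
```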

3.3. Dehazing Integration

Haze, fog, and smoke are atmospheric phenomena resulting from the absorption and scattering of light, which attenuate the irradiance received by the camera from the scene point along the line of sight. This attenuation leads to reduced visibility and color distortion in captured images [56]. The overall integration of dehazing techniques into CV systems is implemented to overcome the challenges posed by atmospheric conditions, thereby enhancing the accuracy and reliability of visual information. Many haze removal methods and techniques have been proposed, such as polarization-based [57,58], depth-based [59,60], and contrast-based restoration [61]. A dehazing algorithm is applied to recover the true scene radiance from an observed hazy ocean ship image. The presence of haze in an image, in our case, is modeled by the following image formation model:
$$
I(x) = J(x)\,t(x) + A\left(1 - t(x)\right) \tag{3}
$$
Here, we aim to estimate J(x), given I(x), t(x), and A, where:
- I(x) is the observed hazy image;
- J(x) is the scene radiance;
- t(x) is the transmission map, indicating the portion of light that reaches the camera without being scattered;
- A is the global atmospheric light, representing the ambient light scattered by the atmosphere.
As mentioned above, the primary objective of the dehazing algorithm is to recover the true scene radiance, i.e., a clear image, by removing the effects of haze from the observed image. This is particularly important for the ship fire detection approach, where maritime environments are characterized by water vapor, haze, and high humidity. The goal of the dehazing algorithm is to estimate J(x) given the observed image I(x), the transmission map t(x), and the global atmospheric light A. By accurately estimating these components, the algorithm effectively removes the haze, revealing the true details of the scene and thereby enabling the ship fire model to perform detection more accurately. A graphical explanation of this process is shown in Figure 5.
These methods rely on the image formation model to estimate the scene radiance and transmission map. In concept, the dark channel prior is based on the observation that in most non-sky patches of haze-free outdoor images, at least one color channel has very low intensity values. To implement this, we calculate the dark channel image $J^{dark}(x)$, represented in Table 3 below.
The atmospheric light A is then estimated from the top 0.1% brightest pixels in the dark channel.
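Putting the pieces together, the following is a minimal sketch of a dark-channel-prior dehazing pipeline in the spirit of He et al. [56]; the patch size, ω, t0, and file names are illustrative, and the guided-filter refinement of the transmission map used in practice is omitted for brevity.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, followed by a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark, top_fraction=0.001):
    """Average the image colors at the top 0.1% brightest dark-channel pixels."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img_bgr, omega=0.95, t0=0.1, patch=15):
    """Dark-channel-prior dehazing for the model I(x) = J(x) t(x) + A (1 - t(x))."""
    img = img_bgr.astype(np.float64) / 255.0
    A = estimate_atmospheric_light(img, dark_channel(img, patch))
    # Transmission estimate: t(x) = 1 - omega * dark_channel(I / A)
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Recover scene radiance: J(x) = (I(x) - A) / t(x) + A
    J = (img - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

hazy = cv2.imread("hazy_ship.jpg")            # hypothetical input file
cv2.imwrite("dehazed_ship.jpg", dehaze(hazy))
```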
Table 4 presents the experimental setup for this research work. The software environment is built on Ubuntu 22.04.3 LTS, a 64-bit operating system. CUDA 12.0 is utilized to take advantage of GPU acceleration for DL tasks, facilitating faster training and model optimization. We run the system on the Linux kernel, ensuring compatibility with the latest hardware drivers and software packages.

4. Experimental Results

In this work, we developed a ship fire detection model by fine-tuning the YOLO-v10m model, with results shown in Figure 6, Figure 7 and Figure 8. Then, to improve detection accuracy, we applied a dehazing algorithm. Table 5 shows the performance of different models. Bold values indicate the highest accuracy achieved among the models.

Evaluation Metrics

Performance metrics are essential tools in evaluating the efficacy of a proposed approach or model, especially in the context of specific issues, data characteristics, and analysis objectives. These metrics provide a quantitative basis for assessing how well a model performs by comparing its predictions to the actual outcomes. A model’s accuracy is commonly measured through various computed metrics based on correctly and incorrectly classified examples, as applied in our previous research works [71,72,73,74,75,76,77,78,79]. The key metrics are as follows (Table 6):
Figure 9 depicts the model’s metric curves for the “Fire” (orange line) and “Non-Fire” (blue line) classes. Precision, the proportion of true positive detections out of all positive detections (Figure 9a); recall, the proportion of true positives out of all actual positive instances of ship fires (Figure 9b); the F1 score, the harmonic mean of precision and recall (Figure 9c); and the mAP (Figure 9d) all reach high values for ship fire detection.
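For reference, the sketch below computes precision, recall, and F1 from illustrative true-positive, false-positive, and false-negative counts; the numbers are hypothetical and not taken from our experiments.

```python
# Illustrative evaluation-metric computation; counts below are hypothetical.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(p, r):
    return 2 * p * r / (p + r)

tp, fp, fn = 930, 22, 19          # example counts for one class
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.3f}, recall={r:.3f}, F1={f1_score(p, r):.3f}")
```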
Figure 10 shows iterative improvement of a hazy image through scene radiance, transmission map, and atmospheric light calculations. As these calculations become more accurate with better feature extraction from the scene with each iteration, the hazy image is progressively restored, revealing a clearer and more accurate representation of J(x).
Table 7 presents the key metrics, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Lightness Order Error (LOE), used to evaluate the performance of the dehazing algorithm applied to a hazy ocean scene. The PSNR value of 27.50 dB indicates the quality of the dehazed image; as a rule of thumb, a score above 25 dB indicates good image quality, meaning the algorithm effectively reduced haze while preserving important image details. The SSIM score of 0.918 reflects the similarity between the dehazed and the original image; this score signifies that the dehazed image retains most of the structural information and visual fidelity of the original scene. Finally, the LOE value of 116.99 measures the degree of lightness distortion introduced by the dehazing process. Although some distortion is inevitable, the achieved LOE suggests that the algorithm maintained a reasonable balance between dehazing and lightness preservation. Figure 11 depicts bounding-box-based ship detection examples in various ocean environments.
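As an illustration of how the image-quality side of this evaluation could be reproduced, the sketch below computes PSNR and SSIM between a reference and a dehazed image with scikit-image; the file names are hypothetical, and LOE is omitted because it is not provided by that library.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = cv2.imread("reference_ship.jpg")   # hypothetical haze-free reference
dehazed = cv2.imread("dehazed_ship.jpg")       # output of the dehazing step

psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
ssim = structural_similarity(reference, dehazed, channel_axis=2, data_range=255)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```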
Table 8 shows a comparative analysis of our work with other research works on ship fire detection. Our proposed method outperforms them in precision, recall, and mAP@0.5.

5. Discussion and Limitations

From Table 5 and Table 8, it can be seen that the YOLO-v10-based fine-tuned ship fire detection model performs well relative to other SOTA and YOLO algorithms. Specifically, our trained model achieves the highest scores in the precision and recall metrics, 0.977 and 0.98, respectively. Analysis shows that the YOLO-v8 model achieves a slightly higher F1 score, by about 0.08%. Nevertheless, in the mAP@0.5 score, i.e., average precision at an Intersection over Union (IoU) threshold of 0.50, the proposed approach is higher than the other algorithms. The threshold of 0.50 means that a predicted bounding box is considered a correct detection if its IoU with the ground truth box is at least 0.50. Regarding the “Non-Fire” class, we achieved lower scores in all metrics. One of the main reasons for this might be the limited size of that class’s dataset. In our future work, we will focus on augmenting ship images to increase the data size. Overall, the fine-tuned YOLO-v10m model with an integrated dehazing algorithm represents a valuable solution for enhancing maritime safety. The integration of the dehazing algorithm enables the model to function effectively in conditions where traditional models struggle. This adaptability enhances the model’s utility across a broader range of maritime scenarios. However, the YOLO-v10m model is computationally more intensive than older versions of YOLO. This increased complexity poses challenges for deployment, for example on small vessels with limited processing power. Moreover, the F1 score of our model is slightly lower than that of the YOLO-v8n model. In our future research, we will focus more on detection improvement by combining other algorithms to handle extreme weather conditions, such as heavy fog, rain, and intense sunlight.

6. Conclusions

In this work, we tackled the challenging problem of ship fire detection in maritime environments, where factors like water vapor and high humidity pose significant obstacles for CV-based approaches. Our contribution includes developing a custom ship fire dataset, comprising images of ships on fire and ships not on fire, and fine-tuning the latest YOLO-v10 model in combination with a dehazing algorithm. Our approach leverages the strengths of DL and sophisticated image processing techniques to address the unique conditions of the maritime environment. The integration of the dehazing method significantly improved the visual clarity of ship images, thereby enhancing the performance of the YOLO-v10 model. The experimental results demonstrate the effectiveness of this approach in terms of precision, recall, and mAP scores. When compared with other SOTA models, our YOLO-v10-based ship fire detection model consistently outperformed several YOLO variants and other detection models. In conclusion, our proposed approach provides a powerful and comprehensive solution for ship fire detection. Future work will aim to address the identified limitations and further enhance the model’s capabilities, ensuring robust and accurate fire detection in various maritime conditions.

Author Contributions

F.A. conceived this study, conducted the research, developed the methodology and experimental analysis, and wrote the manuscript. A.A. and R.N. contributed valuable advice and feedback for research development. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Available online: https://www.worldshipping.org/ (accessed on 12 June 2024).
  2. Available online: https://commercial.allianz.com/news-and-insights/news/safety-shipping-review-2023.html (accessed on 12 June 2024).
  3. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
  4. Kim, D.; Ruy, W. CNN-based fire detection method on autonomous ships using composite channels composed of RGB and IR data. Int. J. Nav. Arch. Ocean Eng. 2022, 14, 100489. [Google Scholar] [CrossRef]
  5. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional Neural Networks Based Fire Detection in Surveillance Videos. IEEE Access 2018, 6, 18174–18183. [Google Scholar] [CrossRef]
  6. Yin, Z.; Wan, B.; Yuan, F.; Xia, X.; Shi, J. A deep normalization and convolutional neural network for image smoke detection. IEEE Access 2017, 5, 18429–18438. [Google Scholar] [CrossRef]
  7. Available online: https://www.firehouse.com/home/news/10573606/cargo-ship-catches-fire-at-savannah-port (accessed on 12 June 2024).
  8. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
  9. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing, 2004. ICIP’04, Singapore, 24–27 October 2004; Volume 3, pp. 1707–1710. [Google Scholar]
  10. Prema, C.E.; Vinsley, S.S.; Suresh, S. Efficient flame detection based on static and dynamic texture analysis in forest fire detection. Fire Technol. 2017, 54, 255–288. [Google Scholar] [CrossRef]
  11. Wu, D.; Zhang, C.; Ji, L.; Ran, R.; Wu, H.; Xu, Y. Forest fire recognition based on feature extraction from multi-view images. Trait. Du Signal 2021, 38, 775–783. [Google Scholar] [CrossRef]
  12. Qu, N.; Li, Z.; Li, X.; Zhang, S.; Zheng, T. Multi-parameter fire detection method based on feature depth extraction and stacking ensemble learning model. Fire Saf. J. 2022, 128, 103541. [Google Scholar] [CrossRef]
  13. Li, X.; He, S.; He, D.; Li, C.; Li, J. Real-Time Flame Detection Based on Video Semantic Statistical Features. In Proceedings of the 2023 8th International Conference on Computer and Communication Systems (ICCCS), Guangzhou, China, 21–23 April 2023. [Google Scholar]
  14. Xu, F.; Zhang, X.; Deng, T.; Xu, W. An image-based fire monitoring algorithm resistant to fire-like objects. Fire 2023, 7, 3. [Google Scholar] [CrossRef]
  15. Cui, Y.; Dong, H.; Zhou, E. An early fire detection method based on smoke texture analysis and discrimination. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; IEEE: New York, NY, USA, 2008. [Google Scholar]
  16. Ye, W.; Zhao, J.; Wang, S.; Wang, Y.; Zhang, D.; Yuan, Z. Dynamic texture based smoke detection using Surfacelet transform and HMT model. Fire Saf. J. 2015, 73, 91–101. [Google Scholar] [CrossRef]
  17. Chunyu, Y.; Yongming, Z.; Jun, F.; Jinjun, W. Texture analysis of smoke for real-time fire detection. In Proceedings of the 2009 Second International Workshop on Computer Science and Engineering, Qingdao, China, 28–30 October 2009; IEEE: New York, NY, USA, 2009. [Google Scholar]
  18. Chino, D.Y.T.; Avalhais, L.P.S.; Rodrigues, J.F.; Traina, A.J.M. Bowfire: Detection of fire in still images by integrating pixel color and texture analysis. In Proceedings of the 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
  19. Foggia, P.; Saggese, A.; Vento, M. Real-Time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  20. Premal, C.E.; Vinsley, S. Image processing based forest fire detection using ycbcr colour model. In Proceedings of the 2014 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2014], Nagercoil, India, 20–21 March 2014; pp. 1229–1237. [Google Scholar]
  21. Wu, H.; Hu, Y.; Wang, W.; Mei, X.; Xian, J. Ship Fire Detection Based on an Improved YOLO Algorithm with a Lightweight Convolutional Neural Network Model. Sensors 2022, 22, 7420. [Google Scholar] [CrossRef] [PubMed]
  22. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. Medical image computing and computer-assisted intervention—MICCAI 2015. In Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; proceedings, part III 18. Springer International Publishing, 2015. [Google Scholar]
  23. Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest fire segmentation from Aerial Imagery data Using an improved instance segmentation model. Remote Sens. 2022, 14, 3159. [Google Scholar] [CrossRef]
  24. Zheng, Y.; Zhang, G.; Tan, S.; Yang, Z.; Wen, D.; Xiao, H. A forest fire smoke detection model combining convolutional neural network and vision transformer. Front. For. Glob. Chang. 2023, 6, 1136969. [Google Scholar] [CrossRef]
  25. De Carolis, G.; Adamo, M.; Pasquariello, G. On the estimation of thickness of marine oil slicks from sun-glittered, near-infrared meris and modis imagery: The lebanon oil spill case study. IEEE Trans. Geosci. Remote Sens. 2013, 52, 559–573. [Google Scholar] [CrossRef]
  26. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 886–893. [Google Scholar] [CrossRef]
  27. Muksimova, S.; Umirzakova, S.; Mardieva, S.; Cho, Y.-I. Enhancing Medical Image Denoising with Innovative Teacher–Student Model-Based Approaches for Precision Diagnostics. Sensors 2023, 23, 9502. [Google Scholar] [CrossRef]
  28. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach. Sensors 2023, 23, 1512. [Google Scholar] [CrossRef]
  29. Farkhod, A.; Abdusalomov, A.B.; Mukhiddinov, M.; Cho, Y.-I. Development of Real-Time Landmark-Based Emotion Recog-nition CNN for Masked Faces. Sensors 2022, 22, 8704. [Google Scholar] [CrossRef]
  30. Lampert, C.H.; Blaschko, M.B.; Hofmann, T. Beyond sliding windows: Object localization by efficient subwindow search. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  31. van de Sande, K.E.A.; Uijlings, J.R.R.; Gevers, T.; Smeulders, A.W.M. Segmentation as selective search for object recognition. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1879–1886. [Google Scholar]
  32. Chen, Y.; Li, Y.; Wang, J. An End-to-end oil-spill monitoring method for multisensory satellite images based on deep semantic segmentation. Sensors 2020, 20, 725. [Google Scholar] [CrossRef]
  33. Shidik, G.F.; Adnan, F.N.; Supriyanto, C.; Pramunendar, R.A.; Andono, P.N. Multicolor feature, background subtraction and time frame selection for fire detection. In Proceedings of the 2013 International Conference on Robotics, Biomimetics, Intelligent Computational Systems, Jogjakarta, Indonesia, 25–27 November 2013; pp. 115–120. [Google Scholar]
  34. Celik, T.; Demirel, H. Fire detection in video sequences using a generic color model. Fire Saf. J. 2009, 44, 147–158. [Google Scholar] [CrossRef]
  35. Vipin, V. Image processing based forest fire detection. Int. J. Emerg. Technol. Adv. Eng. 2012, 2, 87–95. [Google Scholar]
  36. Khalil, A.; Rahman, S.U.; Alam, F.; Ahmad, I.; Khalil, I. Fire Detection Using Multi Color Space and Background Modeling. Fire Technol. 2020, 57, 1221–1239. [Google Scholar] [CrossRef]
  37. Shen, D.; Chen, X.; Nguyen, M.; Yan, W.Q. Flame detection using deep learning. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 416–420. [Google Scholar]
  38. Qian, H.; Shi, F.; Chen, W.; Ma, Y.; Huang, M. A fire monitoring and alarm system based on channel-wise pruned YOLOv3. Multimed. Tools Appl. 2021, 81, 1833–1851. [Google Scholar] [CrossRef]
  39. Wang, Y.; Hua, C.; Ding, W.; Wu, R. Real-time detection of flame and smoke using an improved YOLOv4 network. Signal Image Video Process. 2022, 16, 1109–1116. [Google Scholar] [CrossRef]
  40. Wu, Z.; Xue, R.; Li, H. Real-Time Video Fire Detection via Modified YOLOv5 Network Model. Fire Technol. 2022, 58, 2377–2403. [Google Scholar] [CrossRef]
  41. Xue, Z.; Lin, H.; Wang, F. A Small Target Forest Fire Detection Model Based on YOLOv5 Improvement. Forests 2022, 13, 1332. [Google Scholar] [CrossRef]
  42. Poobalan, K.; Liew, S.-C. Fire detection based on color filters and Bag-of-Features classification. In Proceedings of the 2015 IEEE Student Conference on Research and Development (SCOReD), Kuala Lumpur, Malaysia, 13–14 December 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
  43. Abdusalomov, A.B.; Nasimov, R.; Nasimova, N.; Muminov, B.; Whangbo, T.K. Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm. Sensors 2023, 23, 3440. [Google Scholar] [CrossRef]
  44. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384. [Google Scholar] [CrossRef]
  45. Akmalbek, A.; Djurayev, A. Robust shadow removal technique for improving image enhancement based on segmentation method. IOSR J. Electron. Commun. Eng. 2016, 11, 17–21. [Google Scholar]
  46. Shakhnoza, M.; Sabina, U.; Sevara, M.; Cho, Y.-I. Novel Video Surveillance-Based Fire and Smoke Classification Using Attentional Feature Map in Capsule Networks. Sensors 2021, 22, 98. [Google Scholar] [CrossRef]
  47. Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An Improvement of the Fire Detection and Classification Method Using YOLOv3 for Surveillance Systems. Sensors 2021, 21, 6519. [Google Scholar] [CrossRef] [PubMed]
  48. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors 2022, 22, 3307. [Google Scholar] [CrossRef] [PubMed]
  49. Wang, S.-J.; Jeng, D.-L.; Tsai, M.-T. Early fire detection method in video for vessels. J. Syst. Softw. 2009, 82, 656–667. [Google Scholar] [CrossRef]
  50. Halevy, A.; Norvig, P.; Pereira, F. The Unreasonable Effectiveness of Data. IEEE Intell. Syst. 2009, 24, 8–12. [Google Scholar] [CrossRef]
  51. Sun, C.; Shrivastava, A.; Singh, S.; Gupta, A. Revisiting the unreasonable effectiveness of data in the deep learning era. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 843–852. [Google Scholar]
  52. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1106–1114. [Google Scholar]
  53. Karen, S.; Andrew, Z. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  54. Kaiming, H.; Xiangyu, Z.; Shaoqing, R.; Jian, S. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  55. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  56. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963. [Google Scholar] [CrossRef]
  57. Schechner, Y.; Narasimhan, S.; Nayar, S. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  58. Shwartz, S.; Namer, E.; Schechner, Y. Blind haze separation. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Volume 2 (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 1984–1991. [Google Scholar]
  59. Kopf, J.; Neubert, B.; Chen, B.; Cohen, M.; Cohen-Or, D.; Deussen, O.; Uyttendaele, M.; Lischinski, D. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. (TOG) 2008, 27, 1–10. [Google Scholar] [CrossRef]
  60. Narasimhan, S.G.; Nayar, S.K. Interactive (de) weathering of an image using physical models. In Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision, Nice, France, 12 October 2003. [Google Scholar]
  61. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar] [CrossRef]
  62. Kuldashboy, A.; Umirzakova, S.; Allaberdiev, S.; Nasimov, R.; Abdusalomov, A.; Cho, Y.I. Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach. Heliyon 2024, 10, e34376. [Google Scholar] [CrossRef]
  63. Yuldashev, Y.; Mukhiddinov, M.; Abdusalomov, A.B.; Nasimov, R.; Cho, J. Parking Lot Occupancy Detection with Improved MobileNetV3. Sensors 2023, 23, 7642. [Google Scholar] [CrossRef]
  64. Chi, R.; Lu, Z.M.; Ji, Q.G. Real-time multi-feature based fire flame detection in video. IET Image Process. 2017, 11, 31–37. [Google Scholar] [CrossRef]
  65. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021. [Google Scholar]
  66. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  67. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  68. Ultralytics, YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 11 November 2023).
  69. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  70. Ultralytics, YOLOv8. Available online: https://github.com/ultralytics/ultralytics (accessed on 11 November 2023).
  71. Nasimov, R.; Kumar, D.; Rizwan, M.; Panwar, A.K.; Abdusalomov, A.; Cho, Y.-I. A Novel Approach for State of Health Estimation of Lithium-Ion Batteries Based on Improved PSO Neural Network Model. Processes 2024, 12, 1806. [Google Scholar] [CrossRef]
  72. Umirzakova, S.; Whangbo, T.K. Detailed feature extraction network-based fine-grained face segmentation. Knowl. Based Syst. 2022, 250, 109036. [Google Scholar] [CrossRef]
  73. Khan, S.; Inayat, K.; Muslim, F.B.; Shah, Y.A.; Atif Ur Rehman, M.; Khalid, A.; Imran, M.; Abdusalomov, A. Securing the IoT ecosystem: ASIC-based hardware realization of Ascon lightweight cipher. Int. J. Inf. Secur. 2024. [Google Scholar] [CrossRef]
  74. Umirzakova, S.; Ahmad, S.; Khan, L.U.; Whangbo, T. Medical image super-resolution for smart healthcare applications: A comprehensive survey. Inf. Fusion 2024, 103, 102075. [Google Scholar] [CrossRef]
  75. Makhmudov, F.; Kultimuratov, A.; Cho, Y.-I. Enhancing Multimodal Emotion Recognition through Attention Mechanisms in BERT and CNN Architectures. Appl. Sci. 2024, 14, 4199. [Google Scholar] [CrossRef]
  76. Makhmudov, F.; Kutlimuratov, A.; Akhmedov, F.; Abdallah, M.S.; Cho, Y.-I. Modeling Speech Emotion Recognition via Attention-Oriented Parallel CNN Encoders. Electronics 2022, 11, 4047. [Google Scholar] [CrossRef]
  77. Umirzakova, S.; Mardieva, S.; Muksimova, S.; Ahmad, S.; Whangbo, T. Enhancing the Super-Resolution of Medical Images: Introducing the Deep Residual Feature Distillation Channel Attention Network for Optimized Performance and Efficiency. Bioengineering 2023, 10, 1332. [Google Scholar] [CrossRef]
  78. Safarov, F.; Akhmedov, F.; Abdusalomov, A.B.; Nasimov, R.; Cho, Y.I. Real-Time Deep Learning-Based Drowsiness Detection: Leveraging Computer-Vision and Eye-Blink Analyses for Enhanced Road Safety. Sensors 2023, 23, 6459. [Google Scholar] [CrossRef]
  79. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.-I. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors 2023, 23, 8374. [Google Scholar] [CrossRef]
  80. Park, K.-M.; Bae, C.-O. A Study on Fire Detection in Ship Engine Rooms Using Convolutional Neural Network. J. Korean Soc. Mar. Environ. Saf. 2019, 25, 476–481. [Google Scholar] [CrossRef]
  81. Avazov, K.; Jamil, M.K.; Muminov, B.; Abdusalomov, A.B.; Cho, Y.-I. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors 2023, 23, 7078. [Google Scholar] [CrossRef] [PubMed]
  82. Zhu, J.; Zhang, J.; Wang, Y.; Ge, Y.; Zhang, Z.; Zhang, S. Fire Detection in Ship Engine Rooms Based on Deep Learning. Sensors 2023, 23, 6552. [Google Scholar] [CrossRef] [PubMed]
  83. Zhang, Z.; Tan, L.; Tiong, R.L.K. Ship-Fire Net: An improved YOLOv8 algorithm for ship fire detection. Sensors 2024, 24, 727. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Consistent dual assignment for NMS-free training [8].
Figure 2. YOLO-v10 model architecture with CIB integration.
Figure 3. Examples of hazy ship images (a) and a ship on fire (b).
Figure 4. Data augmentation applied to the custom dataset.
Figure 5. Dehazing integration with fine-tuned YOLO-v10 ship fire detection algorithm.
Figure 6. Example detections during model training (a) and validation (b).
Figure 7. Confusion matrix from model training and the data distribution across classes.
Figure 8. Model training and validation losses over 100 epochs, with metric accuracy results.
Figure 9. Evaluation metric performance as line graphs: precision (a), recall (b), F1 score (c), and mAP score (d).
Figure 10. Iterative dehazing process for hazy images in the ocean environment.
Figure 11. Ship fire detection examples.
Table 1. Image classification improvement after data augmentation method application.
Dataset | Model | Without Augmentation (%) | With Augmentation (%) | Average Accuracy Improvement (%)
CIFAR-10 | DenseNet | 94.15 | 94.59 | 0.44
CIFAR-10 | Wide-ResNet | 93.34 | 93.67 | 1.33
CIFAR-10 | Shake-ResNet | 93.7 | 94.84 | 1.11
CIFAR-100 | DenseNet | 74.98 | 75.93 | 0.95
CIFAR-100 | Wide-ResNet | 74.46 | 76.52 | 2.06
CIFAR-100 | Shake-ResNet | 73.96 | 76.76 | 2.80
SVHN | DenseNet | 97.91 | 97.98 | 0.07
SVHN | Wide-ResNet | 98.23 | 98.31 | 0.80
SVHN | Shake-ResNet | 98.37 | 98.40 | 0.30
Table 2. Data distribution for model training.
Dataset | Training Ratio | Validation Ratio | Total Images
Fire | 80% | 20% | 9235
Non-Fire | 80% | 20% | 3372
Table 3. Dehazing technique equation explanation.
Explanation | Equation
Calculate the dark channel image | $J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \big( \min_{c \in \{r,g,b\}} J^{c}(y) \big)$ (4)
Estimate the transmission map $\tilde{t}(x)$ | $\tilde{t}(x) = 1 - \omega \, \min_{y \in \Omega(x)} \big( \min_{c \in \{r,g,b\}} \tfrac{I^{c}(y)}{A^{c}} \big)$ (5)
Recover the scene radiance $J(x)$ | $J(x) = \dfrac{I(x) - A}{t(x)} + A$ (6)
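Equations (4)–(6) correspond to the standard dark-channel-prior pipeline and can be sketched in a few lines of Python with OpenCV and NumPy. This is a minimal illustration under common default choices from the dark-channel-prior literature (15 × 15 patch, ω = 0.95, a lower bound t0 on the transmission), not the exact implementation used in this work; the file names are placeholders.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a local minimum filter (Eq. 4)."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmosphere(img, dark, top=0.001):
    """Average the brightest 0.1% of dark-channel pixels to estimate the airlight A."""
    n = max(int(dark.size * top), 1)
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Transmission map from the dark channel of the airlight-normalized image (Eq. 5)."""
    return 1.0 - omega * dark_channel(img / A, patch)

def dehaze(bgr, t0=0.1):
    """Recover J(x) = (I(x) - A) / t(x) + A (Eq. 6), with t(x) clamped below by t0."""
    img = bgr.astype(np.float64) / 255.0
    dark = dark_channel(img)
    A = estimate_atmosphere(img, dark)
    t = np.clip(estimate_transmission(img, A), t0, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    hazy = cv2.imread("hazy_ship.jpg")           # hypothetical input path (BGR image)
    cv2.imwrite("dehazed_ship.jpg", dehaze(hazy))
```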
Table 4. Software and hardware configuration.
Configuration | Versions
Hardware model | ASRock X399 Taichi
Memory | 32.0 GiB
Processor | AMD Ryzen™ Threadripper™ 1950X × 32
Graphics | NVIDIA GeForce GTX 1080 Ti
Operating system | Ubuntu 23.04
Operating system type | 64-bit
Toolkit | CUDA 12.0
Kernel version | Linux 6.2.0-37-generic
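On a setup like the one in Table 4, a quick environment check can confirm that the CUDA toolkit and GPU are visible to PyTorch before launching training through the ultralytics API. The sketch below assumes a recent ultralytics release that ships YOLO-v10 weights; the checkpoint name, ship_fire.yaml, and the hyperparameters are placeholders rather than the authors' exact settings.

```python
import torch
from ultralytics import YOLO

# Environment check for the configuration in Table 4 (CUDA 12.0, GTX 1080 Ti).
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU only")

# Hypothetical fine-tuning call; dataset config and hyperparameters are illustrative.
model = YOLO("yolov10m.pt")                              # medium YOLO-v10 checkpoint
model.train(data="ship_fire.yaml", epochs=100, imgsz=640, batch=16)
metrics = model.val()                                    # precision, recall, mAP@0.5 on the validation split
```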
Table 5. Comparative analysis of ship fire detection metrics with SOTA algorithms. Bold letters show high accuracy results.
Models | Precision | Recall | F1 | mAP@0.5
ResNet [62] | 0.524 | 0.531 | 0.53 | 0.494
MobileNet_V1 [63] | 0.675 | 0.542 | 0.60 | 0.581
SSD Inception_V2 [64] | 0.689 | 0.701 | 0.69 | 0.644
Swin transformer [65] | 0.741 | 0.681 | 0.71 | 0.623
DenseNet [66] | 0.671 | 0.612 | 0.64 | 0.563
YOLO-v3-Tiny [67] | 0.797 | 0.821 | 0.81 | 0.745
YOLO-v5s [68] | 0.901 | 0.855 | 0.88 | 0.814
YOLO-v7-tiny [69] | 0.857 | 0.792 | 0.82 | 0.758
YOLO-v8n [70] | 0.897 | 0.907 | 0.90 | 0.856
YOLO-v10m based (Ours) | 0.977 | 0.98 | 0.85 | 0.897
Table 6. Evaluation metrics.
True Positive (TP): The number of instances correctly identified as belonging to the positive class.
True Negative (TN): The number of instances correctly identified as not belonging to the positive class.
False Positive (FP): The number of instances incorrectly identified as belonging to the positive class.
False Negative (FN): The number of instances that belong to the positive class but were not recognized as such by the model.
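The precision, recall, and F1 values reported in Tables 5 and 8 follow directly from these counts. A minimal sketch is shown below; the example counts are illustrative and are not taken from the paper's experiments.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 computed from the counts defined in Table 6."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Illustrative counts only.
p, r, f1 = detection_metrics(tp=980, fp=23, fn=20)
print(f"precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}")
```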
Table 7. Comparative results of dehazing algorithm application to a single image.
Metric | Value
PSNR | 27.4998 dB
SSIM | 0.9177
LOE | 116.9862
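The PSNR and SSIM values in Table 7 can be reproduced with scikit-image, given a haze-free reference image and the corresponding dehazed output of the same size; LOE (lightness order error) needs a separate implementation and is omitted here. A minimal sketch with placeholder file names:

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names; any same-sized haze-free reference / dehazed output pair works.
reference = cv2.imread("clear_ship.jpg")
dehazed = cv2.imread("dehazed_ship.jpg")

psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```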
Table 8. Comparative analysis of ship fire detection model with other similar approaches. Bold letters show high accuracy results.
Name | Base Algorithm | Precision | Recall | F1 | mAP@0.5
Park et al. [80] | Tiny-YOLOv2 | 0.813 | 0.799 | 0.806 | 0.789
Wu et al. [21] | YOLOv4-tiny | 0.863 | 0.836 | 0.849 | 0.804
Avazov et al. [81] | YOLOv7 | 0.868 | 0.852 | 0.860 | 0.813
Zhu et al. [82] | YOLOv7-tiny | 0.872 | 0.869 | 0.870 | 0.845
Ship-Fire Net [83] | YOLOv8n | 0.931 | 0.938 | 0.934 | 0.897 (fire: 0.911, smoke: 0.884)
Ours | YOLO-v10 (m-version) | 0.977 | 0.98 | 0.85 | 0.897 (fire: 0.820, no fire: 0.97)