Article

A Lightweight Dynamically Enhanced Network for Wildfire Smoke Detection in Transmission Line Channels

Yu Zhang, Yangyang Jiao, Yinke Dou, Liangliang Zhao, Qiang Liu and Guangyu Zuo

1 Department of Automation, Taiyuan Institute of Technology, Taiyuan 030008, China
2 Shanxi Energy Internet Research Institute, Taiyuan 030032, China
3 College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
4 Key Laboratory of Cleaner Intelligent Control on Coal & Electricity, Ministry of Education, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(2), 349; https://doi.org/10.3390/pr13020349
Submission received: 3 January 2025 / Revised: 23 January 2025 / Accepted: 25 January 2025 / Published: 27 January 2025
(This article belongs to the Section Energy Systems)

Abstract

To address the problem that existing detection networks perform poorly on dynamic targets such as wildfire smoke, we propose LDENet, a lightweight dynamically enhanced network for wildfire smoke detection in transmission line channels. First, a Dynamic Lightweight Conv Module (DLCM) is devised within the backbone of YOLOv8, enhancing the perception of flames and smoke through dynamic convolution while using the Ghost Module to keep the model lightweight; the DLCM reduces the number of model parameters and improves the accuracy of wildfire smoke detection. Next, the DySample operator is adopted in the upsampling stage to generate more accurate feature maps with very few additional parameters. Finally, the loss function is improved during training: EMASlideLoss strengthens detection of small targets, and the Shape-IoU loss optimizes bounding-box regression for the irregular shapes of wildfires and smoke. Experiments on wildfire and smoke datasets yield a final mAP50 of 86.6%, 1.5% higher than YOLOv8, with 29.7% fewer parameters. These findings demonstrate that LDENet can effectively detect wildfire smoke and help ensure the safety of transmission line corridors.

1. Introduction

Safety monitoring in transmission line channels is a key link in power system inspections, which plays an important role in maintaining the continuity and stability of transmission [1,2]. Since most transmission lines are distributed in deep mountains and jungles and often pass through flammable areas with lush vegetation such as forests, fires are extremely likely to occur. Not only do wildfires cause damage to power facilities such as transmission towers and transmission lines, causing power supply interruptions, but if they are not handled in time, the fire will spread and cause large-scale forest fires [3]. Given the rapid spread of wildfires, there is an urgent need for early detection and timely response. However, due to the wide distribution of transmission lines, traditional manual inspection methods have limited coverage and low operational effectiveness, making it difficult to effectively meet the challenge of wildfire detection in transmission line channels [4]. With the advancement of automated detection technology, its wide detection range and continuous monitoring capabilities are providing new solutions for the timely detection and early warning of forest fires and are expected to significantly improve the efficiency and effectiveness of the safety monitoring of transmission line channels.
Owing to the peculiar appearance of objects such as wildfires and smoke, detecting wildfire smoke is a challenging computer vision task [5,6]. First, wildfires often occur against complex backgrounds such as forests and mountains, and the large amount of irrelevant background introduces interference during detection [7]. Second, flames and smoke are amorphous objects with no fixed size or shape, which makes them significantly different from conventional object detection targets [8]. Whether the target is a small flame at the initial stage of a fire or a large-scale blaze, it places high demands on the generalization ability of the detection model. Although existing wildfire smoke detection technology has made some progress, these challenges still need to be resolved to improve detection accuracy and reliability.
This study was designed to address the challenges of detecting wildfires and smoke in transmission line channels. Traditional feature-based detection methods attempt to identify wildfires and smoke using features such as color, shape, and texture, but such algorithms, built on prior knowledge, struggle to exclude interference from flame- and smoke-like objects such as red items and clouds, resulting in low detection accuracy. Current deep learning detection technology can capture complex image information through automatic feature extraction, but most algorithms do not consider the needs of edge computing in their design and lack a lightweight design for resource-constrained edge devices; they also ignore the dynamic characteristics of wildfire smoke, which appears as small targets with changeable shapes at a distance, and their detection performance degrades under complex terrain conditions.
Specifically, the goal is to develop a detection algorithm that can accurately identify wildfire smoke even though the shape of the target is not fixed. We aim to improve detection accuracy while keeping the algorithm lightweight, which is crucial for deployment on edge devices in the field. In doing so, we hope to improve the efficiency and effectiveness of safety monitoring in transmission line channels and minimize the damage that wildfires cause to power facilities and electricity supply.
Therefore, this article proposes a lightweight dynamic smoke and fire detection algorithm for transmission line channels, targeting objects without fixed shapes such as flames and smoke; it improves detection accuracy while meeting lightweight requirements, facilitating deployment on edge devices. The design is based on YOLOv8 [9]: on the one hand, dynamic modules are incorporated into the network structure, and on the other hand, the parameter count is kept small to ensure real-time detection. The principal contributions of this paper are as follows:
1. A dynamic lightweight convolution module is put forward, which combines dynamic convolution with a lightweight model to enhance the backbone network’s ability to perceive flames and smoke.
2. In the upsampling section, a dynamic upsampling technique is employed to enhance the model’s detection capacity with the addition of only a small number of parameters.
3. The loss function was improved to increase attention to targets such as flames, and shape optimization was added to the bounding box regression.
4. We designed a lightweight dynamically enhanced network (LDENet) to detect smoke from wildfires in transmission line corridors. We performed experiments on the dataset and obtained an mAP50 value of 86.6%, with the number of parameters being reduced by 29.7%.
This paper is organized as follows: Section 1 introduces the background and significance of detecting wildfire smoke in transmission line channels, emphasizing the potential threat of wildfires to transmission infrastructure and their impact on electricity supply. Section 2 reviews the state of the art in wildfire smoke detection, examining existing technologies and analyzing their advantages and disadvantages. Section 3 describes LDENet and its innovations in detail, focusing on the DLCM, DySample, and the loss function. Section 4 presents a series of experiments that rigorously validate the effectiveness of the proposed model. Finally, Section 5 summarizes the main findings and contributions, points out the limitations of the research, and outlines future directions.

2. Related Works

At present, the detection of wildfire smoke in images can be categorized into traditional feature-based detection approaches [10] and deep learning-based wildfire smoke detection techniques [11]. Traditional feature-based detection approaches primarily utilize color [12], shape [13], and texture [14] for the identification of wildfires and smoke.
Since forest fires have obvious color features, the color information in the image can be classified using color spaces such as RGB and HSV to detect whether a forest fire has occurred. Yuan et al. [15] used the image difference in the RGB model to detect smoke and introduced an extended Kalman filter online reshaping detection method to enhance the generalization ability of the model. Sudhakar et al. [16] converted the RGB image to the Lab color model and then set color thresholds based on the unique color characteristics of flames to identify forest fires. Beyond color information, artificially designed feature descriptors can likewise be employed to extract image features, with classification techniques used for detection. Dalal et al. [17] utilized local binary patterns (LBPs) [18] to obtain image texture features and then integrated them into a hybrid model; the LBPs brought additional texture information, which effectively improved detection accuracy in complex environments. Alamgir et al. [19] put forward a local binary co-occurrence model (RGB_LBCoP), which combined local binary patterns with texture co-occurrence features in the RGB color space to depict smoke features, and finally used a support vector machine (SVM) [20] for smoke classification and recognition. Traditional feature-based methods can detect forest fires and smoke by combining features such as color and texture. However, such algorithms based on prior knowledge cannot exclude objects similar to flames and smoke, such as red objects and clouds, and their detection accuracy for wildfires is not high.
The image features extracted by traditional methods are relatively simple, while deep learning-based detection methods can acquire more complex image information through automatic feature extraction. According to the detection paradigm, they can be divided into wildfire smoke detection methods based on image classification, object detection, and semantic segmentation. Classification-based wildfire smoke detection trains a classifier to perform detection. Gong et al. [21] proposed a dark channel-assisted hybrid attention method that integrates dark channel information into the neural network to improve smoke discrimination. Khan et al. [22] designed a stacked encoding efficient network, SE-EFFNet, which used EfficientNet [23] as the backbone and added residual connections to ensure accurate fire identification. Recently, some researchers have also used semantic segmentation to detect wildfire smoke. Hu et al. [24] proposed GFUNet, a segmentation algorithm based on the U-Net [25] architecture that integrates depthwise separable pyramid pooling and a spatial-channel attention mechanism; it was verified on a grassland fire smoke dataset and could effectively segment smoke areas. Yuan et al. [26] proposed a Newton interpolation network that extracts image information by analyzing feature values at the same position in encoded feature maps of different scales.
At present, most researchers use object detection algorithms to detect wildfires and smoke. Object detection algorithms can be divided into two-stage and single-stage approaches. Representative two-stage algorithms include R-CNN [27] and Fast R-CNN [28]. Cheknane et al. [29] proposed a new two-stage detection algorithm based on Faster R-CNN [30] for detecting forest fires and smoke, adding a hybrid feature extractor to the backbone network to provide a larger number of feature maps. Zhang et al. [31] proposed a multi-scale feature extraction model for small-target wildfire detection and introduced an attention module into the region proposal network so that the model paid more attention to the semantics and location of small targets. Two-stage detectors are fairly complicated in design, have more parameters, and are slow, so single-stage detection has become the mainstream. Single-stage algorithms directly generate prediction boxes and category information; the representative family is YOLO (You Only Look Once) [32], whose typical members include YOLOv3 [33], v5, v8, and the latest v11. Yang et al. [34] proposed a network for detecting wildfire smoke, adding a Swin Transformer [35] detection head to the neck of YOLOv5 to improve detection accuracy for small-target smoke. Yuan et al. [36] designed FS-YOLO, which captures flame features more accurately by integrating cross-stage hybrid attention, a pyramid network, and pooling methods. Huang et al. [37] proposed a wildfire detection model that used GhostNet as the backbone and added RepViTBlock to the neck to enhance image feature extraction; experiments on a wildfire dataset around power lines achieved good detection results. Alkhammash [38] conducted a comparative analysis of YOLOv9, v10, and v11 for smoke and fire detection, showing that YOLOv11n performed well in accuracy, recall, and other indicators on specific datasets. Mamadaliev et al. [39] proposed the ESFD-YOLOv8n model for early smoke and fire detection, which improved flame detection by replacing the C2f module and adopting the WIoUv3 loss function; however, its effectiveness on long-distance fires is poor, and missed or false detections may occur. Muksimova et al. [40] proposed an improved UAV-based Miti-DETR model for wildfire detection, redesigning the AlexNet backbone and adding new mechanisms, but its adaptability to complex environments still needs strengthening. Sun et al. [41] proposed the Smoke-DETR model, which improves smoke feature extraction by introducing ECPConv and EMA modules on top of RT-DETR. Wang et al. [42] introduced an MS Transformer into the YOLOv7 architecture to enhance the flow of feature information in the model. However, adding transformer architectures greatly increases parameter and computational complexity, and the weak computing power of edge devices cannot bear the inference cost of these models.
The fire detection algorithms above exhibit several key issues. On the one hand, most do not consider the needs of edge computing in their design and lack a lightweight structure for resource-constrained edge devices, leading to a heavy demand for computing resources that complicates deployment on edge devices. On the other hand, in the task of monitoring wildfires and smoke in transmission line channels, the targets are mostly small, distant objects with significant morphological changes, yet existing algorithms do not account for this dynamic morphology; their detection performance is also poor under complex terrain conditions. Therefore, we propose a dynamic, lightweight YOLO detection algorithm for long-distance small targets such as wildfires and smoke against complex backgrounds within transmission line channels. The dynamic features of wildfires and smoke are extracted using a dynamic lightweight convolution module and a dynamic upsampling module, and the optimized loss function improves detection accuracy for small targets. Multiple complex datasets are used during training to strengthen the model against complex backgrounds, ultimately reducing the number of model parameters while improving detection accuracy.

3. Materials and Methods

3.1. YOLOv8

YOLOv8 is a version of YOLO launched by Ultralytics in early 2023. YOLOv8 continues the design ideas of previous generations of YOLO algorithms. Compared with YOLOv5, YOLOv8 has made significant architectural optimizations: the C2f was introduced to replace the C3 module, while the CSP (Cross Stage Partial) module was retained, and a lightweight design was carried out on this basis. In addition, YOLOv8 removes some convolution modules in the design of the neck network, further reducing the number of parameters and improving the calculation efficiency. In addition to architectural improvements, YOLOv8 has also made a series of optimizations to the detection head and loss function to improve detection performance and accuracy.
The network framework of YOLOv8 is made up of three parts, namely, Backbone, Neck, and Head. Backbone uses C2f as the basic module and combines the residual structure to form a feature extraction network. Neck continues the previous PAN-FPN feature pyramid idea and deeply integrates the features extracted by Backbone. Head uses a decoupled head design to decouple the original detection head into two detection heads, which calculate the prediction box and category information, respectively. In view of the uncertainty of the flame and smoke shape in the wildfire smoke detection task, we improved YOLOv8 and proposed a DLCM in Backbone to increase the feature extraction capability of the backbone network. At the same time, the DySample module [43] was introduced in the Neck part to effectively improve the dynamic perception ability of the model during the upsampling process. Finally, during the entire training process, Shape-IoU [44] and EMASlideLoss [45] were utilized to make the whole network pay greater attention to the morphological characteristics of wildfires and smoke.

3.2. DLCM

The core module in YOLOv8 is the C2f module, which improves the C3 module of YOLOv5 to obtain a module with better feature extraction capabilities. Figure 1 shows the C2f module.
In the C2f module, the features are initially processed by a standard convolutional layer to reduce the number of channels by half. The features are then divided by the Split operation, and several Bottleneck modules are adopted to extract features. The different divided blocks are then concatenated, and finally the number of channels is restored by a convolutional layer. In order to further optimize the module structure and improve the feature extraction capability of the module, we proposed the DLCM, which reduces the number of model parameters by combining Dynamic Convolution [46] and the Ghost Module [47] and boosts the dynamic feature extraction capability of the C2f module.

3.2.1. DynamicConv

Because the shapes of flames and smoke change constantly, the feature extraction ability of static convolution is weakened in this context. We therefore used dynamic convolution to extract features in the backbone network. Dynamic convolution has N convolution kernels, each assigned its own attention weight, and thus has a stronger feature extraction ability than static convolution. Figure 2 is a schematic diagram of dynamic convolution.
When processing the input features, we first divide them into N blocks of the same size and number of channels. Subsequently, each block is assigned a unique attention weight by introducing an attention weighting mechanism. Finally, these weighted feature blocks are fused through linear summation to form the output of the network. This design enables the weight information in DynamicConv to be adaptively adjusted according to the input features, thus considerably enhancing the flexibility of the model. Since the multiple convolution kernels used by DynamicConv are small in size, the number of parameters of the model will not be significantly increased. Based on this advantage, we chose to replace the conventional convolutional layers in the Bottleneck structure with DynamicConv to achieve the dynamic perception and aggregation of wildfire and smoke image features.
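To make the mechanism concrete, the following is a minimal PyTorch sketch of dynamic convolution in the spirit of attention-over-kernels [46]. The class and attribute names are ours, and the single-layer attention branch is a common simplification, not necessarily the exact configuration used in LDENet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    """Sketch of dynamic convolution: N candidate kernels are aggregated
    per input sample using input-dependent attention weights."""
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.num_kernels = num_kernels
        # N candidate kernels stored in one parameter tensor
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        # attention branch: global pooling -> FC -> softmax over the N kernels
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))
        self.pad = k // 2

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = F.softmax(self.attn(x), dim=1)             # (B, N)
        # aggregate the N kernels per sample: (B, out_ch, in_ch, k, k)
        w_agg = torch.einsum('bn,noikl->boikl', alpha, self.weight)
        # grouped-conv trick: apply a different kernel to each sample
        x = x.reshape(1, b * c, h, w)
        w_agg = w_agg.reshape(-1, c, *w_agg.shape[-2:])
        out = F.conv2d(x, w_agg, padding=self.pad, groups=b)
        return out.reshape(b, -1, h, w)
```

Because only the small attention branch is added on top of the N kernels, the parameter overhead stays modest, which is consistent with the lightweight goal of the DLCM.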

3.2.2. Ghost Module

The feature maps generated by ordinary convolution contain abundant redundant information, which gives the network model a comprehensive understanding of the image. However, generating these redundant feature maps is computationally expensive, so producing them through cheaper operations is crucial for a lightweight model. The Ghost Module can generate a large number of feature maps with only a small amount of computation. The structure of the Ghost Module is shown in Figure 3.
The Ghost Module comprises two key steps. First, it halves the number of channels of the input features through a small number of convolution operations, significantly reducing computational complexity. Second, it applies cheap linear operations, in particular group convolution, to transform those features and obtain new feature representations. Compared with standard convolution, the Ghost Module significantly reduces the required computing resources while maintaining performance. In the C2f module, we replaced the traditional Bottleneck structure with the Ghost Module and combined it with DynamicConv. This improvement not only lightens the model structure but also enhances the feature extraction capability of the DLCM. With this design, our model captures and processes image features of wildfires and smoke more effectively while remaining computationally efficient.
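A minimal PyTorch sketch of these two steps follows, assuming a 1×1 primary convolution and a 3×3 depthwise convolution as the cheap operation; both kernel sizes are illustrative choices rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of the Ghost Module [47]: a primary convolution produces half
    of the output channels, and a cheap depthwise convolution generates
    the remaining 'ghost' features."""
    def __init__(self, in_ch, out_ch, k=1, cheap_k=3):
        super().__init__()
        init_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        # cheap operation: depthwise (grouped) convolution
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, init_ch, cheap_k, padding=cheap_k // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # (B, out_ch, H, W)
```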

3.2.3. Dynamic Lightweight Conv Module

The DLCM first replaces the two convolutions in the Ghost Module with dynamic convolutions, which enhances the module’s dynamic feature extraction capability. Secondly, the Bottleneck in C2f is substituted with the Ghost Module, which further decreases the quantity of parameters and computations in the entire module. Figure 4 presents a schematic diagram of the DLCM.
The following (Algorithm 1) is the pseudocode implementation of DLCM:
Algorithm 1: DLCM
1: Feature = Conv(Input)
2: Feature1, Feature2 = Split(Feature)
3: Feature1 = DynamicGhost(Feature1)
4: Feature_fuse = Concat(Feature1, Feature2)
5: Output = Conv(Feature_fuse)
First, the input features undergo an ordinary convolution that halves the number of channels to reduce computational cost. The features are then split: one part passes through stacked Ghost Modules, with residual connections applied around them, while the other part is retained for concatenation. Finally, the number of channels is restored through an ordinary convolution. The DLCM significantly reduces the computation and parameter count of the model and enhances the accuracy of wildfire and smoke detection through dynamic convolution.
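Assembling the pieces, the following is a hedged PyTorch sketch of the DLCM structure of Algorithm 1, reusing the illustrative GhostModule sketch above. For brevity it uses plain Ghost blocks, whereas the paper replaces their internal convolutions with DynamicConv ("DynamicGhost"); the channel split and block count are assumptions.

```python
import torch
import torch.nn as nn

class DLCM(nn.Module):
    """Sketch of Algorithm 1: halve channels, split, pass one branch
    through residual Ghost blocks, concatenate, restore channels."""
    def __init__(self, ch, n_blocks=2):
        super().__init__()
        self.cv1 = nn.Conv2d(ch, ch // 2, 1)        # halve channels
        # in the paper these are Ghost Modules built on DynamicConv
        self.blocks = nn.ModuleList(
            GhostModule(ch // 4, ch // 4) for _ in range(n_blocks))
        self.cv2 = nn.Conv2d(ch // 2, ch, 1)        # restore channels

    def forward(self, x):
        f = self.cv1(x)
        f1, f2 = f.chunk(2, dim=1)                   # Split
        for blk in self.blocks:
            f1 = f1 + blk(f1)                        # residual Ghost blocks
        return self.cv2(torch.cat([f1, f2], dim=1))  # Concat + Conv
```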

3.3. DySample

In the Neck part, the features of different scales generated by the Backbone are fused and extracted. The core component of the Neck is the PAN-FPN feature pyramid, which contains an upsampling operation that is crucial to the final detection accuracy. In YOLOv8, the default upsampling uses the nearest-neighbor algorithm, which considers only surrounding pixels when generating feature maps and is not effective for irregularly shaped objects such as wildfire smoke, while heavier learned upsampling operators demand a large number of parameters. We therefore used DySample for the upsampling operations. DySample is a dynamic and lightweight upsampling method whose core idea is to perform upsampling through point sampling. It reduces computation, and its dynamic sampling allows the model to fully capture the characteristics of flames and smoke and generate more accurate feature maps.
First, grid sampling is performed. Given an input feature map of size C × H × W, it is upsampled to C × sH × sW using bilinear interpolation. Equation (1) is the grid sampling formula:

$$X' = \mathrm{Up}(X) \tag{1}$$

where $X$ is the input feature, $\mathrm{Up}$ is the bilinear interpolation operator, and $X'$ is the interpolated feature map. Then, a linear layer is used to generate the offset $O$:

$$O = \mathrm{Linear}(X) \tag{2}$$

The generated offset $O$ has size $2s^2 \times H \times W$ and is reshaped to $2 \times sH \times sW$. An original sampling grid $G$ is then generated, and the reshaped offset is added to it to obtain the sampling set $S$:

$$S = G + O \tag{3}$$

where $S$ is the sampling set, $G$ is the sampling grid, and $O$ is the offset. Finally, the bilinearly interpolated feature map $X'$ is resampled at the positions in $S$ to obtain the final upsampled feature map $X_{out}$:

$$X_{out} = \mathrm{GridSample}(X', S) \tag{4}$$
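The following is a simplified PyTorch sketch of this pipeline (Equations (1)–(4)). It omits DySample's grouping and the LP/PL variants discussed in Section 4.4, fuses the bilinear interpolation of Equation (1) into the final grid_sample call, and uses an illustrative offset scaling; it is a sketch of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySampleSketch(nn.Module):
    """Dynamic point-sampling upsampler: learned offsets perturb a
    regular sampling grid, and grid_sample reads the input bilinearly."""
    def __init__(self, ch, scale=2):
        super().__init__()
        self.scale = scale
        # Eq. (2): 1x1 conv acting as the linear layer, 2*s^2 offset channels
        self.offset = nn.Conv2d(ch, 2 * scale * scale, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        s = self.scale
        # reshape offsets from 2*s^2 x H x W to 2 x sH x sW
        o = F.pixel_shuffle(self.offset(x), s)             # (B, 2, sH, sW)
        # original sampling grid G in normalized [-1, 1] coordinates
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, s * h), torch.linspace(-1, 1, s * w),
            indexing='ij')
        g = torch.stack((xs, ys)).unsqueeze(0).to(x)       # (1, 2, sH, sW)
        # Eq. (3): S = G + O, with pixel offsets scaled to grid units
        grid = g + o * 2 / torch.tensor([w, h]).view(1, 2, 1, 1).to(x)
        # Eqs. (1)+(4): bilinear resampling of x at the dynamic positions
        return F.grid_sample(x, grid.permute(0, 2, 3, 1),
                             mode='bilinear', align_corners=True)
```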

3.4. Loss

In YOLOv8, the loss function is structured in two components: classification loss and boundary loss. The classification loss applies the cross-entropy loss function, while the boundary loss function takes the form of the CIOU loss. Although the cross-entropy loss function performs well in many scenarios, it cannot effectively optimize the model due to the dynamic changes in the detection object in wildfire smoke detection. Additionally, the CIOU boundary regression loss solely considers the geometric divergence between the predicted box and the actual box and does not consider the shape information. Therefore, we improved the loss function, using EMASlideLoss for classification loss and Shape-IoU for regression loss.

3.4.1. EMASlideLoss

EMASlideLoss combines the exponential moving average (EMA) and the sliding window mechanism and is a variant of SlideLoss. The design idea of SlideLoss is to use a sliding window to calculate the loss function. By calculating the loss function in each window, the prediction information of different scales and positions is obtained, which improves the detection ability for small objects. Equation (5) shows the calculation method for EMASlideLoss.
$$f(x) = \begin{cases} 1, & x \le \mu - 0.1 \\ e^{1-\mu}, & \mu - 0.1 < x < \mu \\ e^{1-x}, & x \ge \mu \end{cases} \tag{5}$$

Here, $\mu$ represents the average IoU of all bounding boxes: samples with IoU below $\mu$ are treated as negative samples, and samples above $\mu$ as positive samples. When $x \le \mu - 0.1$, difficult-to-distinguish negative samples receive a more severe penalty, forcing the model to learn to discriminate hard samples. When $\mu - 0.1 < x < \mu$, the loss function acts as a smoothing transition. When $x \ge \mu$, the model is encouraged to increase its predictions for correct samples.
EMA smooths the loss by applying an exponential moving average to the window loss of SlideLoss, suppressing noise and improving the generalization of the model. Compared with traditional loss functions, EMASlideLoss's smoothing mechanism adapts to the dynamic changes in smoke, and its sliding windows improve detection of small-target areas. In wildfire smoke detection, EMASlideLoss can therefore effectively optimize dynamically changing objects such as smoke.
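As a hedged sketch of how such a weighting could be applied, the class below implements the piecewise form of Equation (5) with an EMA-tracked $\mu$; the resulting weights multiply the per-sample classification loss. Names and the EMA decay are assumptions, not the authors' exact implementation.

```python
import math
import torch

class EMASlideWeight:
    """Sketch of EMASlideLoss weighting: mu is an EMA of the batch mean
    IoU, and Eq. (5) maps each sample's IoU to a loss weight."""
    def __init__(self, decay=0.999):
        self.decay, self.mu = decay, 0.0  # mu warms up from 0

    def __call__(self, iou: torch.Tensor) -> torch.Tensor:
        # update the EMA-smoothed mean IoU
        self.mu = self.decay * self.mu + (1 - self.decay) * iou.mean().item()
        w = torch.ones_like(iou)                     # x <= mu - 0.1
        band = (iou > self.mu - 0.1) & (iou < self.mu)
        w[band] = math.exp(1.0 - self.mu)            # smoothing band
        pos = iou >= self.mu
        w[pos] = torch.exp(1.0 - iou[pos])           # emphasize positives
        return w  # multiply into the per-sample BCE classification loss
```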

3.4.2. Shape–IoU

In LDENet, we employed the Shape-IoU loss function as a substitute for the CIOU loss function. The Shape-IoU loss function takes into account not just the location of the bounding box but also the shape information of the detected target, making the positioning more accurate. The following is the calculation formula for Shape-IoU:
$$L_{Shape\text{-}IoU} = 1 - IoU + D^{shape} + 0.5 \times \Omega^{shape} \tag{6}$$

where $IoU$ measures the overlap between the predicted box and the ground truth box, $D^{shape}$ is the shape-weighted distance term computed from the horizontal and vertical distances between the centroids of the predicted and ground truth bounding boxes, and $\Omega^{shape}$ captures the difference in width and height between the predicted and ground truth boxes. The IoU is computed as

$$IoU = \frac{|B \cap B^{gt}|}{|B \cup B^{gt}|} \tag{7}$$

where $B$ is the predicted box and $B^{gt}$ the ground truth box. The distance term is

$$D^{shape} = hh \times \frac{(x_c - x_c^{gt})^2}{c^2} + ww \times \frac{(y_c - y_c^{gt})^2}{c^2} \tag{8}$$

where $ww$ and $hh$ are the weight coefficients along the horizontal and vertical axes, $(x_c, y_c)$ is the center of the predicted box, $(x_c^{gt}, y_c^{gt})$ is the center of the ground truth box, and $c$ is the diagonal length of the minimum enclosing box. The shape term is

$$\Omega^{shape} = \sum_{t=w,h} (1 - e^{-\omega_t})^{\theta} \tag{9}$$

where $\omega_t$ is the normalized width or height difference between the ground truth box and the predicted box. Wildfire smoke typically has irregular and diverse shapes, and traditional loss functions overlook this complexity. By introducing shape information, the Shape-IoU loss guides the model to learn the true shape features of smoke, making bounding box prediction more accurate. In addition, the distance term accounts for center-point offsets, enabling the model to adapt to positional changes in irregularly shaped, variably sized smoke regions and improving localization accuracy.
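A hedged PyTorch sketch of Equations (6)–(9) for boxes in (x1, y1, x2, y2) format is shown below; the scale factor from the original Shape-IoU formulation [44] is omitted for brevity, and the weight definitions follow the simplified form above.

```python
import torch

def shape_iou_loss(pred, gt, theta=4.0, eps=1e-7):
    """Simplified Shape-IoU loss for (N, 4) tensors of corner boxes."""
    # widths, heights, and centers
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (gt[:, 0] + gt[:, 2]) / 2, (gt[:, 1] + gt[:, 3]) / 2
    # IoU (Eq. (7))
    iw = (torch.min(pred[:, 2], gt[:, 2]) - torch.max(pred[:, 0], gt[:, 0])).clamp(0)
    ih = (torch.min(pred[:, 3], gt[:, 3]) - torch.max(pred[:, 1], gt[:, 1])).clamp(0)
    inter = iw * ih
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)
    # shape-dependent weights ww, hh from the ground-truth aspect ratio
    ww = 2 * w2 / (w2 + h2 + eps)
    hh = 2 * h2 / (w2 + h2 + eps)
    # squared diagonal of the minimum enclosing box (c^2)
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    # distance term D^shape (Eq. (8))
    d_shape = hh * (cx1 - cx2) ** 2 / c2 + ww * (cy1 - cy2) ** 2 / c2
    # shape term Omega^shape (Eq. (9)) with normalized size differences
    omega_w = torch.abs(w1 - w2) / torch.max(w1, w2)
    omega_h = torch.abs(h1 - h2) / torch.max(h1, h2)
    omega = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta
    # Eq. (6)
    return 1 - iou + d_shape + 0.5 * omega
```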
We combined the EMASlideLoss and Shape-IoU loss functions to replace the loss function in YOLOv8, enhancing the model’s ability to detect small dynamic targets such as wildfire smoke.

3.5. LDENet

We designed a lightweight dynamic enhancement for YOLOv8 and proposed LDENet, which can be used to detect wildfire smoke in transmission line channels. First, DLCM was designed in the Backbone part. Through dynamic convolution and the lightweight Ghost Module, richer feature information was obtained with a lower number of parameters. Then, DySample was introduced into the Neck part to achieve dynamic upsampling with a very small number of parameters and generate more accurate upsampling feature maps. Finally, by improving the loss function, the model’s detection performance for wildfire smoke targets during training was effectively improved. The final network structure of LDENet is shown in Figure 5.
The following (Algorithm 2) is the pseudocode implementation of LDENet:
Algorithm 2: LDENet
1: Feature = Conv(Input)
2: for i = 1 to 4
3:    Feature = Conv(Feature)
4:    Feature = DLCM(Feature)
5: Feature = SPPF(Feature)
6: Feature = FPN(Feature)
7: Output = Detect_Head(Feature)
The Backbone consists of multiple convolutions and DLCMs, with an SPPF module at the bottom to enrich the final feature information. The Neck fuses the different levels of Backbone features, combining low-level detail with high-level semantics through the feature pyramid network so that the fused features represent the image more comprehensively; the DySample module is used in the upsampling operation. Finally, the Head outputs classification and boundary information through the decoupled head, with the classification loss implemented by EMASlideLoss and the boundary regression loss by the Shape-IoU loss function.

4. Results and Discussion

4.1. Experimental Dataset and Environment

The experimental dataset used in this paper is composed of Flame [48], M4SFWD [49], and the Wildfire Smoke Dataset. The Flame dataset, released by Northern Arizona University, was captured by drones flying over Arizona pine forests and contains aerial images of flames from all directions. The Wildfire Smoke Dataset was jointly released by AI For Mankind and HPWREN; it uses weather stations deployed in the wild to capture images of smoke generated by flames, and because the stations are located at high altitude, the captured images take a high-altitude perspective. Both datasets offer aerial, bird's-eye views, which effectively simulate the viewpoint of wildfire smoke detection in transmission line channels. Since a real collection site is often limited to one climate and weather condition, it is impossible to collect data under all weather conditions; M4SFWD therefore uses Unreal Engine technology to simulate wildfire smoke in wild environments, providing images under different weather, terrain, and lighting conditions that greatly enrich the sample diversity. Finally, after data cleaning, images from the three datasets were selected to form a wildfire smoke detection dataset of 3007 images, partitioned into training, validation, and test sets in a 7:2:1 ratio. The total number of annotated targets is 9888, comprising 5844 wildfire targets and 4044 smoke targets. Figure 6 shows sample images from the dataset.
In all experiments presented in this paper, the operating system is Ubuntu 20.04, the GPU is an RTX 4090, and the deep learning framework is PyTorch. The specific training parameters are as follows: the learning rate is 0.001, the number of epochs is 300, and the batch size is 48.
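For reference, the snippet below sketches this training setup using the Ultralytics API on which YOLOv8 is based. The 'ldenet.yaml' model file and 'wildfire_smoke.yaml' dataset file are hypothetical names for configurations the reader would create; they are not shipped with the paper.

```python
from ultralytics import YOLO

# hypothetical model config describing the LDENet architecture
model = YOLO('ldenet.yaml')

model.train(
    data='wildfire_smoke.yaml',  # hypothetical dataset config (7:2:1 split)
    epochs=300,                  # training schedule from Section 4.1
    batch=48,
    lr0=0.001,                   # initial learning rate
    device=0,                    # single RTX 4090 in the paper's setup
)
```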

4.2. Experimental Evaluation Indicators

In this experiment, our evaluation indicators included precision, recall, F1 value, and mAP50. The evaluation indicators of lightweight degree included model parameters and FLOPs. Precision indicates the ratio of true-positive samples to the total number of samples predicted to be positive. In this context, TP denotes positive samples that are accurately recognized, TN denotes negative samples that are accurately recognized, FP denotes negative samples that are misclassified, and FN denotes positive samples that are misclassified. The calculation formula for Precision is as follows:
$$Precision = \frac{TP}{TP + FP} \tag{10}$$
The recall rate represents the ratio of accurately identified positive samples to the total number of positive samples. Its calculation formula is as follows:
$$Recall = \frac{TP}{TP + FN} \tag{11}$$
The F1 value represents the harmonic mean between precision and recall, and is a balanced indicator. Its calculation formula is as follows:
$$F1 = \frac{2TP}{2TP + FP + FN} \tag{12}$$
The mean average precision (mAP) is defined as the average of all average precision (AP) values across all categories. It serves to mirror the overall precision of the model. The average precision (AP), which is the area under the precision–recall (PR) curve, gauges the accuracy of the model’s prediction for a single category. The mean average precision at 50% intersection over union (mAP50) is the mean AP computed by mAP when the intersection over union (IoU) is set to 0.5. The mAP50 value has a positive correlation with the model’s detection ability.
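For concreteness, the small function below computes the count-based metrics of Equations (10)–(12); the function name and example counts are illustrative.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from TP/FP/FN counts (Eqs. (10)-(12))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, f1

# example: 859 correct detections, 141 false alarms, 188 misses
# yields precision ~0.859, recall ~0.820, F1 ~0.839
print(detection_metrics(859, 141, 188))
```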
The parameter count measures the size of the entire detection model and determines memory usage during inference. FLOPs, the number of floating-point operations performed by the model, measure the complexity of the detection model and give an indication of detection speed.

4.3. Ablation Experiment

To verify the effectiveness of the approach presented in this paper, we conducted ablation experiments with different control groups. A total of eight experiments were set up. The first group was the YOLOv8s base network. The next three groups added the DLCM, DySample, and the improved loss individually on top of YOLOv8s to verify whether each module could improve detection accuracy. The following three groups combined the improvements in pairs to further verify module effectiveness, and the last group was LDENet. The experimental results are shown in Table 1.
As shown in Table 1, the YOLOv8s base network achieved the lowest detection performance, handling dynamic small targets such as wildfires and smoke poorly. After improving the Backbone, mAP50 increased by 0.7%, and the parameter count fell from 11.12 M to 7.79 M, showing that the DLCM reduces parameters while enhancing recognition accuracy for dynamic targets. DySample increased the parameter count by only about 3%, while the F1 value rose by 0.4% and mAP50 reached 85.3%. After improving the loss function, both the F1 and mAP values improved significantly. Finally, combining the DLCM, DySample, and the improved loss with YOLOv8s yields LDENet, which achieved the highest accuracy, with an F1 value of 83.5%, an mAP50 of 86.6%, and a 29.7% parameter reduction. These results show that LDENet improves wildfire and smoke detection while reducing the parameter count: the DLCM and DySample strengthen the model's dynamic detection ability, and the EMASlideLoss and Shape-IoU loss functions speed up optimization and thereby enhance the detection precision of LDENet.

4.4. Upsampling Parameter Experiments

In the DySample dynamic upsampling operator, the group parameter is used to group convolutions, reducing the number of model parameters and improving computational efficiency. Different group settings therefore affect the final sampling effect, so we conducted experiments on the group parameter. In addition, there are two upsampling variants: LP, which applies the linear transformation first and then pixel shuffling, and PL, which shuffles pixels first and applies the linear transformation afterwards. We experimented with both methods; the results are shown in Figure 7.
From Figure 7, we can see that the LP method performs better than the PL method: PL shuffles pixels first, destroying the original feature structure and degrading the result. The best effect is achieved with the group parameter set to four; with either more or fewer groups, the sampling effect does not improve. Therefore, in all experiments in this paper, the LP method was used with the group parameter set to four.

4.5. Comparison with Other Algorithms

To assess the efficacy of LDENet, it is contrasted with other target detection approaches. These include classic algorithms within the YOLO series, as well as methods like RT-DETR. The experimental outcomes are presented in Table 2.
As shown in Table 2, YOLOv5s and YOLOv9s have fewer parameters, but their detection accuracy is not high. YOLO11s, the latest target detection algorithm, does not surpass LDENet in accuracy and has more parameters and computation than LDENet. RT-DETR [50] is a transformer-based object detection algorithm whose parameter count and computation are significantly higher than those of the YOLO series; moreover, because its self-attention mechanism is difficult to train, its F1 value and mAP50 are clearly lower than those of the other algorithms. Overall, LDENet performs best in terms of both detection accuracy and model lightness. A comparison of the detection results of typical algorithms is presented in Figure 8.
As shown in Figure 8, in the case of the complex smoke scene shown in the images, LDENet can detect most of the smoke. DLCM and DySample add dynamic detection capabilities to the model, making it easy to deal with complex smoke scenes. However, YOLOv8 misses most of the smoke, and YOLO11 performs slightly better than YOLOv8.
As shown in Figure 9, against complex backgrounds, neither YOLOv8 nor YOLO11 detected the flame, and only LDENet completely identified the flame under the smoke, proving that LDENet can identify small targets well, which is crucial in the early detection of wildfires and smoke. Figure 10 shows a heat map comparison between YOLOv8 and LDENet.
In the heat maps, blue represents the background, while darker yellow indicates stronger model attention to an area. As can be seen from Figure 10, YOLOv8 fails to attend to the location of the smoke and consistently focuses on the wrong places. LDENet with the DLCM effectively resolves this problem, and the focus of the network model falls essentially on the smoke locations in the image.

5. Conclusions

We proposed a dynamic, lightweight enhanced target detection network for detecting wildfires and smoke in transmission line channels. To address the dynamic changes in wildfire and smoke morphology, we designed the DLCM, which uses dynamic convolution to strengthen the model's extraction of wildfire smoke morphological features while staying lightweight. In addition, the upsampling operator of the neck network was replaced with DySample to further enhance the model's dynamic perception. Finally, the loss function was improved around the morphological characteristics of wildfire smoke, combining the EMASlideLoss classification loss, which is better suited to small-target detection, with the Shape-IoU boundary regression loss, which considers target morphology. The final mAP50 of LDENet is 86.6%, and its parameters are reduced by 29.7% compared with the YOLOv8 base network.
When flames and smoke obstruct each other, LDENet cannot comprehensively detect smoke, so the next key research direction is to address this type of obstruction problem. Since the environment in which power transmission lines are located varies greatly, there is a lack of real datasets in different environments. The next step is to collect more wildfire smoke images in different backgrounds to expand our dataset. In addition, we will continue to explore new lightweight detection methods to optimize performance and efficiency in wildfire and smoke detection.

Author Contributions

Conceptualization, Y.Z. and Y.D.; methodology, Y.Z., Y.J. and L.Z.; software, Y.Z.; validation, Y.D. and Q.L.; formal analysis, Y.Z., Y.J. and G.Z.; resources, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Technology Project “Development of Key Technologies for Resonance-Based Ice Removal Robots” (24CXY0923).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, Z.; Zhang, Y.; Wu, H.; Suzuki, S.; Namiki, A.; Wang, W. Design and Application of a UAV Autonomous Inspection System for High-Voltage Power Transmission Lines. Remote Sens. 2023, 15, 865. [Google Scholar] [CrossRef]
  2. Luo, Y.; Yu, X.; Yang, D.; Zhou, B. A survey of intelligent transmission line inspection based on unmanned aerial vehicle. Artif. Intell. Rev. 2023, 56, 173–201. [Google Scholar] [CrossRef]
  3. Jiaqing, Z.; Yu, H.; Xin, Q.; Tai, Z. A review on fire research of electric power grids of China: State-of-the-art and new insights. Fire Technol. 2024, 60, 1027–1076. [Google Scholar] [CrossRef]
  4. Wu, H.; Wang, J.; Nan, D.; Cui, Q.; Ou, J. Transmission line fault cause identification method based on transient waveform image and MCNN-LSTM. Measurement 2023, 220, 113422. [Google Scholar] [CrossRef]
  5. Bhamra, J.K.; Anantha Ramaprasad, S.; Baldota, S.; Luna, S.; Zen, E.; Ramachandra, R.; Kim, H.; Schmidt, C.; Arends, C.; Block, J.; et al. Multimodal Wildland Fire Smoke Detection. Remote Sens. 2023, 15, 2790. [Google Scholar] [CrossRef]
  6. Zheng, Y.; Zhang, G.; Tan, S.; Yang, Z.; Wen, D.; Xiao, H. A forest fire smoke detection model combining convolutional neural network and vision transformer. Front. For. Glob. Chang. 2023, 6, 1136969. [Google Scholar] [CrossRef]
  7. Khan, R.A.; Hussain, A.; Bajwa, U.I.; Raza, R.H.; Anwar, M.W. Fire and smoke detection using capsule network. Fire Technol. 2023, 59, 581–594. [Google Scholar] [CrossRef]
  8. Sun, Y.; Feng, J. Fire and smoke precise detection method based on the attention mechanism and anchor-free mechanism. Complex Intell. Syst. 2023, 9, 5185–5198. [Google Scholar] [CrossRef]
  9. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  10. Li, R.; Hu, Y.; Li, L.; Guan, R.; Yang, R.; Zhan, J.; Cai, W. SMWE-GFPNNet: A high-precision and robust method for forest fire smoke detection. Knowl.-Based Syst. 2024, 289, 111528. [Google Scholar] [CrossRef]
  11. Sun, X.; Sun, L.; Huang, Y. Forest fire smoke recognition based on convolutional neural network. J. For. Res. 2021, 32, 1921–1927. [Google Scholar] [CrossRef]
  12. Chen, X.; An, Q.; Yu, K.; Ban, Y. A novel fire identification algorithm based on improved color segmentation and enhanced feature data. IEEE Trans. Instrum. Meas. 2021, 70, 1–15. [Google Scholar] [CrossRef]
  13. Buriboev, A.S.; Rakhmanov, K.; Soqiyev, T.; Choi, A.J. Improving Fire Detection Accuracy through Enhanced Convolutional Neural Networks and Contour Techniques. Sensors 2024, 24, 5184. [Google Scholar] [CrossRef]
  14. Zhao, Z.; Cui, G.; Li, D. Video-based smoke detection by using motion, color, and texture features. In Proceedings of the Third International Symposium on Computer Engineering and Intelligent Communications (ISCEIC 2022); SPIE: 2023; Volume 12462, pp. 164–171. [Google Scholar]
  15. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2019, 93, 337–349. [Google Scholar] [CrossRef]
  16. Sudhakar, S.; Vijayakumar, V.; Kumar, C.S.; Proya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput. Commun. 2020, 149, 1–16. [Google Scholar] [CrossRef]
  17. Dalal, S.; Lilhore, U.K.; Radulescu, M.; Simaiya, S.; Jaglan, V.; Sharma, A. A hybrid LBP-CNN with YOLO-v5-based fire and smoke detection model in various environmental conditions for environmental sustainability in smart city. Environ. Sci. Pollut. Res. 2024, 1–18. [Google Scholar] [CrossRef] [PubMed]
  18. Heikkilä, M.; Pietikäinen, M.; Schmid, C. Description of interest regions with local binary patterns. Pattern Recognit. 2009, 42, 425–436. [Google Scholar] [CrossRef]
  19. Alamgir, N.; Nguyen, K.; Chandran, V.; Boles, W. Combining multi-channel color space with local binary co-occurrence feature descriptors for accurate smoke detection from surveillance videos. Fire Saf. J. 2018, 102, 1–10. [Google Scholar] [CrossRef]
  20. Hearst, M.A.; Dumais, S.T.; Osuna, E. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  21. Gong, X.; Hu, H.; Wu, Z.; He, L.; Yang, L.; Li, F. Dark-channel based attention and classifier retraining for smoke detection in foggy environments. Digit. Signal Process. 2022, 123, 103454. [Google Scholar] [CrossRef]
  22. Khan, Z.A.; Hussain, T.; Ullah, F.U.M.; Gupta, S.K.; Lee, M.Y.; Baik, S.W. Randomly initialized CNN with densely connected stacked autoencoder for efficient fire detection. Eng. Appl. Artif. Intell. 2022, 116, 105403. [Google Scholar] [CrossRef]
  23. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR 2019, 97, 6105–6114. [Google Scholar]
  24. Hu, X.; Jiang, F.; Qin, X.; Huang, S.; Yang, X.; Meng, F. An optimized smoke segmentation method for forest and grassland fire based on the UNet framework. Fire 2024, 7, 68. [Google Scholar] [CrossRef]
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer International Publishing: New York, NY, USA, 2015; pp. 234–241. [Google Scholar]
  26. Yuan, F.; Wang, G.; Huang, Q.; Li, X. A Newton Interpolation Network for Smoke Semantic Segmentation. Pattern Recognit. 2024, 159, 111119. [Google Scholar] [CrossRef]
  27. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  28. Girshick, R. Fast r-cnn. arXiv 2015, arXiv:1504.08083. [Google Scholar]
  29. Cheknane, M.; Bendouma, T.; Boudouh, S.S. Advancing fire detection: Two-stage deep learning with hybrid feature extraction using faster R-CNN approach. Signal Image Video Process. 2024, 18, 5503–5510. [Google Scholar] [CrossRef]
  30. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, L.; Wang, M.; Ding, Y.; Bu, X. MS-FRCNN: A multi-scale faster RCNN model for small target forest fire detection. Forests 2023, 14, 616. [Google Scholar] [CrossRef]
  32. Redmon, J. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  33. Redmon, J. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  34. Yang, W.; Yang, Z.; Wu, M.; Zhang, G.; Zhu, Y.; Sun, Y. SIMCB-Yolo: An Efficient Multi-Scale Network for Detecting Forest Fire Smoke. Forests 2024, 15, 1137. [Google Scholar] [CrossRef]
  35. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  36. Yuan, N.; Ding, H.; Guo, P.; Wang, G.; Hu, P.; Zhao, H.; Wang, H.; Xu, Q. FS-YOLO: Real-time Fire and Smoke Detection based on Improved Object Detection Algorithms. J. Imaging Sci. Technol. 2024, 68, 030402. [Google Scholar] [CrossRef]
  37. Huang, X.; Xie, W.; Zhang, Q.; Lan, Y.; Heng, H.; Xiong, J. A Lightweight Wildfire Detection Method for Transmission Line Perimeters. Electronics 2024, 13, 3170. [Google Scholar] [CrossRef]
  38. Alkhammash, E.H. A Comparative Analysis of YOLOv9, YOLOv10, YOLOv11 for Smoke and Fire Detection. Fire 2025, 8, 26. [Google Scholar] [CrossRef]
  39. Mamadaliev, D.; Touko, P.L.M.; Kim, J.-H.; Kim, S.-C. ESFD-YOLOv8n: Early Smoke and Fire Detection Method Based on an Improved YOLOv8n Model. Fire 2024, 7, 303. [Google Scholar] [CrossRef]
  40. Muksimova, S.; Umirzakova, S.; Mardieva, S.; Abdullaev, M.; Cho, Y.I. Revolutionizing Wildfire Detection Through UAV-Driven Fire Monitoring with a Transformer-Based Approach. Fire 2024, 7, 443. [Google Scholar] [CrossRef]
  41. Sun, B.; Cheng, X. Smoke Detection Transformer: An Improved Real-Time Detection Transformer Smoke Detection Model for Early Fire Warning. Fire 2024, 7, 488. [Google Scholar] [CrossRef]
  42. Wang, D.; Qian, Y.; Lu, J.; Wang, P.; Hu, Z.; Chai, Y. Fs-yolo: Fire-smoke detection based on improved YOLOv7. Multimed. Syst. 2024, 30, 215. [Google Scholar] [CrossRef]
  43. Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to upsample by learning to sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 6027–6037. [Google Scholar]
  44. Zhang, H.; Zhang, S. Shape-iou: More accurate metric considering bounding box shape and scale. arXiv 2023, arXiv:2312.17663. [Google Scholar]
  45. Yu, Z.; Huang, H.; Chen, W.; Su, Y.; Liu, Y.; Wang, X. Yolo-facev2: A scale and occlusion aware face detector. Pattern Recognit. 2024, 155, 110714. [Google Scholar] [CrossRef]
  46. Chen, Y.; Dai, X.; Liu, M.; Chen, D.; Yuan, L.; Liu, Z. Dynamic convolution: Attention over convolution kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11030–11039. [Google Scholar]
  47. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
  48. Shamsoshoara, A.; Afghah, F.; Razi, A.; Zheng, L.; Fule, P.Z.; Blasch, E. Aerial imagery pile burn detection using deep learning: The FLAME dataset. Comput. Netw. 2021, 193, 108001. [Google Scholar] [CrossRef]
  49. Wang, G.; Li, H.; Li, P.; Lang, X.; Feng, Y.; Ding, Z.; Xie, S. M4SFWD: A Multi-Faceted synthetic dataset for remote sensing forest wildfires detection. Expert Syst. Appl. 2024, 248, 123489. [Google Scholar] [CrossRef]
  50. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. Detrs beat yolos on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974. [Google Scholar]
Figure 1. Schematic diagram of the C2f module.
Figure 2. Schematic diagram of dynamic convolution.
Figure 3. Schematic diagram of the Ghost Module.
Figure 4. Schematic diagram of the DLCM.
Figure 5. Diagram of LDENet.
Figure 6. Experimental dataset.
Figure 7. Experimental results for upsampling parameters.
Figure 8. Comparison of typical algorithms. (a) Original image; (b) YOLOv8; (c) YOLO11; (d) LDENet.
Figure 9. Comparison of typical algorithms. (a) Original image; (b) YOLOv8; (c) YOLO11; (d) LDENet.
Figure 10. Comparison of heatmaps. (a) Original image; (b) YOLOv8; (c) LDENet.
Table 1. Ablation experiment results.

| YOLOv8s | DLCM | DySample | Loss | P/% | R/% | F1/% | mAP50/% | Parameters/M | FLOPs/G |
|---------|------|----------|------|-----|-----|------|---------|--------------|---------|
| ✓ |   |   |   | 84.9 | 79.5 | 81.9 | 85.1 | 11.12 | 28.4 |
| ✓ | ✓ |   |   | 85.1 | 80.8 | 82.9 | 85.8 | 7.79  | 19.0 |
| ✓ |   | ✓ |   | 84.5 | 80.1 | 82.3 | 85.3 | 11.50 | 28.4 |
| ✓ |   |   | ✓ | 86.7 | 79.5 | 83.0 | 85.4 | 11.12 | 28.5 |
| ✓ | ✓ | ✓ |   | 86.9 | 79.9 | 83.3 | 86.0 | 7.82  | 19.1 |
| ✓ |   | ✓ | ✓ | 85.6 | 81.2 | 83.2 | 86.1 | 11.50 | 28.5 |
| ✓ | ✓ |   | ✓ | 86.3 | 80.4 | 83.3 | 86.1 | 7.79  | 19.0 |
| ✓ | ✓ | ✓ | ✓ | 85.9 | 81.2 | 83.5 | 86.6 | 7.82  | 19.1 |
Table 2. Comparison of results with other algorithms.

| Method   | P/%  | R/%  | F1/% | mAP50/% | Parameters/M | FLOPs/G |
|----------|------|------|------|---------|--------------|---------|
| YOLOv5s  | 81.9 | 80.9 | 81.4 | 85.0    | 7.81         | 16.3    |
| YOLOv8s  | 84.5 | 82.1 | 83.3 | 85.2    | 11.12        | 28.4    |
| YOLOv9s  | 86.8 | 80.9 | 82.8 | 85.6    | 7.16         | 26.7    |
| YOLOv10s | 82.4 | 80.3 | 81.4 | 83.8    | 8.04         | 24.4    |
| YOLO11s  | 86.8 | 80.6 | 82.7 | 85.7    | 9.41         | 21.3    |
| RT-DETR  | 81.5 | 76.6 | 79.2 | 82.2    | 31.98        | 103.4   |
| LDENet   | 85.9 | 81.2 | 83.5 | 86.6    | 7.82         | 19.1    |