Article

Intelligent Firefighting Technology for Drone Swarms with Multi-Sensor Integrated Path Planning: YOLOv8 Algorithm-Driven Fire Source Identification and Precision Deployment Strategy

1 College of Air Traffic Management, Civil Aviation University of China, Tianjin 300300, China
2 Office of Cybersecurity and Information Technology, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(5), 348; https://doi.org/10.3390/drones9050348
Submission received: 8 February 2025 / Revised: 10 March 2025 / Accepted: 19 March 2025 / Published: 3 May 2025

Abstract

This study aims to improve the accuracy of fire source detection, the efficiency of path planning, and the precision of firefighting operations in drone swarms during fire emergencies. It proposes an intelligent firefighting technology for drone swarms based on multi-sensor integrated path planning. The technology integrates the You Only Look Once version 8 (YOLOv8) algorithm and its optimization strategies to enhance real-time fire source detection capabilities. Additionally, this study employs multi-sensor data fusion and swarm cooperative path-planning techniques to optimize the deployment of firefighting materials and flight paths, thereby improving firefighting efficiency and precision. First, a deformable convolution module is introduced into the backbone network of YOLOv8 to enable the detection network to flexibly adjust its receptive field when processing targets, thereby enhancing fire source detection accuracy. Second, an attention mechanism is incorporated into the neck portion of YOLOv8, which focuses on fire source feature regions, significantly reducing interference from background noise and further improving recognition accuracy in complex environments. Finally, a new High Intersection over Union (HIoU) loss function is proposed to address the challenge of computing localization and classification loss for targets. This function dynamically adjusts the weight of various loss components during training, achieving more precise fire source localization and classification. In terms of path planning, this study integrates data from visual sensors, infrared sensors, and LiDAR sensors and adopts the Information Acquisition Optimizer (IAO) and the Catch Fish Optimization Algorithm (CFOA) to plan paths and optimize coordinated flight for drone swarms. By dynamically adjusting path planning and deployment locations, the drone swarm can reach fire sources in the shortest possible time and carry out precise firefighting operations. Experimental results demonstrate that this study significantly improves fire source detection accuracy and firefighting efficiency by optimizing the YOLOv8 algorithm, path-planning algorithms, and cooperative flight strategies. The optimized YOLOv8 achieved a fire source detection accuracy of 94.6% for small fires, with a false detection rate reduced to 5.4%. The wind speed compensation strategy effectively mitigated the impact of wind on the accuracy of material deployment. This study not only enhances the firefighting efficiency of drone swarms but also enables rapid response in complex fire scenarios, offering broad application prospects, particularly for urban firefighting and forest fire disaster rescue.

1. Introduction

With the accelerated urbanization of modern society, the frequency of fire incidents has been increasing year by year, particularly in large-scale building complexes, forests, and hard-to-reach disaster areas. The limitations of traditional firefighting methods have become increasingly evident [1,2]. In responding to sudden fires, quickly and accurately locating fire sources and taking effective firefighting measures are crucial for reducing casualties, property losses, and environmental damage [3]. However, traditional manual firefighting and fixed equipment-based methods often fail to meet the demands for efficiency, precision, and real-time response in complex and dynamic fire scenarios. Identifying and locating fire sources become particularly difficult, especially in environments characterized by high temperatures, dense smoke, obstacles, and challenging terrain, negatively impacting firefighting efficiency and precision [4,5,6]. Consequently, there is an urgent need for a more flexible, efficient, and adaptable emergency fire response solution.
In recent years, drone swarm technology has achieved significant progress across various fields, particularly in disaster relief and environmental monitoring. Drone swarms offer advantages such as high flexibility, rapid response, and extensive coverage. Through collaborative operations, they can quickly form a synergistic force to complete tasks [7]. Applying drone swarms to fire emergency responses not only overcomes the limitations of traditional firefighting methods but also leverages drones’ aerial advantages to rapidly acquire real-time fire scene data and perform efficient firefighting operations [8,9]. The potential of drone swarms in firefighting, particularly in fire source identification, path planning, and precision deployment of firefighting materials, has become a research hotspot [10]. However, existing drone swarm firefighting systems still face numerous technical challenges, particularly in fire source identification accuracy, path planning efficiency, and material deployment precision [11]. Addressing these challenges through innovative technologies to improve the performance of drone swarms in fire emergency response is a critical problem that needs to be solved.
This study aims to enhance fire source detection accuracy, path planning efficiency, and firefighting precision in drone swarms by proposing an intelligent firefighting technology based on multi-sensor integrated path planning. To meet this challenge, the You Only Look Once version 8 (YOLOv8) algorithm is improved to optimize fire source detection capabilities, enabling drones to quickly and accurately identify fire sources in complex environments. These improvements focus on three key aspects: 1. A deformable convolution module is introduced into YOLOv8’s backbone network, allowing the model to flexibly adjust its receptive field based on the morphology of different targets, thereby effectively enhancing fire source detection accuracy. 2. An attention mechanism is added to YOLOv8’s neck section. This mechanism allows the model to focus on the feature regions of fire sources while processing complex environments, significantly reducing interference from background noise and improving fire source recognition rates. 3. A new High Intersection over Union (HIoU) loss function is proposed. This loss function dynamically adjusts the weights of various loss components during training, resulting in more accurate fire source localization and classification. These improvements significantly enhance the YOLOv8 algorithm’s fire source detection accuracy in complex fire scenarios, laying a solid foundation for subsequent firefighting operations. In terms of path planning, this study integrates data from multiple sensors (visual, infrared, and LiDAR) and combines the Information Acquisition Optimizer (IAO) and Catch Fish Optimization Algorithm (CFOA) to optimize the cooperative path planning and flight of drone swarms. By fusing multi-sensor data, the system can obtain more comprehensive and accurate information in complex and dynamic fire environments in real-time, optimizing drone flight paths and improving path planning efficiency. This multi-sensor fusion path planning technology not only effectively avoids obstacles and reduces collision risks but also enables cooperative flight of drone swarms in challenging terrains, enhancing firefighting efficiency and precision. With the application of optimization algorithms, real-time dynamic optimization is achieved, ensuring that drones can reach fire source areas via the shortest and most efficient flight paths, thereby enabling precise deployment of firefighting materials.

2. Literature Review

In recent years, the application of drone swarms in fire emergency response has gradually become a research hotspot, particularly in areas such as fire source identification, path planning, and firefighting precision. Numerous scholars have proposed various solutions using different technological approaches. In the field of fire source identification, the You Only Look Once (YOLO) series of algorithms has been widely applied in drone image processing systems due to the algorithms’ efficient object detection capabilities. In recent years, with the optimization of versions such as You Only Look Once version 4 (YOLOv4) and You Only Look Once version 5 (YOLOv5), many researchers have implemented targeted improvements to enhance fire source identification accuracy in fire scenarios [12,13]. Zheng et al. utilized the YOLOv4 algorithm to analyze fire images and proposed an optimization strategy combining Convolutional Neural Network (CNN) and residual modules, significantly improving the accuracy of fire source detection [14]. However, this method still has room for improvement in detection accuracy under complex backgrounds, particularly in effectively identifying fire sources in high-temperature smoke and challenging scenes.
To address the interference of background noise, Lv et al. introduced an attention mechanism into YOLOv5, leveraging self-attention modules to enhance the extraction of fire source features [15]. This method improves detection robustness by strengthening feature responses in fire source regions and reducing the influence of background information on model predictions. Although this approach performs well in simple fire scenarios, it still faces risks of missed and incorrect detections in complex and variable fire environments. To tackle these challenges, Narkhede et al. proposed a multimodal sensor fusion method for fire source identification, combining visual sensors with infrared sensors to effectively address the challenges of identifying fire sources under varying lighting and weather conditions [16]. Experimental results showed significant improvement in identification performance during nighttime and in low-visibility environments. However, the complexity of sensor hardware and the real-time processing of data remain challenges for future research.
In the field of path planning, scholars have proposed various optimization algorithms to enhance the efficiency and stability of drone swarm path planning. Saeed et al. employed the Genetic Algorithm (GA) to optimize the paths of drone swarms, significantly improving the efficiency of multi-drone task execution [17]. Their experiment achieved promising results in certain static scenarios by conducting global searches for drone swarm paths. However, due to the highly dynamic nature of fire scenes, such as wind speed changes and uncertain obstacles, the traditional GA faces limitations in handling these dynamic factors. To address this, Zhu et al. proposed a deep reinforcement learning-based path-planning method for drone swarms [18]. By introducing an agent–environment interaction mechanism, this method can optimize flight paths in real-time within fire scene environments, enhancing the adaptability and flexibility of drone swarms. However, the computational time required for training reinforcement learning models remains lengthy, and for more complex environments, real-time performance and computational efficiency still fall short of practical requirements. Additionally, Lin et al. studied the application of multi-sensor data fusion technology in drone swarms and proposed a particle filter-based path-planning method [19]. By integrating data from visual, infrared, and LiDAR sensors, this method accurately acquires drone positions and fire source information in dynamic environments and adjusts flight paths in real-time. While this study successfully validated the importance of multi-sensor fusion in fire scenarios, challenges related to computational complexity and coordination among large-scale drone swarms remain significant in collaborative operations.
Regarding firefighting strategies, researchers have explored the precision of material delivery by drone swarms during fire suppression operations. Wang et al. discussed the use of blockchain technology for risk prediction and credibility assessment in online public opinion networks. The core concept involved risk evaluation and decision-making under conditions of incomplete or unreliable information [20]. Mugnai et al. proposed a fire suppression material delivery strategy based on drone path planning. By combining fire source location and wind speed data, they determined the optimal timing and location for material delivery using optimization algorithms [21]. However, this method still faces challenges such as delivery inaccuracies and resource waste when confronted with complex wind variations and terrain factors. To address these issues, Li et al. investigated a multi-objective optimization-based strategy for the precise delivery of firefighting materials. By integrating this strategy with drone flight path planning, they achieved improvements in firefighting efficiency [22]. While this approach demonstrated preliminary success, it remains dependent on the accuracy of sensor data and real-time processing capabilities, presenting challenges in data fusion and computational resource demands.
Although existing studies provide technological support for drone swarm applications in fire emergency response, challenges remain in fire source identification accuracy, path planning efficiency, and firefighting precision. The precision material delivery strategy proposed in this study, combined with dynamic path planning and flight optimization, effectively reduces resource wastage and improves firefighting accuracy. These innovations address the limitations of existing technologies in complex fire scenarios, providing a more efficient and precise intelligent firefighting solution using drone swarms. Several recent works illustrate the state of the art. For example, Shahid et al. proposed a fire dynamics recognition model based on the Transformer architecture, optimizing feature extraction efficiency in smoke-obstructed scenarios through a cross-scale attention mechanism [23]. Their performance enhancement strategy for small object detection provided theoretical guidance for incorporating deformable convolutions in this study. Venturini et al. developed a reinforcement learning-based path-planning framework that employed a distributed Q-learning algorithm to enable cooperative obstacle avoidance for drone swarms in dynamic environments [24]. Its environment-adaptive mechanism offers a methodological contrast to the group collaboration optimization in the CFOA algorithm proposed in this study, highlighting the complementarity of different technical approaches. Soderlund et al. designed a multimodal sensor fusion system for forest fire scenarios, integrating Kalman filtering and Bayesian networks to achieve real-time alignment and reliability assessment of infrared and LiDAR data [25]. Their heterogeneous data fusion paradigm provides practical engineering support for designing the multi-sensor IAO in this study. Collectively, these research efforts indicate a global trend toward algorithmic diversification and scenario-specific refinement in unmanned aerial vehicle (UAV)-based fire emergency technologies. This study proposes a YOLOv8-based improvement and a path-planning strategy that incorporates common insights from international research, including dynamic feature extraction and swarm intelligence optimization. Additionally, this study introduces localized innovations to address the unique challenges of high-density urban fires and complex mountainous forest fires in Asia, resulting in a technical framework that balances universality with regional adaptability.

3. Research Methodology

3.1. Fire Source Identification Method

YOLOv8 is a CNN-based object detection model that employs a single-stage detection approach, enabling simultaneous object detection and localization on input images [26,27,28]. Unlike traditional detection methods, YOLOv8 models the task as a regression problem. It divides the image into multiple grids and performs object classification and position regression within each grid, achieving efficient and accurate detection [29,30]. The network structure of YOLOv8 is illustrated in Figure 1.
The YOLOv8 network structure consists of three main components: the Backbone, the Neck, and the Head. The Backbone is responsible for extracting features from the input image, the Neck integrates multi-scale information, and the Head performs final target classification and position regression. The Spatial Pyramid Pooling Feature (SPPF) module is designed for object detection, enabling the extraction of information at various resolutions from the feature map [31,32]. The classification loss calculation for YOLOv8 is shown in Equation (1):
$$\mathrm{VFL}(p,q) = \begin{cases} -q\left(q \log p + (1-q)\log(1-p)\right), & q > 0 \\ -\alpha p^{r} \log(1-p), & q = 0 \end{cases} \quad (1)$$
In Equation (1), $p$ represents the predicted class score and $q \in [0,1]$ denotes the object score. For negative samples (i.e., no target present), the loss contribution is reduced through weight adjustments: the parameters $r$ and $\alpha$ control the impact of negative samples on the loss, minimizing interference from these samples.
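To make Equation (1) concrete, the following is a minimal PyTorch sketch of this varifocal-style classification loss. The tensor shapes and the default values of $\alpha$ and $r$ are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of the classification loss in Equation (1).
# alpha and r defaults are assumptions for illustration only.
import torch

def varifocal_loss(p, q, alpha=0.75, r=2.0):
    """p: predicted class scores in (0, 1); q: target object scores in [0, 1]."""
    p = p.clamp(1e-6, 1 - 1e-6)               # numerical stability for log()
    loss = torch.zeros_like(p)
    pos = q > 0                                # positive samples (target present)
    # Positive samples: binary cross-entropy weighted by the target score q.
    loss[pos] = -q[pos] * (q[pos] * torch.log(p[pos])
                           + (1 - q[pos]) * torch.log(1 - p[pos]))
    # Negative samples: down-weighted by alpha * p^r to reduce their influence.
    neg = ~pos
    loss[neg] = -alpha * p[neg].pow(r) * torch.log(1 - p[neg])
    return loss.mean()
```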
To enhance YOLOv8’s performance in fire source identification, three critical improvements were introduced in this study: incorporation of the deformable convolution module, integration of the attention mechanism in the Neck, and development of an efficient IoU loss function.
Traditional convolution operations use a fixed receptive field, which may lead to performance degradation in scenes with complex backgrounds or objects of varying scales. To address this issue, this study incorporates deformable convolutions into the backbone network of YOLOv8, enabling the detection network to dynamically adjust its receptive field when processing objects. Specifically, deformable convolutions introduce spatial offsets in convolution operations, allowing the network to adaptively focus on target regions. This improves fire source detection accuracy, particularly when dealing with targets of inconsistent scales or significant occlusion, thereby enhancing overall detection performance.
Additionally, the Neck section of YOLOv8 is enhanced with the Convolutional Block Attention Module (CBAM), which integrates channel attention and spatial attention submodules. The channel attention mechanism automatically adjusts the importance of different channels, ensuring that the feature extraction process prioritizes the most relevant feature regions. Meanwhile, the spatial attention mechanism strengthens the network’s focus on fire source features while suppressing background noise interference. In fire detection scenarios, factors such as smoke and cluttered backgrounds often reduce detection accuracy. The introduction of attention mechanisms effectively mitigates background noise interference, further improving fire source recognition precision, especially in complex environments.
In the original YOLOv8 implementation, the Intersection over Union (IoU) loss function is used to compute localization and classification errors. However, standard IoU loss may result in inaccurate object localization when handling bounding boxes with minimal overlap or targets of varying shapes. To address this limitation, this study proposes a high-IoU loss function, which dynamically adjusts the weight of different loss components and incorporates target center-point error calculations. This enhances fire source localization accuracy. By considering differences in target center points and regressing aspect ratios, the high-IoU loss function improves object localization precision, particularly for small or irregularly shaped fire sources, yielding more accurate regression results.
These three enhancements (deformable convolutions, attention mechanisms, and the high-IoU loss function) work synergistically to improve YOLOv8’s performance in fire source detection. Specifically, deformable convolutions enhance model flexibility and adaptability, attention mechanisms help the model focus more effectively on fire source regions, and the high-IoU loss function optimizes localization accuracy. The calculation for the deformable convolution is presented in Equation (2):
$$y(p_0) = \sum_{p_n \in R} w(p_n) \cdot x(p_0 + p_n + \Delta p_n) \quad (2)$$
Here, $R = \{p_n \mid n = 1, 2, \ldots, N\}$. $y(p_0)$ represents the value of the output feature map at position $p_0$, and $x(p_0 + p_n + \Delta p_n)$ represents the feature value of the input image at the adjusted position $p_0 + p_n + \Delta p_n$. $\Delta p_n$ is the displacement introduced by the deformable convolution, which is used to adjust the receptive field. $w(p_n)$ corresponds to the weight of the convolution kernel. $R$ is the receptive field range, which defines the region of the convolution operation on the input image.
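As a reference point, the deformable convolution of Equation (2) can be sketched with torchvision's deform_conv2d, where a plain convolution predicts the offsets $\Delta p_n$. The channel counts and kernel size here are illustrative assumptions.

```python
# A minimal sketch of Equation (2) using torchvision's deform_conv2d.
# Channel counts and kernel size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableConv(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, k=3):
        super().__init__()
        # A plain conv predicts the offsets (Delta p_n): 2 values (dx, dy)
        # per kernel location, for each output pixel.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k = k

    def forward(self, x):
        offset = self.offset_pred(x)           # learned receptive-field shifts
        return deform_conv2d(x, offset, self.weight, padding=self.k // 2)

y = DeformableConv()(torch.randn(1, 64, 32, 32))   # -> shape (1, 64, 32, 32)
```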
The attention mechanism employs a mixed-domain approach. The CBAM consists of channel attention and spatial attention submodules.
$$M_c(F) = \sigma\big(MLP(AvgPool(F)) + MLP(MaxPool(F))\big) = \sigma\big(W_1(W_0(F^c_{avg})) + W_1(W_0(F^c_{max}))\big) \quad (3)$$
In Equation (3), $F$ is the input feature map, $F^c_{avg}$ and $F^c_{max}$ are the average and maximum pooling feature maps across the channel dimension, $M_c$ is the channel attention map, $W_0$ and $W_1$ are linear transformation matrices, and $\sigma$ represents the activation function [33].
The calculation of the spatial attention module is shown in Equation (4):
$$M_s(F) = \sigma\big(f^{7 \times 7}([AvgPool(F); MaxPool(F)])\big) = \sigma\big(f^{7 \times 7}([F^s_{avg}; F^s_{max}])\big) \quad (4)$$
In Equation (4), $F^s_{avg}$ and $F^s_{max}$ are the spatial average and maximum pooling feature maps, $f^{7 \times 7}$ represents a 7 × 7 convolution layer, and $M_s$ is the spatial attention map [34].
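The two attention maps of Equations (3) and (4) compose into the CBAM block applied in the Neck. Below is a compact PyTorch sketch; the channel count and reduction ratio are illustrative assumptions.

```python
# A compact sketch of CBAM (Equations (3)-(4)).
# Channel count and reduction ratio are illustrative assumptions.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        # Shared MLP (W1, W0) applied to avg- and max-pooled channel vectors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # 7x7 conv over concatenated spatial avg/max maps (Equation (4)).
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f):
        # Channel attention: sigma(MLP(AvgPool(F)) + MLP(MaxPool(F)))
        mc = torch.sigmoid(self.mlp(f.mean((2, 3), keepdim=True))
                           + self.mlp(f.amax((2, 3), keepdim=True)))
        f = f * mc
        # Spatial attention: sigma(f7x7([AvgPool(F); MaxPool(F)]))
        ms = torch.sigmoid(self.spatial(torch.cat(
            [f.mean(1, keepdim=True), f.amax(1, keepdim=True)], dim=1)))
        return f * ms

out = CBAM()(torch.randn(1, 256, 40, 40))   # -> shape (1, 256, 40, 40)
```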
IoU measures the ratio of the intersection area to the union area between the predicted and ground truth bounding boxes [35]. The IoU calculation is shown in Equation (5):
$$L_{IoU} = 1 - IoU = 1 - \frac{W_i H_i}{S_u} \quad (5)$$
$W_i$ and $H_i$ are the width and height of the overlapping region, and $S_u$ is the area of the union between the predicted and ground truth boxes. When $W_i = 0$ or $H_i = 0$, $W_i$ cannot be updated. To address this limitation, this study introduced the HIoU loss function, as shown in Equation (6):
$$L_{HIoU} = R_{WIoU} \times L_{IoU} + \frac{1}{2}\left(\frac{(x - x^{gt})^2 + (y - y^{gt})^2}{W_g^2 + H_g^2} + \alpha v\right) \quad (6)$$
In Equation (6), $L_{IoU}$ represents the IoU loss. $R_{WIoU}$ is a dynamic adjustment coefficient controlling the penalty for IoU loss. $(x - x^{gt})^2 + (y - y^{gt})^2$ measures the center point difference between the predicted and ground truth boxes. $W_g^2 + H_g^2$ standardizes the center point error. $\alpha$ is a hyperparameter balancing center point and aspect ratio regression weights. $v$ is the aspect ratio regression loss term [36,37]. These improvements optimize YOLOv8’s fire source identification capability. The updated YOLOv8 network structure is shown in Figure 2.
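Since the text does not spell out the exact form of $R_{WIoU}$, the sketch below implements Equation (6) with an assumed exponential weighting for the dynamic coefficient and the usual CIoU-style aspect-ratio term for $v$; it is a hedged illustration, not the authors' exact implementation.

```python
# A hedged NumPy sketch of the HIoU loss in Equation (6). The exponential
# form of R_WIoU and the CIoU-style v term are our assumptions.
import numpy as np

def hiou_loss(pred, gt, alpha=1.0):
    """pred, gt: boxes as (cx, cy, w, h)."""
    # Intersection width/height and IoU loss (Equation (5))
    x1 = max(pred[0] - pred[2] / 2, gt[0] - gt[2] / 2)
    y1 = max(pred[1] - pred[3] / 2, gt[1] - gt[3] / 2)
    x2 = min(pred[0] + pred[2] / 2, gt[0] + gt[2] / 2)
    y2 = min(pred[1] + pred[3] / 2, gt[1] + gt[3] / 2)
    wi, hi = max(0.0, x2 - x1), max(0.0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - wi * hi
    l_iou = 1.0 - wi * hi / union

    # Center-point error, normalized by the enclosing box (Wg^2 + Hg^2)
    wg = (max(pred[0] + pred[2] / 2, gt[0] + gt[2] / 2)
          - min(pred[0] - pred[2] / 2, gt[0] - gt[2] / 2))
    hg = (max(pred[1] + pred[3] / 2, gt[1] + gt[3] / 2)
          - min(pred[1] - pred[3] / 2, gt[1] - gt[3] / 2))
    center = ((pred[0] - gt[0])**2 + (pred[1] - gt[1])**2) / (wg**2 + hg**2)

    # Aspect-ratio regression term v (CIoU-style, an assumption here)
    v = 4 / np.pi**2 * (np.arctan(gt[2] / gt[3])
                        - np.arctan(pred[2] / pred[3]))**2

    r_wiou = np.exp(center)          # assumed dynamic penalty coefficient
    return r_wiou * l_iou + 0.5 * (center + alpha * v)
```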
In Figure 2, the Convolutional Blocks with Strides (CBS) module in YOLOv8 represents an efficient convolutional structure designed to balance computational cost and information utilization in the self-attention mechanism.

3.2. Multi-Sensor Data Fusion

The application of UAV swarms in fire emergency responses—particularly in fire source identification, path planning, and firefighting strategy execution—is heavily influenced by various environmental factors such as lighting, smoke, wind speed, and terrain. In such dynamic and complex environments, data from a single sensor (e.g., visual or infrared sensors) may struggle to provide comprehensive and accurate environmental information. Therefore, multi-sensor data fusion techniques, which integrate data from multiple sensors, can overcome the limitations of individual sensors and deliver more thorough and accurate environmental perception capabilities. This study proposes a multi-sensor data fusion method that combines data from visual sensors, infrared sensors, and Light Detection and Ranging (LiDAR) to enhance the navigation and task execution capabilities of UAV swarms in fire scenarios [38,39].
The objective of multi-sensor data fusion is to merge heterogeneous information from different sensors to improve data accuracy and reliability. The framework for multi-sensor data fusion is shown in Figure 3.
To achieve efficient fire source detection and path planning, this study employed multiple sensors for data acquisition. Specifically, the experimental setup included visual sensors, infrared sensors, and LiDAR sensors, each serving distinct purposes in different environmental conditions. Visual sensors primarily capture image data of fire sources, while infrared sensors detect heat sources, providing clearer fire location information, especially in heavy smoke or nighttime conditions. LiDAR sensors are used for precise terrain and obstacle mapping, supplying essential three-dimensional spatial data to support UAV path planning. These sensors were tightly integrated with the UAV control system, enabling real-time data transmission and processing. The acquired data were transmitted via a wireless communication system to a ground control center or flight control computer for further processing.
The data processing workflow incorporated multi-sensor data fusion techniques to integrate information from visual, infrared, and LiDAR sensors. Specifically, fire source detection and localization were performed using image processing techniques, such as the YOLOv8 algorithm, applied to visual and infrared sensor data. Meanwhile, LiDAR data were processed using specialized algorithms for spatial modeling and path planning. To enhance the effectiveness of data fusion, this study introduced the IAO and the CFOA, which dynamically adjust fusion strategies based on sensor data characteristics and thereby help optimize UAV swarm flight paths.
For fire source detection, the processed data were fed into the YOLOv8 network for fire identification. The optimized YOLOv8 model efficiently detects fire sources from real-time image data and calculates precise fire locations based on target position and size. In terms of path planning, the fused environmental data were input into the IAO and CFOA algorithms. These algorithms worked together to generate the optimal flight path. This ensures that UAVs can quickly reach the fire source and carry out firefighting operations. Although the sensor configuration and data processing methods in this study significantly improve fire source detection and path planning accuracy, each sensor has inherent limitations. For instance, visual sensors are susceptible to performance degradation in heavy smoke or poor lighting conditions, leading to reduced detection accuracy. To mitigate this issue, infrared sensor data were incorporated, ensuring reliable fire detection even when visual sensors underperform. However, infrared sensors have limited capabilities in detecting small or distant fire sources, necessitating the combined use of multiple sensors to compensate for individual shortcomings. LiDAR sensors in this study were primarily used for environmental modeling and obstacle detection. Their high-precision three-dimensional spatial data provide a fundamental basis for path planning. However, a key limitation of LiDAR is its reduced effectiveness in detecting transparent objects (e.g., glass) and low-reflectivity surfaces, which can introduce errors in complex environments. Additionally, LiDAR sensors are costly and relatively heavy, requiring careful evaluation of UAV payload capacity during deployment to ensure system stability and flight endurance.
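As a simple illustration of how the three sensor streams can be combined, the sketch below fuses per-sensor fire-source position estimates with inverse-variance weights. The weighting scheme is our assumption; in the proposed system, weight selection is delegated to the IAO.

```python
# A minimal sketch of confidence-weighted fusion of fire-source position
# estimates. Inverse-variance weighting is our assumption.
import numpy as np

def fuse_estimates(estimates, variances):
    """estimates: list of (x, y, z) positions; variances: per-sensor variance."""
    w = 1.0 / np.asarray(variances)           # more reliable sensors weigh more
    w /= w.sum()
    return (w[:, None] * np.asarray(estimates)).sum(axis=0)

# Visual, infrared, and LiDAR-derived estimates of the same fire source:
fused = fuse_estimates(
    estimates=[(10.2, 5.1, 0.0), (10.6, 4.9, 0.1), (10.4, 5.0, 0.05)],
    variances=[0.9, 0.4, 0.2],                # e.g., vision degraded by smoke
)
print(fused)
```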
Information acquisition is a critical step in gathering useful data. To ensure the collection of comprehensive initial information, individuals used various methods to gather data from different sources, forming an initial information system. This process is represented in Equation (7):
$$x_i^{iter+1} = x_i^{iter} + v \times (x_i^{r_1} - x_i^{r_2}) \quad (7)$$
In Equation (7), $x_i^{iter}$ denotes the current position of the $i$-th individual at the $iter$-th iteration, and $v$ is the step size control parameter. $x_i^{r_1}$ and $x_i^{r_2}$ represent reference information gathered from other individuals or sensors. The filtering and evaluation of information in the IAO algorithm are described by Equation (8):
$$x_i^{iter+1} = \begin{cases} x_i^{iter} - \Delta \times rand \times \left|x_i^{rand} - x_i^{iter}\right|, & \text{if } rand < 0.5 \\ x_i^{iter} + \Delta \times rand \times \left|x_i^{rand} - x_i^{iter}\right|, & \text{otherwise} \end{cases} \quad (8)$$
In Equation (8), $\Delta$ is the step adjustment factor, $rand$ is a random number controlling the magnitude of adjustments, and $x_i^{rand}$ represents information obtained from randomly selected individuals. The analysis and organization of information in the IAO algorithm are shown in Equation (9):
$$x_i^{iter+1} = \begin{cases} x_i^{best} \times \cos\left(\frac{\pi}{2} \times \frac{1}{3}\right) - \phi \times \varepsilon \times \left(\frac{1}{D}\sum_{i=1}^{d} x_i^{best} - x_i^{best}\right), & \text{if } \phi \geq 0.5 \\ x_i^{best} \times \cos\left(\frac{\pi}{2} \times \frac{1}{3}\right) - 0.8 \times (1 - \phi) \times \left[\varsigma \times K \times \frac{1}{D}\sum_{i=1}^{d} x_i^{best} - 2 \times w_1 \times x_i^{best}\right], & \text{otherwise} \end{cases} \quad (9)$$
In Equation (9), $x_i^{best}$ is the optimal solution in the current iteration, $\phi$ is a metric related to the information, $\varepsilon$ is an adjustment factor, $D$ is the data dimension, and $\varsigma$ and $K$ are constants influencing the degree of adjustment. The pseudo-code for the IAO algorithm is summarized in Algorithm 1.
Algorithm 1: IAO Algorithm Pseudo-Code
1. Initialize the flight tasks and the positions of individuals $x_i$. Set the maximum number of iterations $T_{max}$ and the learning rate $v$.
2. For each individual, gather information and update the position: $x_i^{iter+1} = x_i^{iter} + v \times (x_i^{r_1} - x_i^{r_2})$.
3. Evaluate and filter the collected information. Adjust the position based on a random condition: if $rand < 0.5$, then $x_i^{iter+1} = x_i^{iter} - \Delta \times rand \times |x_i^{rand} - x_i^{iter}|$.
4. Perform information analysis and organization, adjusting positions using $x_i^{best}$: if $\phi \geq 0.5$, then $x_i^{iter+1} = x_i^{best} \times \cos\left(\frac{\pi}{2} \times \frac{1}{3}\right) - \phi \times \varepsilon \times \left(\frac{1}{D}\sum_{i=1}^{d} x_i^{best} - x_i^{best}\right)$.
5. Once the iteration is complete, output the optimal path and terminate the process.
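For readers who prefer executable form, the following Python sketch condenses Algorithm 1 into a population-based minimizer. The objective, bounds, and parameter values are illustrative assumptions, and Equation (9) is simplified to a pull toward the current best.

```python
# A runnable sketch of Algorithm 1 (IAO). Objective, bounds, and parameters
# are illustrative assumptions; Equation (9) is simplified here.
import numpy as np

def iao(cost, dim=2, pop=20, iters=100, lb=-10.0, ub=10.0, v=0.5, delta=1.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (pop, dim))                 # step 1: initialize
    for _ in range(iters):
        # Step 2: gather information from two random peers (Equation (7))
        r1, r2 = rng.integers(0, pop, (2, pop))
        x = x + v * (x[r1] - x[r2])
        # Step 3: filter/evaluate against a random individual (Equation (8))
        rnd = rng.integers(0, pop, pop)
        u = rng.random((pop, 1))
        step = delta * u * np.abs(x[rnd] - x)
        x = np.clip(np.where(u < 0.5, x - step, x + step), lb, ub)
        # Step 4: analyze/organize by pulling toward the current best
        best = x[np.argmin([cost(p) for p in x])]
        x = x + rng.random((pop, 1)) * (best - x)
    return min(x, key=cost)                             # step 5: output

print(iao(lambda p: np.sum(p**2)))                      # -> near the origin
```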
The CFOA draws inspiration from the predatory behavior of fish in nature, simulating their foresight and strategic methods of finding opportunities in complex environments. In the context of UAV swarm path planning, CFOA models the cooperative behavior of UAVs and dynamically adjusts interactions among individuals to optimize flight paths. The initialization phase of CFOA is defined by Equation (10):
$$Fisher_{i,j} = (ub_j - lb_j) \times r + lb_j \quad (10)$$
In Equation (10), $Fisher_{i,j}$ represents the position of the $i$-th individual in the $j$-th dimension, $ub_j$ and $lb_j$ are the upper and lower bounds of the $j$-th dimension, and $r$ is a randomly generated value ensuring diversity in the initial positions. The exploration phase was modeled using the capture rate parameter $\alpha$, as defined in Equation (11):
$$\alpha = \left(1 - \frac{3 \times EFs}{2 \times MaxEFs}\right)^{\frac{3 \times EFs}{2 \times MaxEFs}} \quad (11)$$
In Equation (11), $EFs$ is the fitness of the current individual and $MaxEFs$ is the fitness of the best individual within the group. The independent search phase of CFOA is described by Equations (12)–(14):
$$Exp = \frac{fit_i - fit_p}{fit_{max} - fit_{min}} \quad (12)$$
$$R = Dis \times Exp \times \left(1 - \frac{EFs}{MaxEFs}\right) \quad (13)$$
$$Fisher_{i,j}^{T+1} = Fisher_{i,j}^{T} + \left(Fisher_{pos,j}^{T} - Fisher_{i,j}^{T}\right) \times Exp + r_s \times s \times R \quad (14)$$
$fit_i$ and $fit_p$ represent the fitness values of the current individual and its parent individual, respectively. $fit_{max}$ and $fit_{min}$ are the maximum and minimum fitness values in the population. $Dis$ is the distance parameter, which represents the distance between the current position and the target position. $r_s$ is a random factor, and $s$ is an adjustable parameter that controls the search speed. During the group cooperation phase of CFOA, the group center was calculated, and individual positions were adjusted to promote cooperation among the group members. The expressions are given in Equations (15) and (16).
$$Centre_c = mean\left(Fisher_c^T\right) \quad (15)$$
$$Fisher_{c,i,j}^{T+1} = Fisher_{c,i,j}^{T} + r_2 \times \left(Centre_c - Fisher_{c,i,j}^{T}\right) + r_3 \times \left(1 - \frac{2 \times EFs}{MaxEFs}\right)^2 \quad (16)$$
In the development phase, individuals further refined their positions to achieve global optimization, as represented in Equations (17) and (18):
$$\sigma = 2 \times \left(1 - \frac{EFs}{MaxEFs}\right) \times \left(\left(\frac{EFs}{MaxEFs}\right)^2 + 1\right) \quad (17)$$
$$Fisher_i^{T+1} = Gbest + GD\left(0,\ r_4 \times \sigma \times \frac{mean(Fisher) - Gbest}{3}\right) \quad (18)$$
$Gbest$ represents the global best solution, and $GD$ is a Gaussian distribution function guiding convergence toward the optimal solution. The pseudo-code for the CFOA algorithm is provided in Algorithm 2.
Algorithm 2: CFOA Algorithm Pseudo-Code
1. Initialize the fish swarm position: randomly distribute $Fisher_{i,j}$ in the search space. Set the maximum number of iterations $T_{max}$ and the capture rate $\alpha$.
2. Exploration phase: update the capture rate using $\alpha = \left(1 - \frac{3 \times EFs}{2 \times MaxEFs}\right)^{\frac{3 \times EFs}{2 \times MaxEFs}}$.
3. Fitness calculation: compute the fitness of each individual and update the fish swarm position using Equation (12).
4. Independent search phase: perform the independent search using Equation (13).
5. Group fishing phase: calculate the centroid of the fish swarm, $Centre_c = mean(Fisher_c^T)$, and update the positions.
6. Development phase: update the position of each individual: $Fisher_i^{T+1} = Gbest + GD\left(0,\ r_4 \times \sigma \times \frac{mean(Fisher) - Gbest}{3}\right)$.
7. Termination: if the maximum number of iterations is reached, output the optimal path and terminate.
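A simplified executable sketch of Algorithm 2 follows. The phase scheduling, constants, and the condensation of Equations (10)–(18) are our assumptions, made to keep the example short.

```python
# A simplified sketch of Algorithm 2 (CFOA). Phase thresholds and constants
# are assumptions; Equations (10)-(18) are condensed.
import numpy as np

def cfoa(cost, dim=2, pop=20, iters=100, lb=-10.0, ub=10.0):
    rng = np.random.default_rng(1)
    fish = lb + (ub - lb) * rng.random((pop, dim))       # Equation (10)
    gbest = fish[np.argmin([cost(p) for p in fish])].copy()
    for t in range(iters):
        ratio = (t + 1) / iters                          # stands in for EFs/MaxEFs
        fit = np.array([cost(p) for p in fish])
        if ratio < 0.4:                                  # exploration / search
            exp = (fit - fit.min()) / (np.ptp(fit) + 1e-12)   # Equation (12)
            leader = fish[np.argmin(fit)]
            fish = fish + (leader - fish) * exp[:, None] \
                   + rng.standard_normal(fish.shape) * (1 - ratio)
        elif ratio < 0.8:                                # group cooperation
            centre = fish.mean(axis=0)                   # Equation (15)
            fish = fish + rng.random((pop, 1)) * (centre - fish)
        else:                                            # development phase
            sigma = 2 * (1 - ratio) * (ratio**2 + 1)     # Equation (17)
            fish = gbest + rng.normal(0.0, sigma, fish.shape) \
                   * (fish.mean(axis=0) - gbest) / 3     # Equation (18)
        fish = np.clip(fish, lb, ub)
        cand = fish[np.argmin([cost(p) for p in fish])]
        if cost(cand) < cost(gbest):
            gbest = cand.copy()
    return gbest

print(cfoa(lambda p: np.sum(p**2)))                      # -> near the origin
```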
This study enhanced the implementation details of intelligent UAV-based firefighting technology in the system architecture design. The system consisted of three core modules. (1) Fire Detection Module: based on an improved YOLOv8 model, this module integrated deformable convolution and a hybrid-domain attention mechanism. Deformable convolution was embedded in the 3rd, 6th, and 9th layers of the backbone network to enable dynamic receptive field adjustment, and the CBAM was integrated at the top of the feature pyramid network (FPN); its cascaded channel–spatial attention submodules enabled feature recalibration, enhancing fire detection accuracy. (2) Multi-Sensor Data Fusion Module: this module established a three-dimensional data synchronization mechanism for RGB cameras, infrared thermal imaging sensors, and LiDAR. A timestamp alignment strategy ensured millisecond-level data synchronization, and the IAO dynamically filtered sensor data and assigned sensor weight matrices to optimize information fusion. (3) Path Planning Module: the CFOA implemented a three-stage optimization mechanism of exploration, independent search, and group collaboration. A fire scene energy field model was constructed on a three-dimensional point cloud map, and by dynamically adjusting the $\alpha$ parameter, this module compensated for trajectory deviations caused by wind disturbances. These three modules interacted through the Robot Operating System (ROS) middleware, forming a closed-loop control process of “perception–decision–execution.” The specific data flow followed this sequence: raw sensor data → feature fusion layer → fire source localization → swarm path planner → UAV motion controller.

3.3. Experimental Data

A hybrid dataset comprising two main components was used to validate algorithm performance. 1. The Public Dataset: Fire images from the FireNet dataset, covering day/night and indoor/outdoor scenes. To enhance robustness, 15% of the samples were augmented by synthetically adding smoke. 2. The Self-Built UAV Thermal Imaging Dataset: Collected using a DJI Matrice 300 RTK equipped with a Zenmuse H20T sensor. This dataset included 120 video sequences (1920 × 1080 resolution at 30 fps) across six types of environments: urban buildings, forests, chemical plants, and more. Frame-level annotations were performed following the ISO 7240-8 standard [40]. Multi-scale random cropping (ranging from 256 × 256 to 1280 × 1280) and adaptive histogram equalization were applied. For smoke occlusion scenarios, Beta distribution-based random erasing was used with a probability of 0.3 and an erasure ratio of 20–50%. The dataset was split into training (70%), validation (20%), and testing (10%) sets. Cross-validation by three firefighting experts confirmed an annotation accuracy of 98.2%. All dataset resources have been open-sourced.
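A hedged sketch of this augmentation pipeline is given below: multi-scale random cropping, adaptive histogram equalization (CLAHE), and probability-0.3 random erasing with an erasure ratio of 20–50%. The Beta shape parameters are our assumption, as the text does not specify them.

```python
# A sketch of the described augmentations. Beta shape parameters (2, 2)
# are assumed; crop range, erasing probability, and ratio follow the text.
import cv2
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """img: HxWx3 uint8 BGR image."""
    # Multi-scale random crop: side length drawn from [256, 1280]
    side = int(rng.integers(256, 1281))
    side = min(side, img.shape[0], img.shape[1])
    y = rng.integers(0, img.shape[0] - side + 1)
    x = rng.integers(0, img.shape[1] - side + 1)
    img = img[y:y + side, x:x + side].copy()

    # Adaptive histogram equalization on the luminance channel
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lab[..., 0] = cv2.createCLAHE(clipLimit=2.0).apply(lab[..., 0])
    img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Random erasing (p = 0.3), erased area 20-50%, Beta-distributed ratio
    if rng.random() < 0.3:
        ratio = 0.2 + 0.3 * rng.beta(2.0, 2.0)
        h = int(img.shape[0] * np.sqrt(ratio))
        w = int(img.shape[1] * np.sqrt(ratio))
        y = rng.integers(0, img.shape[0] - h + 1)
        x = rng.integers(0, img.shape[1] - w + 1)
        img[y:y + h, x:x + w] = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
    return img
```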

4. Results Analysis

4.1. Fire Source Detection Accuracy Evaluation

To assess the performance of the optimized YOLOv8 algorithm in fire source detection, multiple simulated fire scenarios were used for accuracy testing. Figure 4 presents a comparison of the algorithm’s accuracy before and after optimization under different fire source types. It is evident that the optimized YOLOv8 algorithm achieved a significant improvement in fire source detection accuracy, particularly in recognizing small fire sources and multiple fire points, with a noticeable decrease in false detection rates. This indicates that the optimized algorithm is better able to adapt to complex fire environments.
Wind speed is one of the key factors influencing the accuracy of fire suppression material deployment. To assess the impact of wind speed on deployment effectiveness, Figure 5 shows the changes in material deployment accuracy under different wind speed conditions. The results in Figure 5 indicated that wind speed had a significant impact on deployment accuracy. As wind speed increased, the deployment error gradually increased. To address this, this study introduced a wind speed compensation strategy that effectively reduced the impact of wind speed on deployment accuracy.
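As a first-order illustration of the wind speed compensation idea, the sketch below offsets the release point upwind by the drift a payload accumulates while falling. The constant-wind, drag-free ballistic model is our assumption; the paper's strategy is not specified at this level of detail.

```python
# A first-order sketch of wind compensation for material deployment:
# shift the release point upwind by the expected drift during free fall.
import math

def release_offset(altitude_m, wind_speed_ms, wind_dir_rad, g=9.81):
    """Return the (dx, dy) upwind offset for the release point."""
    fall_time = math.sqrt(2 * altitude_m / g)       # free-fall approximation
    drift = wind_speed_ms * fall_time               # downwind drift of payload
    return (-drift * math.cos(wind_dir_rad),        # shift release point upwind
            -drift * math.sin(wind_dir_rad))

# Example: releasing from 30 m altitude in a 6 m/s wind along the x-axis
print(release_offset(30.0, 6.0, 0.0))               # ≈ (-14.8, 0.0) m
```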

4.2. Path Planning Efficiency Evaluation

This study compared the optimized path-planning algorithm with the traditional A* algorithm to evaluate its flight path efficiency in multi-drone collaboration scenarios. Figure 6 presents a comparison of the flight time and path length between the two path-planning methods in different fire environments.
Figure 6 shows that the optimized algorithm outperformed the traditional A* algorithm in both flight time and path length. This was particularly evident in complex forest fires and industrial area fire environments, where the optimized path planning significantly improved both the efficiency of the path and the responsiveness of the drone swarm.
To assess the impact of path planning and collaborative flight strategies on fire suppression efficiency, this experiment compared multiple strategies. The fire suppression efficiency under different path planning and collaborative flight strategies is shown in Figure 7. The results demonstrated that the optimization strategy combining IAO and CFOA significantly improved fire suppression accuracy and task completion, reducing suppression time and flight path length.

4.3. Fire Suppression Material Deployment Accuracy

To verify the effectiveness of the precise fire suppression material deployment strategy, experiments tested the deployment accuracy under different strategies. Figure 8 shows the deployment errors for traditional deployment methods and the method proposed in this study under different fire sources. Figure 8 illustrates that the material deployment strategy proposed in this study achieved significantly lower deployment errors for all fire source types compared to traditional methods. The deployment accuracy was improved by approximately 10% to 15%, verifying the effectiveness of the proposed strategy.
To comprehensively assess the system’s performance in complex fire environments, this experiment simulated multiple fire scenarios for testing. Figure 9 presents the overall evaluation results for different solutions in complex fire environments. The results indicated that in complex fire scenarios, the optimized path planning combined with the IAO and CFOA algorithms for drone swarms significantly improved fire suppression efficiency and task completion speed. Additionally, it demonstrated excellent performance in response time and flight stability.
The comparison of detection accuracy and false detection rates for different fire source types is presented in Table 1. The results indicated that the optimized model significantly improved detection accuracy and effectively reduced false detection rates across small, medium, and large fire sources, as well as multi-fire scenarios. For small fire sources, accuracy increased from 82.3% to 94.6%, while the false detection rate decreased by 12.7%. For medium fire sources, accuracy improved from 88.1% to 96.2%, with a 9.5% reduction in the false detection rate. For large fire sources, accuracy rose from 91.4% to 97.8%, accompanied by a 5.2% drop in the false detection rate. In multi-fire detection, accuracy increased from 76.5% to 92.1%, with a substantial 15.6% reduction in the false detection rate. These results demonstrated that the proposed optimization approach offers significant advantages in enhancing fire detection accuracy and robustness. Notably, the improved strategy exhibits superior adaptability and precision in handling complex multi-fire scenarios. This ensures rapid response and precise operation for UAV swarms in wildfire emergency missions.

5. Conclusions

The proposed improvements to YOLOv8, particularly the introduction of the deformable convolution module, attention mechanism, and HIoU loss function, have theoretically optimized the accuracy of object detection. However, their core advantage is demonstrated in the practical application of fire source recognition. In complex fire environments, especially when smoke is dense or fire sources are small, the optimized YOLOv8 model performed better: it can dynamically adjust the receptive field and focus more closely on the fire source area, reducing interference from background noise and improving fire source detection accuracy. In practice, this enables UAV swarms to identify fire sources more quickly and accurately, minimizing false alarms and missed detections and thus ensuring precise execution of firefighting tasks. Experimental results showed that in real fire scenarios, the optimized YOLOv8 model significantly improved detection accuracy, performing especially well with small fires and multiple fire points compared to the unoptimized version. This enhancement helps UAV swarms respond quickly.
Regarding the optimization of path planning and collaborative flight, the path-planning component introduced the IAO and CFOA algorithms to optimize flight trajectories. This process, based on optimization algorithms and swarm intelligence models, enables optimal path planning in dynamic fire environments. In practical applications, the IAO algorithm continuously adjusted data collection and information-sharing strategies, helping UAV swarms find the shortest and safest flight path in complex fire environments. Meanwhile, the CFOA algorithm, by simulating the behavior of fish swarms, optimized cooperation among UAVs and reduced redundant paths during flight. When these optimization algorithms were applied in practice, they effectively reduced flight time and path length for UAV swarms during firefighting operations, enhancing system efficiency. In real-world multi-UAV collaboration scenarios, this path-planning method quickly guided the UAV swarm to the fire source and deployed firefighting materials precisely, avoiding resource waste and excessive flight.
As for the practical application of multi-sensor data fusion, fusion technology was used in fire source detection and path planning, integrating data from visual, infrared, and LiDAR sensors to improve the UAV swarm's perception of fire environments. The fusion algorithms, driven by the IAO and CFOA, processed the differences between sensor data streams to provide more accurate and comprehensive environmental perception in complex conditions. In practice, the combination of visual and infrared sensors allowed UAVs to maintain high fire detection accuracy even under smoke obstruction or insufficient lighting, while LiDAR's three-dimensional spatial data helped UAV swarms avoid obstacles accurately, ensuring flight safety. Through data fusion, the entire system sensed environmental changes in real-time and responded accordingly, enabling more precise firefighting actions.
Experiments tested the system's performance in various fire scenarios, including different wind speeds, fire source types, and complex environmental factors. The results showed that the optimized YOLOv8, IAO, and CFOA algorithms significantly improved fire source detection accuracy and path planning efficiency.
Especially in large-scale fires or complex forest fire environments, UAV swarms quickly responded and accurately reached fire locations to complete firefighting tasks. However, some limitations exist in practical applications, such as sensor performance fluctuations under extreme weather conditions and computational delays in the data fusion process. These issues need further optimization and resolution in future research. The mathematical methods proposed in this study have achieved significant performance improvements in practical applications. From fire source detection to path planning and multi-sensor data fusion, the introduction of each technology directly enhances the execution efficiency and precision of the UAV swarm firefighting system. This study combines theoretical methods with practical applications. It provides a feasible technical solution for UAV swarm deployment in fire emergency responses. The effectiveness of the solution has been proven through experiments in real-world fire scenarios.
In practical applications, sensors are a key factor influencing the performance of UAV swarm firefighting systems. Specifically, visual sensors may experience a significant decrease in detection performance in environments with heavy smoke or insufficient lighting, leading to inaccurate fire source localization. Although this limitation was mitigated by the introduction of infrared sensors, infrared sensors have limited capability in detecting distant fire sources and may be affected by variations in fire source heat radiation and environmental temperature. Additionally, LiDAR also has limitations when detecting transparent objects (such as glass) and certain low-reflectivity objects. To address these issues, future work could explore the use of multimodal sensors, such as millimeter-wave radar, to further enhance the system’s robustness. Millimeter-wave radar is unaffected by lighting and smoke and can operate stably in various complex environments, thereby improving overall perception capability.
Second, although this study employed multi-sensor data fusion technology, combining data from visual, infrared, and LiDAR sensors to effectively enhance environmental perception, the data fusion process still faced challenges related to computational overhead and latency. This is especially problematic when dealing with large-scale UAV swarm operations, where real-time, efficient processing and fusion of large amounts of data from different sensors are a significant challenge. To address this, further optimization of the data fusion algorithms could be considered, such as adopting hierarchical fusion or cloud-based distributed processing solutions. Hierarchical fusion can process sensor data at different levels, reducing the computational burden, while cloud computing can leverage high-performance servers for large-scale data processing, enhancing real-time performance.
Third, in terms of path planning, although this study optimized the UAV swarm flight trajectories using IAO and CFOA algorithms with good results, path planning still presents difficulties in complex fire environments. For example, during multi-UAV collaborative flights, avoiding collisions between UAVs while ensuring the optimality of the paths is a complex optimization problem. To address this challenge, reinforcement learning methods could be introduced to achieve autonomous decision-making and collaborative flight for the UAV swarm. By training UAVs to continuously learn how to avoid obstacles and collisions in a simulated environment, the safety and efficiency of swarm flight could be further improved.
Fourth, wind speed is an important factor affecting the accuracy of material deployment by UAV swarms during firefighting operations. Although this study proposed a wind speed compensation strategy, the strategy still has limitations, especially under extreme wind conditions, where material deployment errors may be significant. To further improve this issue, dynamic adjustment strategies for material deployment could be considered, incorporating real-time feedback on wind speed and flight trajectories. By introducing wind speed sensors and real-time monitoring technology, the UAVs’ flight altitude and deployment angles could be dynamically adjusted during material delivery, thereby increasing the accuracy of material deployment.
Finally, with the continuous development of UAV technology, the scale of UAV swarms may increase significantly in the future.
In such cases, effectively managing and coordinating the collaborative flight of a large number of UAVs to ensure the optimization of task allocation and resource utilization for each UAV becomes an important issue. To address this, a distributed control architecture could be employed to support the collaborative operation of large-scale UAV swarms. By introducing distributed computing and collaborative decision-making mechanisms, the system’s scalability and robustness could be enhanced while reducing the risks associated with single points of failure.
Despite the breakthroughs achieved in fire source detection and path planning, practical applications still face challenges in multiple areas, including sensor performance, data fusion efficiency, path planning complexity, and environmental impacts. To address these issues, future research could focus on optimizing sensor configurations, improving data fusion algorithms, introducing novel path-planning methods, refining wind speed compensation strategies, and enhancing the system’s scalability. These measures will further improve the efficiency and accuracy of UAV swarms in fire emergency responses, providing more reliable technical support for the application of UAV swarms in complex fire scenarios.

Author Contributions

Conceptualization, B.Y.; methodology, B.Y., S.Y., Y.Z. and J.W.; software, S.Y. and R.L.; validation, J.L. and B.Z.; formal analysis, B.Y. and S.Y.; investigation, J.L. and B.Z.; re-sources, Y.Z., J.W. and R.L.; data curation, S.Y.; writing—original draft preparation, B.Y. and S.Y.; writing—review and editing, B.Y. and S.Y.; visualization, S.Y.; supervision, B.Y., Y.Z. and J.W.; project administration, B.Y.; funding acquisition, B.Y. and R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities for the Civil Aviation University of China, grant numbers 3122020051 and 3122023035.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.-I. An improved wildfire smoke detection based on YOLOv8 and UAV images. Sensors 2023, 23, 8374. [Google Scholar] [CrossRef] [PubMed]
  2. Titu, M.F.S.; Pavel, M.A.; Michael, G.K.O.; Babar, H.; Aman, U.; Khan, R. Real-Time Fire Detection: Integrating Lightweight Deep Learning Models on Drones with Edge Computing. Drones 2024, 8, 483. [Google Scholar] [CrossRef]
  3. Luan, T.; Zhou, S.; Liu, L.; Pan, W. Tiny-Object Detection Based on Optimized YOLO-CSQ for Accurate Drone Detection in Wildfire Scenarios. Drones 2024, 8, 454. [Google Scholar] [CrossRef]
  4. Shamta, I.; Demir, B.E. Development of a deep learning-based surveillance system for forest fire detection and monitoring using UAV. PLoS ONE 2024, 19, e0299058. [Google Scholar] [CrossRef]
  5. Han, Y.; Duan, B.; Guan, R.; Yang, G.; Zhen, Z. LUFFD-YOLO: A Lightweight Model for UAV Remote Sensing Forest Fire Detection Based on Attention Mechanism and Multi-Level Feature Fusion. Remote Sens. 2024, 16, 2177. [Google Scholar] [CrossRef]
  6. Zheng, Y.; Tao, F.; Gao, Z.; Li, J. FGYOLO: An Integrated Feature Enhancement Lightweight Unmanned Aerial Vehicle Forest Fire Detection Framework Based on YOLOv8n. Forests 2024, 15, 1823. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Tan, L.; Robert, T.L.K. An improved fire and smoke detection method based on YOLOv8n for smart factories. Sensors 2024, 24, 4786. [Google Scholar] [CrossRef]
  8. Choutri, K.; Lagha, M.; Meshoul, S.; Batouche, M.; Bouzidi, F.; Charef, W. Fire detection and geo-localization using uav’s aerial images and yolo-based models. Appl. Sci. 2023, 13, 11548. [Google Scholar] [CrossRef]
  9. Zhou, J.; Li, Y.; Yin, P. A wildfire smoke detection based on improved YOLOv8. Int. J. Inf. Commun. Technol. 2024, 25, 52–67. [Google Scholar] [CrossRef]
  10. Yandouzi, M.; Berrahal, M.; Grari, M.; Boukabous, M.; Moussaoui, O.; Azizi, M.; Ghoumid, K.; Elmiad, A.K. Semantic segmentation and thermal imaging for forest fires detection and monitoring by drones. Bull. Electr. Eng. Inform. 2024, 13, 2784–2796. [Google Scholar] [CrossRef]
  11. Talaat, F.M.; ZainEldin, H. An improved fire detection approach based on YOLO-v8 for smart cities. Neural Comput. Appl. 2023, 35, 20939–20954. [Google Scholar]
  12. Alsamurai, M.Q.F.; Çevik, Ü.M. Detection of Animals and humans in forest fires using Yolov8. J. Electr. Syst. 2024, 20, 831–843. [Google Scholar]
  13. Yang, Z.; Shao, Y.; Wei, Y.; Li, J. Precision-Boosted Forest Fire Target Detection via Enhanced YOLOv8 Model. Appl. Sci. 2024, 14, 2413. [Google Scholar] [CrossRef]
  14. Zheng, H.; Duan, J.; Dong, Y.; Liu, Y. Real-time fire detection algorithms running on small embedded devices based on MobileNetV3 and YOLOv4. Fire Ecol. 2023, 19, 31. [Google Scholar] [CrossRef]
  15. Lv, C.; Zhou, H.; Chen, Y.; Fan, D.; Di, F. A lightweight fire detection algorithm for small targets based on YOLOv5s. Sci. Rep. 2024, 14, 14104. [Google Scholar] [CrossRef]
  16. Narkhede, P.; Walambe, R.; Mandaokar, S.; Chandel, P.; Kotecha, K.; Ghinea, G. Gas detection and identification using multimodal artificial intelligence based sensor fusion. Appl. Syst. Innov. 2021, 4, 3. [Google Scholar] [CrossRef]
  17. Saeed, R.A.; Omri, M.; Abdel-Khalek, S.; Ali, E.S.; Alotaibi, M.F. Optimal path planning for drones based on swarm intelligence algorithm. Neural Comput. Appl. 2022, 34, 10133–10155. [Google Scholar] [CrossRef]
  18. Zhu, B.; Bedeer, E.; Nguyen, H.H.; Barton, R.; Henry, J. UAV trajectory planning in wireless sensor networks for energy consumption minimization by deep reinforcement learning. IEEE Trans. Veh. Technol. 2021, 70, 9540–9554. [Google Scholar]
  19. Lin, C.; Han, G.; Qi, X.; Du, J.; Xu, T.; Martinez-Garcia, M. Energy-optimal data collection for unmanned aerial vehicle-aided industrial wireless sensor network-based agricultural monitoring system: A clustering compressed sampling approach. IEEE Trans. Ind. Inform. 2020, 17, 4411–4420. [Google Scholar] [CrossRef]
  20. Wang, Z.; Zhang, S.; Zhao, Y.; Chen, C.; Dong, X. Risk prediction and credibility detection of network public opinion using blockchain technology. Technol. Forecast. Soc. Change 2023, 187, 122177. [Google Scholar] [CrossRef]
  21. Mugnai, M.; Losè, M.T.; Satler, M.; Avizzano, C.A. Towards Autonomous Firefighting UAVs: Online Planners for Obstacle Avoidance and Payload Delivery. J. Intell. Robot. Syst. 2024, 110, 10. [Google Scholar]
  22. Li, J.; Dai, Y.; Jiang, R.; Li, J. Objective multi-criteria decision-making for optimal firefighter protective clothing size selection. Int. J. Occup. Saf. Ergon. 2024, 30, 968–976. [Google Scholar] [PubMed]
  23. Shahid, M.; Chen, S.F.; Hsu, Y.L.; Chen, Y.Y.; Chen, Y.L.; Hua, K.L. Forest fire segmentation via temporal transformer from aerial images. Forests 2023, 14, 563. [Google Scholar] [CrossRef]
  24. Venturini, F.; Mason, F.; Pase, F.; Chiariotti, F.; Testolin, A.; Zanella, A.; Zorzi, M. Distributed reinforcement learning for flexible and efficient UAV swarm control. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 955–969. [Google Scholar]
  25. Soderlund, A.; Kumar, M. Estimating the spread of wildland fires via evidence-based information fusion. IEEE Trans. Control Syst. Technol. 2022, 31, 511–526. [Google Scholar]
  26. Yunusov, N.; Islam, B.M.S.; Abdusalomov, A.; Kim, W. Robust Forest Fire Detection Method for Surveillance Systems Based on You Only Look Once Version 8 and Transfer Learning Approaches. Processes 2024, 12, 1039. [Google Scholar] [CrossRef]
  27. Yun, B.; Zheng, Y.; Lin, Z.; Li, T. FFYOLO: A Lightweight Forest Fire Detection Model Based on YOLOv8. Fire 2024, 7, 93. [Google Scholar] [CrossRef]
  28. Zhao, Y.; Luo, L. Aircraft Target Detection on Airport Surface Based on Improved YOLOX. Comput. Simul. 2024, 41, 57–62. [Google Scholar]
  29. Catargiu, C.; Cleju, N.; Ciocoiu, I.B. A Comparative Performance Evaluation of YOLO-Type Detectors on a New Open Fire and Smoke Dataset. Sensors 2024, 24, 5597. [Google Scholar] [CrossRef]
  30. Li, F.; Yan, H.; Shi, L. Multi-scale coupled attention for visual object detection. Sci. Rep. 2024, 14, 11191. [Google Scholar]
  31. Rahman, S.; Rony, J.H.; Uddin, J.; Samad, A. Real-Time Obstacle Detection with YOLOv8 in a WSN Using UAV Aerial Photography. J. Imaging 2023, 9, 216. [Google Scholar] [CrossRef] [PubMed]
  32. Aibin, M.; Li, Y.; Sharma, R.; Ling, J.; Ye, J.; Lu, J.; Zhang, J.; Coria, L.; Huang, X.; Yang, Z.; et al. Advancing Forest Fire Risk Evaluation: An Integrated Framework for Visualizing Area-Specific Forest Fire Risks Using UAV Imagery, Object Detection and Color Mapping Techniques. Drones 2024, 8, 39. [Google Scholar] [CrossRef]
  33. Gonçalves LA, O.; Ghali, R.; Akhloufi, M.A. YOLO-Based Models for Smoke and Wildfire Detection in Ground and Aerial Images. Fire 2024, 7, 140. [Google Scholar] [CrossRef]
  34. Lei, L.; Duan, R.; Yang, F.; Xu, L. Low Complexity Forest Fire Detection Based on Improved YOLOv8 Network. Forests 2024, 15, 1652. [Google Scholar] [CrossRef]
  35. Wang, Y.; Piao, Y.; Wang, H.; Zhang, H.; Li, B. An Improved Forest Smoke Detection Model Based on YOLOv8. Forests 2024, 15, 409. [Google Scholar] [CrossRef]
  36. Ghali, R.; Akhloufi, M.A.; Mseddi, W.S. Deep learning and transformer approaches for UAV-based wildfire detection and segmentation. Sensors 2022, 22, 1977. [Google Scholar] [CrossRef]
  37. Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A novel dataset and deep transfer learning benchmark for forest fire detection. Mob. Inf. Syst. 2022, 2022, 5358359. [Google Scholar]
  38. Kim, S.Y.; Muminov, A. Forest fire smoke detection based on deep learning approaches and unmanned aerial vehicle images. Sensors 2023, 23, 5702. [Google Scholar] [CrossRef]
  39. Özel, B.; Alam, M.S.; Khan, M.U. Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning. Information 2024, 15, 538. [Google Scholar] [CrossRef]
  40. ISO 7240-8:2014; Fire Detection and Alarm Systems—Part 8: Point-Type Fire Detectors Using a Carbon Monoxide Sensor in Combination with a Heat Sensor. ISO: Geneva, Switzerland, 2014.
Figure 1. YOLOv8 network structure.
Figure 2. Optimized YOLOv8 network structure.
Figure 3. Multi-sensor data fusion framework.
Figure 4. Fire source detection accuracy comparison.
Figure 5. Impact of wind speed on fire suppression material deployment accuracy.
Figure 6. Path planning efficiency comparison.
Figure 7. Fire support efficiency under different path planning and collaborative flight strategies.
Figure 8. Comparison of fire suppression material deployment accuracy.
Figure 9. The comprehensive evaluation of different schemes in complex fire environments.
Table 1. Comparison of detection accuracy and false detection rates for different fire source types.

| Fire Source Type | Criteria | YOLOv8 Accuracy | Optimized Accuracy | False Detection Rate Decrease |
|---|---|---|---|---|
| Small Fire Source | Flame area < 1 m², heat radiation < 50 kW/m² | 82.3% | 94.6% | 12.7% |
| Medium Fire Source | 1 m² ≤ area < 5 m², heat radiation 50–200 kW/m² | 88.1% | 96.2% | 9.5% |
| Large Fire Source | Area ≥ 5 m², heat radiation ≥ 200 kW/m² | 91.4% | 97.8% | 5.2% |
| Multi-fire Source | 3 independent fire points present simultaneously | 76.5% | 92.1% | 15.6% |
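For readers who wish to reproduce per-category figures of this kind, the sketch below shows one plausible way to tally detection accuracy and false detection rate from per-image detection outcomes. The `records` structure, the outcome labels, and the example counts are illustrative assumptions, not the authors' evaluation code or data.

```python
# Minimal sketch of per-category accuracy / false-detection tallying.
# Assumption (not from the paper): each record pairs a fire-source category
# with an outcome: 'correct' (true positive), 'missed' (false negative),
# or 'false' (false positive attributed to that category).
from collections import defaultdict

def summarize(records):
    """Print detection accuracy and false detection rate per category."""
    stats = defaultdict(lambda: {"correct": 0, "missed": 0, "false": 0})
    for category, outcome in records:
        stats[category][outcome] += 1
    for category, s in stats.items():
        ground_truth = s["correct"] + s["missed"]   # all real fire sources
        reported = s["correct"] + s["false"]        # all model detections
        accuracy = s["correct"] / ground_truth if ground_truth else 0.0
        false_rate = s["false"] / reported if reported else 0.0
        print(f"{category}: accuracy {accuracy:.1%}, "
              f"false detection rate {false_rate:.1%}")

# Hypothetical counts, for illustration only:
records = ([("small fire", "correct")] * 180
           + [("small fire", "missed")] * 20
           + [("small fire", "false")] * 10)
summarize(records)
```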
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
