Article

Deep Learning-Based Drone Defense System for Autonomous Detection and Mitigation of Balloon-Borne Threats

Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(8), 1553; https://doi.org/10.3390/electronics14081553
Submission received: 26 February 2025 / Revised: 30 March 2025 / Accepted: 6 April 2025 / Published: 11 April 2025

Abstract

In recent years, balloon-borne threats carrying hazardous or explosive materials have emerged as a novel form of asymmetric terrorism, posing serious challenges to public safety. In response to this evolving threat, this study presents an AI-driven autonomous drone defense system capable of real-time detection, tracking, and neutralization of airborne hazards. The proposed framework integrates state-of-the-art deep learning models, including YOLO (You Only Look Once) for fast and accurate object detection, and convolutional neural networks (CNNs) for X-ray image analysis, enabling precise identification of hazardous payloads. This multi-stage system ensures safe interception and retrieval while minimizing the risk of secondary damage from debris dispersion. Moreover, a robust data collection and storage architecture supports continuous model improvement, ensuring scalability and adaptability for future counter-terrorism operations. As balloon-based threats represent a new and unconventional security risk, this research offers a practical and deployable solution. Beyond immediate applicability, the system also provides a foundational platform for the development of next-generation autonomous security infrastructures in both civilian and defense contexts.

1. Introduction

The advancement of artificial intelligence (AI) and drone technologies in modern society has led to significant transformations across public, military, and civilian sectors [1]. Specifically, drones have been utilized in various applications, including aerial surveillance, exploration, and logistics. The widespread adoption of these technologies has established them as essential tools for public safety and defense. Consequently, the integration of drones into security operations has increased, along with a growing demand for advanced AI-driven technologies that enhance their capabilities in real-time threat detection and mitigation.
Recently, balloon-borne security threats have emerged as a growing global concern. Since May 2024, numerous cases of balloons carrying hazardous materials, including paper debris, cigarette butts, human waste, and even explosive devices, have been reported. These balloons pose significant environmental and security risks, particularly in densely populated urban areas. Although mid-air interception of such threats is technically feasible, the unintended dispersion of hazardous debris increases the risk of secondary damage. Consequently, many governments, including South Korea, have opted for post-landing retrieval strategies to mitigate these risks. However, in scenarios involving high-risk materials such as toxic chemicals, biological agents, or radioactive substances, post-landing retrieval itself can pose severe safety hazards, making real-time interception and controlled retrieval necessary.
Traditional aerial threat detection methods, such as radar systems, ground-based optical sensors, and surveillance networks, are known for their high precision and effectiveness against conventional targets like aircraft or drones. However, these systems are typically cost-intensive, infrastructure-dependent, and optimized for high-speed, metallic threats. Balloon-borne threats, by contrast, are characterized by low speeds, low altitudes, minimal radar cross-sections, and deployment in large quantities at minimal cost. This discrepancy renders high-cost detection systems strategically inefficient for addressing such asymmetric, low-tech threats.
Moreover, the unpredictable nature of balloon deployments—which can occur sporadically across broad and diverse geographic regions—necessitates not only detection but also immediate action, including safe retrieval or neutralization. Traditional systems are ill-suited for this dual-role requirement. In response, the proposed drone-based system offers a scalable and cost-effective solution capable of autonomous detection, tracking, capture, and analysis. By integrating AI-based object recognition and hazard inspection in real time, the system enhances operational agility and readiness, particularly in scenarios where rapid, localized response is critical.
Existing drone defense systems primarily focus on detecting and neutralizing aerial threats; however, there is limited research on technologies that enable effective retrieval and non-destructive threat analysis [2]. Traditional manual response methods also face challenges in addressing rapidly evolving threats in real-time, resulting in delays and potential risks to public safety. Therefore, there is an increasing demand for autonomous AI-based solutions that can provide real-time threat assessment and response [3].
This study proposes an AI-driven automated drone defense system that integrates deep learning models with AI-based non-destructive inspection technologies. The proposed system is designed to detect airborne threats in real time using YOLO (You Only Look Once) object detection, track and intercept these threats before they reach the ground, and safely retrieve them for further analysis using a coordinated swarm of drones. Additionally, X-ray imaging and convolutional neural networks (CNNs) are employed to conduct a rapid, non-invasive threat analysis, enabling the precise identification of hazardous materials. By proactively detecting, tracking, and recovering potential threats, this system minimizes the risks associated with debris dispersion and enhances public safety.
The main contribution of this paper is the development of an AI-based automated drone defense system that addresses the limitations of existing approaches while demonstrating practical applicability in both public security and defense operations [4]. Furthermore, the system is designed to seamlessly integrate with other surveillance and reconnaissance networks, such as CCTV and UAV systems, enhancing situational awareness and security response capabilities. Through this research, we aim to provide a comprehensive solution to emerging threats, such as balloon-based attacks, and establish a scalable, AI-enabled countermeasure framework for global counter-terrorism operations [5].

2. Related Work

The Fourth Industrial Revolution has accelerated the integration of artificial intelligence (AI) into public safety and defense, leading to transformative strategies in operational tactics [6]. Drones have become a strategic asset in surveillance and rapid response operations, as summarized in Table 1. Despite their growing adoption, the field of AI-enabled autonomous drones remains in its early stages, with ongoing research focusing on the integration of advanced computational models and multi-sensor data to enhance autonomy [7].
Regulatory and ethical considerations are central to drone deployment, particularly when surveillance missions intersect with civilian areas. To address potential concerns regarding privacy and accountability, this study advocates for operational protocols that adhere to international standards, thereby maintaining public trust [8].

2.1. Object Detection Using the YOLO Model

The YOLO (You Only Look Once) model is one of the most advanced deep learning-based object detection architectures, designed for real-time applications. Unlike region-based detectors such as Faster R-CNN, which rely on a multi-stage process, YOLO directly predicts object classes and bounding boxes in a single forward pass [9]. This architectural efficiency significantly reduces computational overhead while maintaining high detection accuracy, making it particularly well suited for time-sensitive security and defense applications where immediate threat recognition is paramount [10].
As summarized in Table 2, YOLO offers an optimal trade-off between detection speed and accuracy. While Faster R-CNN delivers superior precision through region proposals, its computational complexity renders it impractical for real-time deployment. SSD and EfficientDet improve detection efficiency across different object scales, yet they often require additional fine-tuning to achieve robust performance in dynamic environments. In contrast, YOLO demonstrates exceptional adaptability across diverse operational conditions, making it the most effective choice for real-time airborne threat detection in this study.
Recent studies have demonstrated the effectiveness of YOLO-based object detection frameworks in various UAV applications, such as wind turbine crack inspection using quadrotors [11] and small object detection from UAV perspectives [12], highlighting their general applicability and robustness in dynamic aerial environments.
In particular, the use of YOLOv8 is essential in this study because balloons present unique detection challenges due to their small size, lightweight structure, and unpredictable trajectories influenced by wind and meteorological conditions. Traditional methods optimized for larger or high-speed objects often fail to reliably identify these slow-moving threats, especially in urban areas with complex visual backgrounds. Furthermore, balloons can be intentionally camouflaged, significantly increasing the risk of undetected hazardous payloads. YOLO’s real-time inference capabilities, robustness in dynamic and uncertain environments, and efficient learning from limited datasets make it uniquely suited to effectively addressing these critical security challenges.

2.2. Collaborative Detection and Tracking Technology for Drones

Conventional single-drone operations often face limitations in coverage area, detection accuracy, and real-time threat response. To overcome these challenges, recent research has shifted toward swarm drone systems, which extend surveillance capabilities by allowing multiple drones to share data and make collective decisions in real time [13]. In this study, a decentralized control algorithm enables each drone to exchange object information autonomously and adapt its flight path according to mission objectives. By distributing the decision-making process, the system maintains situational awareness even under dynamic environmental changes or partial drone failures.
A key feature of our approach is the integration of predictive movement models, which allow drones to anticipate the trajectories of potential airborne threats. Rather than merely reacting to new detections, the swarm can proactively intercept suspicious objects before they enter critical zones. Furthermore, collision avoidance is handled through a multi-agent learning framework, ensuring that drones operate safely in crowded or otherwise constrained airspace. Together, these elements create a highly scalable and robust detection network, reducing the latency in identifying and tracking emerging targets.
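The predictive model itself is not specified here; as a minimal sketch, a constant-velocity extrapolation over recent track points illustrates the underlying idea (the function name, sampling rate, and parameter values below are hypothetical):

```python
import numpy as np

# Hypothetical sketch of a constant-velocity trajectory predictor; the system's
# actual predictive movement model is not published, so this is illustrative only.
def predict_intercept(track: np.ndarray, dt: float, horizon_s: float) -> np.ndarray:
    """Extrapolate a target's future position from its recent track.

    track: (N, 3) array of recent positions sampled every `dt` seconds.
    Returns the predicted (x, y, z) position `horizon_s` seconds ahead.
    """
    # Estimate velocity as the mean finite difference over the track.
    velocity = np.diff(track, axis=0).mean(axis=0) / dt
    return track[-1] + velocity * horizon_s

# Example: a balloon drifting at ~3 m/s, sampled at 10 Hz.
track = np.array([[100.0 + 0.3 * i, 50.0, 120.0] for i in range(10)])
print(predict_intercept(track, dt=0.1, horizon_s=5.0))  # ~[117.7, 50.0, 120.0]
```

A richer implementation would replace the mean finite difference with a Kalman filter to absorb sensor noise, but the interception logic stays the same: steer toward the predicted position rather than the last observed one.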

2.3. Detection of Hazardous Materials Using X-Ray Inspection Systems

While the drone swarm can locate airborne objects rapidly, it remains crucial to determine whether these objects contain hazardous materials. Traditional detection methods often rely on physical intervention or manual inspection, which increases operational risk and prolongs response times. To address these concerns, our research employs an advanced X-ray inspection system augmented by convolutional neural networks (CNNs). This system analyzes density and compositional features in scanned objects, detecting anomalies that may indicate concealed explosives, chemicals, or other threats.
In contrast to threshold-based X-ray techniques, CNN-based classification can recognize complex patterns that signify hidden dangers, thus reducing both false positives and false negatives. Multi-view image fusion further enhances accuracy by reconstructing a three-dimensional representation of each scanned object, enabling a more detailed analysis of overlapping or occluded materials. Through this approach, security personnel can quickly evaluate suspicious items without direct contact, thereby minimizing potential harm to human operators [14].

2.4. Real-Time Data Visualization Technology

Even with accurate detection and classification, the sheer volume of data generated by swarm drones and X-ray scanners can overwhelm security personnel if not presented effectively. To address this bottleneck, we develop a real-time data visualization platform that consolidates drone flight trajectories, detection alerts, and threat assessments into a unified, interactive interface [15]. By overlaying detection points and risk scores onto a three-dimensional map, operators gain immediate situational awareness, facilitating prompt decision-making.
In addition, automated reporting features compile relevant logs, risk assessments, and recommended actions into streamlined summaries. These summaries can be securely shared among distributed teams or command centers, ensuring consistent information flow and reducing communication delays. This holistic visualization approach not only improves operational efficiency but also supports collaborative planning and rapid coordination during high-stress security events.

2.5. Ethical and Legal Considerations for AI-Driven Drones

While technological advancements in drone swarms and AI-powered detection systems offer significant advantages, they also raise critical ethical and legal questions. One of the foremost concerns is the potential for misidentification, where an autonomous system could erroneously classify a civilian or nonthreatening entity as a hostile target. Such errors may lead to unintended escalation or collateral damage, thus undermining public trust and violating international humanitarian principles.
Existing international regulations provide a general framework for the use of force, yet they offer limited clarity on fully autonomous lethal decision-making. To address these uncertainties, our system adopts a human-in-the-loop design, requiring human operators to approve final engagement decisions. This hybrid approach leverages AI’s speed and analytical strengths while preserving accountability and moral responsibility. By implementing robust oversight mechanisms, we aim to harness the efficiency gains of AI-driven drones without compromising ethical standards or legal obligations [16].

3. Methodology

3.1. Research Design

In this study, we propose a defense system that integrates AI-driven automation and drone technology to effectively counter emerging airborne threats, particularly balloon-based attacks. The system operates through a structured sequence of operations: (1) Target Detection and Identification, (2) Target Capture, (3) Scanning, (4) Observer, (5) Share on Webpage, and (6) Decision and Supervisor. The overall system workflow is illustrated in Figure 1.
The system continuously monitors designated areas, particularly major national facilities and forward military units, for potential airborne threats. Once a suspicious object is detected, AI-powered detection drones autonomously classify and track the target, transmitting real-time images and assessment results to the Observer. The Observer, beyond simply receiving notifications, plays a critical role in determining whether an emergency shootdown is necessary or if a safe capture is feasible, considering legal, ethical, and operational constraints. If the balloon is suspected of carrying hazardous materials such as toxic chemicals or explosives, the Observer may consult additional agencies to determine the appropriate course of action.
Following this verification, the system proceeds to the Target Capture phase, where a coordinated team of drones equipped with specialized capture mechanisms securely retrieves the airborne object. Once the target is successfully captured, it is transported to a designated Scanning zone, where X-ray imaging and AI-driven analysis are used to inspect the contents for potential threats.
All captured data, including detection logs, scanning results, and risk assessments, are securely transmitted to a Share on Webpage platform, ensuring real-time accessibility for relevant institutions and security agencies. Meanwhile, the Decision and Supervisor phase oversees the evolving threat situation, facilitating high-level decision-making on hazardous material disposal, escalation protocols, or further security measures.
To ensure operational robustness, all detection images and video footage are automatically stored for post-event analysis. This allows the system to reprocess missed or ambiguous detections, enhancing AI model accuracy and improving future detection reliability.
By integrating automated detection, controlled capture, AI-assisted scanning, and real-time web-based information sharing, the proposed system establishes a hybrid security framework that leverages the efficiency of AI while preserving human oversight for critical decision-making. This structured approach ensures that every phase, from initial detection to final resolution, is executed with precision, transparency, and operational accountability.

3.2. Target Detection and Identification

To detect and identify balloons in real time, we developed a system based on a deep learning-driven YOLO model. The training dataset consisted of images depicting balloons and potential threat objects such as explosives, collected under diverse environmental conditions—including varying times of day and weather scenarios—to enhance robustness in real-world deployments. To improve the model’s generalization capability, data augmentation techniques such as flipping, rotation, and brightness adjustments were applied. The dataset was then split into training and testing subsets, and the YOLOv8 model was trained using TensorFlow.
The dataset used for training and evaluating the YOLOv8 model comprised a total of 10,000 images, sourced from both publicly available repositories and custom-generated drone footage. Specifically, 5000 images contained labeled instances of balloons carrying various payloads, including paper waste, plastic materials, and simulated explosive packages. The remaining 5000 images represented non-threat environments, such as birds, clouds, kites, and benign airborne objects, to improve the model’s ability to distinguish threats from harmless objects.
To enhance environmental robustness, data were collected under four distinct weather conditions: clear (35%), cloudy (25%), rainy (20%), and nighttime (20%), as determined by the metadata of each image. Additional augmentation techniques—including horizontal and vertical flipping, random brightness, Gaussian noise, and motion blur—were applied to simulate UAV motion and real-world disturbances.
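As an illustration of this augmentation stage, the sketch below applies the listed transformations with OpenCV; the probability and intensity values are assumptions rather than the study's published settings, and the corresponding bounding-box label transforms are omitted for brevity:

```python
import cv2
import numpy as np

# Illustrative sketch of the augmentations described above (flips, brightness,
# Gaussian noise, motion blur). All parameter values are assumptions.
# Note: when flipping, bounding-box labels must be flipped in tandem (omitted).
def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    if rng.random() < 0.5:                      # horizontal flip
        image = cv2.flip(image, 1)
    if rng.random() < 0.5:                      # vertical flip
        image = cv2.flip(image, 0)
    # Random brightness shift in [-40, 40] intensity levels.
    image = cv2.convertScaleAbs(image, alpha=1.0, beta=rng.uniform(-40, 40))
    # Additive Gaussian noise to mimic sensor noise.
    noise = rng.normal(0, 8, image.shape).astype(np.float32)
    image = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # Horizontal motion blur to simulate UAV movement.
    k = 7
    kernel = np.zeros((k, k), np.float32)
    kernel[k // 2, :] = 1.0 / k
    return cv2.filter2D(image, -1, kernel)
```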
The dataset was randomly split into 80% for training (8000 images) and 20% for testing (2000 images). After training, performance metrics were computed across all weather subsets. The system achieved an average precision (mAP@0.5) of 91.3%, a recall of 88.9%, and a false positive rate (FPR) of 4.6%. However, performance under rainy and nighttime conditions showed a mild degradation, with an FPR increase to 6.2% and a slight drop in recall to 85.7%, indicating areas for further model tuning. These results are summarized in Table 3.
The overall process, from data preparation to model training, is illustrated in Figure 2, titled “Data preprocessing and training workflow for the balloon detection model”.
YOLO (You Only Look Once) is a widely adopted object detection architecture known for its ability to perform real-time inference with high accuracy. Among its multiple versions, YOLOv8—developed and maintained by Ultralytics—was selected in this study due to its balanced trade-off between detection performance, computational efficiency, and deployment readiness. Unlike previous versions such as YOLOv3 and YOLOv4, which offered high detection accuracy but required considerable computational resources, YOLOv8 introduces architectural enhancements that enable efficient operation on embedded systems. Furthermore, YOLOv8 is compatible with deployment frameworks such as ONNX, TensorRT, and OpenVINO, making it particularly well suited for drone-based platforms operating under resource constraints.
Recent research has validated YOLOv8’s effectiveness in detecting small and low-altitude aerial objects in real-time scenarios. In particular, Reis et al. [17] demonstrated the model’s strong performance in identifying flying targets under dynamic conditions, supporting its applicability in public safety and aerial surveillance tasks. Based on this validation and its ecosystem maturity, YOLOv8 was adopted in this study as the core detection model to ensure timely and accurate identification of balloon-borne threats.
While no architectural modifications were made to the YOLOv8 pipeline itself, several hyperparameters were empirically tuned to optimize detection performance for balloon-borne threats. Specifically, the initial learning rate was reduced from 0.01 to 0.005 to stabilize early training, the confidence threshold was increased from 0.25 to 0.3 to suppress false positives, and the IoU threshold was slightly decreased from 0.5 to 0.45 to improve bounding-box precision for overlapping targets. These settings were selected based on iterative validation experiments and contributed to more robust performance under diverse environmental conditions.
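For illustration, a minimal training-and-inference sketch with these tuned values is shown below using the standard Ultralytics YOLOv8 API; the dataset file, epoch count, and model variant are assumptions, and the authors' own training pipeline may differ in detail:

```python
from ultralytics import YOLO

# Sketch with the tuned values reported above: lr0 = 0.005 for training,
# conf = 0.3 and IoU = 0.45 at inference. Paths and epochs are hypothetical.
model = YOLO("yolov8n.pt")                       # pretrained backbone
model.train(data="balloons.yaml", epochs=100, lr0=0.005, imgsz=640)

# Real-time inference on drone camera footage with the tuned thresholds.
results = model.predict(source="drone_feed.mp4", conf=0.3, iou=0.45, stream=True)
for r in results:
    for box in r.boxes:
        print(box.cls, box.conf, box.xyxy)       # class, confidence, bbox
```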
After training, the model was saved as yolov8_model.h5 and deployed for real-time balloon detection in footage captured by the detection drone’s camera, as illustrated in Figure 3, where the red box highlights a detected balloon containing hazardous debris. Furthermore, as shown in Figure 4, the system was designed to handle anomalies such as program errors, ensuring continuous object recognition even in the presence of unexpected disruptions [18].
In some instances, the AI model may fail to detect an object in real time due to environmental noise or model limitations. Moreover, certain cases require more thorough examination before final conclusions can be drawn. As shown in Figure 5, the system addresses these scenarios by storing both images and video footage for post-event analysis. This functionality ensures that missed or ambiguous detections can be reevaluated offline, using either the same YOLO model or more advanced or updated models if necessary. The collected data are stored in a big-data repository and can be used to further refine the model, thus enabling real-time improvements in AI performance [19,20].
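A minimal sketch of this archival step, assuming a local SQLite index (consistent with the DB Browser for SQLite tooling listed in Section 4.1) and hypothetical paths and schema, might look as follows:

```python
import sqlite3
import time
from pathlib import Path

import cv2

# Illustrative sketch: each processed frame is written to disk and indexed in
# SQLite so ambiguous detections can be re-evaluated offline. Paths and the
# table schema are assumptions, not the system's actual storage layout.
ARCHIVE = Path("detections")
ARCHIVE.mkdir(exist_ok=True)
db = sqlite3.connect("detections.db")
db.execute("""CREATE TABLE IF NOT EXISTS frames
              (ts REAL, path TEXT, label TEXT, confidence REAL)""")

def archive_frame(frame, label: str, confidence: float) -> None:
    """Persist one annotated frame with its detection metadata."""
    ts = time.time()
    path = ARCHIVE / f"{ts:.3f}.jpg"
    cv2.imwrite(str(path), frame)
    db.execute("INSERT INTO frames VALUES (?, ?, ?, ?)",
               (ts, str(path), label, confidence))
    db.commit()
```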

3.3. Target Capture Using a Cooperative Drone System

Once a target is detected, a capture drone team consisting of four drones, as shown in Figure 6, collaborates to carry a capture net and move toward the location identified by the detection drone. The net is coated with butyl rubber adhesive, which instantly adheres upon contact with the balloon. The unique elasticity and flexibility of this adhesive ensure a stable bond even with limited contact area. Moreover, it exhibits excellent resistance to temperature fluctuations and other environmental changes, making it indispensable for outdoor balloon capture operations. By retaining its stickiness over repeated uses, the adhesive significantly increases both the success rate and operational efficiency [21].
As shown in Figure 7, the cooperative drone system helps prevent collisions while improving the capture success rate. Upon securing the balloon, the drones notify the Observer with a message such as “Drones have captured the target with the net”. Then, as illustrated in Figure 8, the system plots each drone’s flight trajectory and stores the data in a database for future missions [22,23].

Influence of Drone Coordination Parameters on System Performance

To clarify the effects of key operational settings on overall system behavior, this subsection discusses how specific coordination parameters influenced detection accuracy, capture reliability, and safety. Rather than introducing additional experiments, the insights presented here are derived from the operational outcomes already described in Section 3.3. The decision to deploy four drones in the capture phase was based on achieving an effective balance between spatial coverage and coordination complexity. Configurations involving six or more drones resulted in diminishing improvements in capture success rate while increasing the likelihood of communication interference and path conflicts, which are known concerns in multi-agent control systems [13].
Inter-drone communication frequency was configured at 10 Hz, a value selected to ensure timely updates of positional states without overloading onboard processors. In practice, this setting maintained sufficient temporal resolution for coordinated interception. Tests with lower frequencies (e.g., 1–5 Hz) led to synchronization delays and elevated risk of mid-air interference, confirming the importance of communication responsiveness. The drones operated at altitudes between 30 and 50 m. This range offered a practical compromise between field-of-view coverage and visual resolution. Higher altitudes provided broader coverage but degraded detection clarity due to reduced pixel density and atmospheric distortion. Lower altitudes, while yielding sharper images, were less suitable in environments with dense obstacles or limited visibility.
Coordination among drones was managed by a decentralized control algorithm that prioritized proximity and target motion. This adaptive allocation strategy allowed for dynamic role reassignment within the swarm, enabling flexible response to multiple simultaneous threats. Local avoidance logic, based on real-time feedback of relative positions, ensured safe maneuvering during collaborative tasks. In summary, the combined influence of drone count, communication frequency, flight altitude, and coordination logic played a critical role in achieving reliable performance. These parameters were not only chosen based on prior UAV literature but also refined through iterative validation during the system’s deployment. The resulting configuration demonstrated strong stability and operational effectiveness without requiring excessive complexity or resource consumption.
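To make the interplay of these parameters concrete, the sketch below outlines a 10 Hz decentralized coordination loop with proximity-based role assignment for a four-drone capture team; the networking and flight-control calls are stubs, and all names are hypothetical:

```python
import math
import time

# Minimal sketch of the decentralized coordination loop described above. Each
# drone broadcasts its state at 10 Hz, and capture roles are reassigned by
# proximity to the target. All function names here are hypothetical stubs.
BROADCAST_HZ = 10          # inter-drone update rate used in this study
PERIOD = 1.0 / BROADCAST_HZ

def assign_roles(drone_positions: dict, target: tuple) -> list:
    """Rank drones by distance to the target; the four nearest form the net."""
    ranked = sorted(drone_positions,
                    key=lambda d: math.dist(drone_positions[d], target))
    return ranked[:4]

def coordination_loop(my_id, get_state, broadcast, receive_states, get_target):
    while True:
        t0 = time.monotonic()
        broadcast(my_id, get_state())          # share own position at 10 Hz
        states = receive_states()              # latest positions of all peers
        if my_id in assign_roles(states, get_target()):
            pass  # steer toward the net-formation waypoint (flight stack omitted)
        # Sleep out the remainder of the 100 ms cycle to hold the 10 Hz rate.
        time.sleep(max(0.0, PERIOD - (time.monotonic() - t0)))
```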

3.4. Hazard Analysis Using X-Ray Inspection

Captured balloons underwent further scrutiny via an X-ray inspection system to detect any hazardous materials. A deep learning-based image classification model, capable of identifying complex patterns with high accuracy, was employed. As illustrated in Figure 9, a convolutional neural network (CNN) was used to extract features from X-ray images and determine whether dangerous substances were present [24].
Unlike the YOLOv8 model used for aerial target detection, this CNN module was an independent component specifically designed for hazard classification based on X-ray images. It did not serve as the backbone of YOLO but instead complemented the detection pipeline by analyzing internal balloon contents after capture.
This design choice was motivated by the practical requirements of defense and public safety applications, where rapid inference and low computational overhead are critical. Accordingly, the CNN structure was kept lightweight to enable swift deployment and real-time hazard analysis on edge devices.
CNNs utilize multiple convolution and pooling layers to analyze the visual characteristics of input images, enabling precise classification. TensorFlow and Keras were used to manage large datasets and provide an intuitive interface for model definition and training. As shown in Figure 10, the system performs binary classification to distinguish “Hazardous Material Detected” from “Safe”, using a color-coded output (green for safe, red for hazard) to quickly convey the balloon’s threat status.
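A lightweight Keras CNN in the spirit of this module is sketched below; the layer widths and input resolution are illustrative assumptions, not the exact architecture used in the study:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of a lightweight binary CNN for X-ray hazard classification. Layer
# sizes and the 224x224 grayscale input are assumptions for illustration.
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),           # grayscale X-ray scan
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(hazardous material)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Outputs above the decision threshold map to the red "Hazardous Material Detected" state and the rest to the green "Safe" state, matching the color-coded display described above.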
Finally, as illustrated in Figure 11, the webpage displays text indicating either “Hazardous Material Detected” or “Safe”, providing clear and concise information to authorized users.

3.5. Real-Time Data Visualization and Web Integration

As illustrated in Figure 12, all data related to target identification, detection, drone capture, and hazardous material inspection are visualized in real time and seamlessly shared through Flask, a lightweight yet powerful web framework.
Depending on security requirements, this system can be deployed over the Internet or within closed internal networks, ensuring both flexibility and security in various operational environments. This adaptability allows for rapid and accurate responses in both civilian and military applications. Furthermore, by enabling administrators to remotely monitor inspection processes at any time and from any location, the visualization component significantly enhances decision-making speed and operational efficiency.
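A minimal Flask sketch of this sharing layer is shown below; the route name and the in-memory state store are hypothetical, standing in for the system's actual data pipeline:

```python
from flask import Flask, jsonify

# Minimal sketch of the web-sharing platform: detection and scan results are
# exposed as JSON for a dashboard to poll. The route and the in-memory store
# are hypothetical placeholders for the system's real data pipeline.
app = Flask(__name__)
latest = {"target": None, "status": "idle", "hazard": None}

@app.route("/api/status")
def status():
    return jsonify(latest)                       # polled by the web dashboard

def publish(target_id: str, status_: str, hazard: bool) -> None:
    """Called by the detection/scanning pipeline to update the shared state."""
    latest.update(target=target_id, status=status_, hazard=hazard)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)           # Internet or closed-network deploy
```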

3.6. Decision Support and Supervision

Based on the visualized data, as shown in Figure 13, a Supervisor evaluates the overall situation and decides on any necessary countermeasures. The detection and analysis results shared on the webpage enable rapid yet well-informed decisions in dynamically evolving threat scenarios. Once a decision is made, the corresponding instructions are immediately communicated to the drone system, triggering additional detection, capture, or analysis procedures as needed. This human-in-the-loop (i.e., human-supervised control) structure ensures that crucial judgments, such as disposing of hazardous payloads or escalating to a higher authority, remain under accountable human supervision, thus reinforcing both operational effectiveness and ethical responsibility [25].

3.7. Implementation Details and Real-Time Considerations

Beyond the core modules described above, the system is designed to sustain real-time performance and reliable multi-drone collaboration, while allowing for necessary human oversight in critical scenarios. In practice, the detection and X-ray inspection tasks can run on high-performance workstations or edge devices, depending on deployment constraints. Swarm drones exchange mission-critical information through a lightweight, low-latency communication protocol, which updates each drone on target positions and flight statuses. This approach ensures coordinated maneuvers, prevents collisions, and enhances the accuracy of predictive movement models used to intercept moving targets [26].
Furthermore, although the system automates most detection and capture processes, the Observer and Supervisor maintain authority in high-risk events, such as when lethal force or urgent disposal may be necessary. This governance mechanism aligns with broader ethical and legal guidelines to minimize false engagements and guarantee human accountability. By integrating real-time visualization, efficient communication among drones, and human-in-the-loop supervision, the proposed methodology provides a balanced foundation that can be scaled to diverse operational contexts, ensuring both safety and adaptability [27].

4. Results and Discussion

4.1. Performance Evaluation Environment and Parameter Settings

The performance evaluation of this study was conducted in a controlled computing environment. The experimental setup consisted of an Intel(R) Core(TM) i9-13900H 2.60 GHz processor, 32 GB of RAM, running on a Windows 11 64-bit operating system. The primary development tools used included Visual Studio Code 1.93.1, UiPath 2024.10.5 Community Edition, DB Browser for SQLite 3.13.0, and Anaconda Navigator 2.4.0.
Due to various constraints, conducting real-world experiments on airborne threats such as balloon-based terrorism was impractical. As a result, this study employed a simulation-based approach to evaluate the effectiveness of countermeasures. A more detailed discussion of these constraints and limitations is provided in Section 4.4.
Over a ten-month period (May 2024–February 2025), experiments were performed under four distinct weather conditions: clear, rainy, cloudy, and night. To analyze performance variations across these conditions, weightings of one to four were assigned according to situational complexity. The training dataset consisted of approximately 5000 images labeled for detecting contaminated balloons, with an additional 5000 images for detecting hazardous materials. Under standard atmospheric conditions (air density $\rho = 1.225\ \mathrm{kg/m^3}$, drag coefficient $C_d = 1.2$, gravitational acceleration $g = 9.8\ \mathrm{m/s^2}$), the surface area of the objects inside the balloon was estimated at $0.06237\ \mathrm{m^2}$, and the weight of the attached waste was set at 5 g [28].

4.2. Analysis of Impact Range After Shootdown and Capture

Simulations were conducted to compare the impact range between shooting down the balloon and using a safe capture method. As shown in Figure 14, the impact range following balloon shootdown was approximately 16 times larger than that of the capture method, offering a quantitative basis for assessing the physical repercussions of each approach [29].
The balloon used in this experiment was defined to represent average characteristics of actual balloons in terms of size and internal materials, along with standard atmospheric conditions. Specifically:
  • Size of waste attached to the balloon: 210 mm × 297 mm (area $A = 0.06237\ \mathrm{m^2}$)
  • Weight of waste attached to the balloon: 5 g (mass $m = 0.005\ \mathrm{kg}$)
  • Air density: $\rho = 1.225\ \mathrm{kg/m^3}$
  • Drag coefficient: $C_d = 1.2$
  • Gravitational acceleration: $g = 9.8\ \mathrm{m/s^2}$
  • Wind speed: $v_{\mathrm{wind}} = 3\ \mathrm{m/s}$
When an object free-falls due to gravity, it eventually reaches terminal velocity ($v_t$), at which the gravitational and aerodynamic drag forces balance. According to
$$v_t = \sqrt{\frac{2mg}{\rho C_d A}},$$
the terminal velocity of the attached waste was approximately $1.03\ \mathrm{m/s}$. From a 150 m drop, the debris would thus take about 145.63 s to reach the ground; with a wind speed of 3 m/s, it would drift horizontally by around 436.89 m.
The total impact area for the aerial shootdown method can be approximated by assuming a circular dispersion model:
$$A_{\text{shootdown}} = \pi R^2 = \pi \times (436.89)^2 \approx 599{,}180.6\ \mathrm{m^2}.$$
Since the actual debris dispersion pattern is influenced by wind variations and local turbulence, this value represents an upper bound of the expected impact area. In practical scenarios, the debris distribution may follow a more elliptical shape or exhibit asymmetric spread depending on atmospheric conditions.
By contrast, the capture method restricted the impact range to about $1.25\ \mathrm{m^2}$, since the balloon was retrieved in a controlled manner.

Considerations on Impact Area Calculation

While the circular dispersion model provides an upper estimate, real-world conditions may yield a smaller affected region. A more conservative estimate assumes an elliptical spread along a wind-influenced trajectory:
$$A_{\text{elliptical}} = \frac{\pi \times R_x \times R_y}{2},$$
where $R_x$ is the maximum wind-driven dispersion (436.89 m) and $R_y$ is the vertical projection of the fall trajectory, estimated as half of $R_x$. This yields an adjusted impact area of
$$A_{\text{elliptical}} \approx \frac{\pi \times 436.89 \times 218.44}{2} \approx 149{,}795.15\ \mathrm{m^2}.$$
These results indicate that shooting down a balloon, especially one containing hazardous substances, significantly increases the risk of environmental contamination and debris spread. If a shootdown is unavoidable, additional mitigation measures such as suppression techniques should be considered to limit dispersion prior to ground impact. Meanwhile, safe capture remains preferable in scenarios involving potentially dangerous payloads, offering a less disruptive approach for collecting and disposing of contaminated balloons [30].
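As a numerical cross-check, the dispersion figures above can be reproduced directly from the stated parameters; minor deviations from the paper's rounded values (which use $v_t = 1.03\ \mathrm{m/s}$ exactly) are expected:

```python
import math

# Reproduces the dispersion estimates above from the stated parameters.
# Small differences from the paper's rounded intermediate values are expected.
m, g = 0.005, 9.8                    # mass (kg), gravity (m/s^2)
rho, Cd, A = 1.225, 1.2, 0.06237     # air density, drag coefficient, area (m^2)
h, v_wind = 150.0, 3.0               # drop height (m), wind speed (m/s)

v_t = math.sqrt(2 * m * g / (rho * Cd * A))   # terminal velocity, ~1.03 m/s
t_fall = h / v_t                              # ~145.6 s (fall at terminal velocity)
R = v_wind * t_fall                           # horizontal drift, ~436.9 m

area_circle = math.pi * R**2                  # upper-bound circular model
area_ellipse = math.pi * R * (R / 2) / 2      # conservative elliptical model
print(f"v_t={v_t:.2f} m/s, drift={R:.1f} m")
print(f"circular={area_circle:,.0f} m^2, elliptical={area_ellipse:,.0f} m^2")
```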

4.3. Comparison and Discussion with the Existing System

Existing systems, which primarily relied on manual processes, exhibited considerable variability in both detection and capture, particularly regarding response times that can span minutes to hours. The proposed automated system was designed to overcome these shortcomings, and experimental results, as illustrated in Figure 15, confirmed its superior performance in terms of detection accuracy, capture success rate, false positive rate, and response time compared to manual methods.
The automated system consistently delivered higher detection accuracy, reducing the likelihood of oversight, and achieved a notably higher capture success rate, ensuring more reliable threat neutralization. Moreover, a reduced false positive rate implies fewer wasted interventions on benign objects, thus streamlining resource utilization. Critically, the system's rapid response time of under 10 s plays a vital role in countering swiftly evolving airborne threats.
In contrast to a manual system, where a single human operator often monitors a broad area, the proposed solution integrates real-time data visualization (Section 3.5) and multi-drone coordination, significantly lowering the operator's workload. The "human-in-the-loop" design ensures that ethical or legal concerns, such as the decision to engage in lethal action, remain under accountable human supervision. Consequently, the system not only minimizes environmental hazards (e.g., from shooting down a balloon containing toxic substances) but also enhances public safety through controlled and efficient operations.
Overall, these findings underscore the advantages of an AI-driven, automated approach in drone-based threat response, particularly against emerging challenges like balloon-borne terrorism.

Comparison with Traditional Threat Detection Systems

While the proposed AI-based drone defense system demonstrated significant advantages in addressing balloon-borne threats, it was essential to conduct a structured comparison with conventional detection and response systems, including radar, optical sensors, and RF-based detectors.
Table 4 highlights the relative strengths and limitations of conventional systems compared to the proposed solution. While radar, optical, and RF-based systems exhibit strong performance against traditional military threats, they tend to be less effective when facing slow-moving, low-cost, or non-metallic threats such as balloon-borne payloads. Furthermore, these systems typically require extensive infrastructure and are not easily deployable for localized or spontaneous events.
By contrast, the proposed AI-based drone defense system offers real-time adaptability, modular deployment, and intelligent threat evaluation. Its autonomous architecture enables rapid responses in environments where fixed installations are impractical. Moreover, the integration of deep learning for object detection and X-ray inspection enhances situational awareness and operational efficiency.
From both a performance and resource management perspective, this system serves as a practical alternative—or complementary addition—to conventional detection infrastructure, particularly in addressing emerging asymmetrical threat scenarios.

4.4. Limitations and Future Research Directions

While this study demonstrates the effectiveness of the proposed system across multiple weather conditions, several limitations must be acknowledged.
First, real-world experiments involving airborne threats such as balloon-based terrorism were constrained by safety and geopolitical considerations. Conducting tests in civilian areas posed significant risks of collateral damage, and the potential for debris dispersal into sensitive military or diplomatic zones introduced further operational challenges. Consequently, a controlled simulation-based approach was adopted, incorporating real-world meteorological data, geographic parameters, and trajectory modeling to enhance the realism and applicability of the findings.
Second, although the simulation environment closely approximated real-world conditions, certain inherent limitations remained. Aerodynamic modeling, wind turbulence effects, and variations in balloon flight paths introduced uncertainties that could not be entirely mitigated within a simulated framework. Future studies should prioritize field experiments in controlled settings to further refine the system’s robustness under dynamic environmental conditions.
Third, although the proposed system demonstrated strong overall performance across diverse weather conditions, the quantitative analysis revealed some degradation in adverse environments. For instance, the false positive rate increased from 3.8% under clear conditions to 6.2% during rainy weather, while recall slightly dropped from 90.2% to 85.7%. These observations indicate that while the model remains generally robust, targeted enhancements are needed to maintain accuracy in extreme weather scenarios.
Fourth, the parameters and experimental datasets utilized in this research were limited by the relatively few documented instances of actual balloon-based threats, as comprehensive identification and verification of such incidents remain challenging due to practical constraints and limited monitoring capabilities. As a result, experimental variables, including balloon material properties, payload characteristics, and types of contamination, were chosen based on available verified data, reflecting inherent practical restrictions. Future research would benefit from expanded access to systematically collected real-world data, thereby enhancing the reliability and applicability of the findings to broader scenarios.
Fifth, the study primarily assessed predefined threat scenarios with known parameters, which may limit its ability to generalize across diverse operational settings. In reality, threat conditions are highly dynamic and often unpredictable, necessitating a more adaptive approach. Future research should explore AI-driven threat assessment models capable of dynamically analyzing and responding to real-time variations in adversarial behavior, wind conditions, and unexpected aerial intrusions. The integration of reinforcement learning algorithms may further enhance system adaptability, enabling autonomous drones to continuously optimize their responses based on evolving threats.
Sixth, real-world deployment of drone swarms introduces additional operational challenges, particularly in maintaining stable and low-latency communication among units. In densely built or electromagnetically noisy environments, inter-drone communication can suffer from signal attenuation, congestion, or unexpected latency. Such issues may disrupt coordinated maneuvers or delay threat engagement responses, thereby degrading overall system performance. While this study assumes ideal communication conditions within its simulation framework, future work should incorporate network modeling and testing under adverse scenarios to validate the system’s resilience. Exploring decentralized control mechanisms and fault-tolerant communication protocols will be essential to ensuring reliable swarm operations in dynamic and potentially adversarial environments.
Seventh, ethical considerations and accountability remain critical challenges for autonomous response systems, even with the presence of an observer mechanism. The system incorporates an observer framework that supervises and verifies AI-driven decisions, thereby reducing the risk of unintended interventions. However, ethical concerns persist, particularly regarding false positives in threat detection, which could result in unnecessary engagements or collateral consequences. Additionally, while the observer mechanism enhances system transparency and oversight, ongoing advancements are required to further refine its ability to detect, verify, and mitigate potential errors. Future research should focus on optimizing the observer role to ensure that AI-driven responses align with established operational protocols while maintaining accountability and reliability.
Finally, it is important to emphasize that this study did not attempt to present a fully optimized or finalized system but rather proposed a novel and practical framework for addressing balloon-based airborne threats—an emerging and underexplored area of concern. Given the sensitive and unpredictable nature of national security operations, such systems are best refined through real-world deployment and iterative operational feedback. This field-based adaptation process is essential to ensuring long-term effectiveness, stability, and scalability across diverse scenarios. Accordingly, future work should focus on field validation and progressive tuning to align the system’s performance with dynamic threat environments and institutional requirements.
Despite these challenges, the practical implications of this research remain highly significant. The system’s capability to autonomously identify, assess, and respond to airborne threats provides considerable operational advantages in security and defense applications. Furthermore, this study addresses a critical academic gap in AI-driven autonomous aerial threat response. By bridging this gap, future advancements in this field can contribute to the development of a more resilient, adaptive, and deployable security framework, facilitating the effective integration of autonomous countermeasures across diverse operational environments.

4.5. Dual-Use Research and Ethical Compliance

This research is confined to the field of computer engineering and aims to enhance public safety and technical preparedness. It does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of autonomous drone technologies and confirm that all necessary ethical considerations have been taken into account to prevent misuse. In line with our ethical responsibility, we strictly adhere to national and international regulations concerning Dual-Use Research of Concern (DURC). We advocate for responsible development, regulatory compliance, transparent reporting, and the ethical application of emerging technologies to ensure positive outcomes for society.

5. Conclusions

This work introduced an automated drone defense system tailored to address balloon-based asymmetric threats, an area that has received comparatively little scholarly attention despite its mounting global relevance. By seamlessly integrating AI-driven detection (YOLO), real-time swarm coordination, and non-destructive X-ray inspection (CNN-based) for hazardous materials, the proposed framework significantly improved upon conventional, manually operated methods in terms of speed, accuracy, and operational safety. Experimental data supported these claims, showing a notable boost in detection accuracy (92.7%), capture success rate (95.4%), and average response time (7.8 s), all of which far exceeded the performance of traditional systems. Additionally, the safe capture mechanism reduced the scope of environmental and collateral damage to approximately one-sixteenth of that associated with conventional shootdown tactics, representing a critical advancement in public safety measures.
From an academic perspective, this study made three key contributions. First, it addressed a largely overlooked research gap in drone-based countermeasures to novel aerial threats, specifically balloon-borne hazards capable of carrying explosives or toxic agents. Second, it validated the feasibility of deep learning approaches, notably YOLO and CNN architectures, in high-risk operational settings requiring stringent real-time constraints. Third, it underscored the importance of a human-in-the-loop design, whereby ethical or lethal decisions remain under direct human supervision, thereby balancing autonomy with accountability in security contexts.
The significance of this research extends beyond regional security incidents; balloon-based terrorism embodies an emerging form of asymmetric warfare that demands rapid and precise intervention. By demonstrating robust performance across diverse weather conditions and providing direct integration with existing surveillance infrastructures (e.g., CCTV), the proposed system offers a scalable template for broader defense and public safety applications. Particularly in densely populated environments where debris dispersion can have grave environmental or societal consequences, the introduced safe-capture concept provides a tested alternative to indiscriminate shootdown approaches.
Nevertheless, this study also highlights several important limitations, such as the need for more extensive field tests in extreme environmental conditions and the focus on single-threat scenarios, that should serve as a foundation for ongoing research. Future work may further investigate multi-target engagements, refine swarm intelligence algorithms for dynamic threat allocation, and explore model compression techniques for effective on-board processing within drone units. Such endeavors are imperative to fully realize the potential of autonomous drone defenses, especially as asymmetric threats involving unmanned or minimally detectable aerial devices continue to evolve on a global scale.
In sum, this research makes a timely and substantive contribution to an emerging field by closing a critical knowledge gap in AI-enhanced drone systems for asymmetric threat mitigation. The demonstrated real-time detection, safe-capture methodology, and integrated hazard analysis form a coherent and extensible framework that can adapt to future security challenges. By ensuring rapid responsiveness, ethical oversight, and environmental safety, the present work not only advances the academic discourse on unmanned aerial vehicle (UAV) operations but also charts a viable path forward for real-world applications in both national security and civilian domains. The findings herein should thus serve as a catalyst for subsequent innovations, including more advanced swarm coordination, regulatory alignment, and expanded collaborations with policymakers, ultimately fortifying global counter-terrorism and public safety efforts.

Author Contributions

Conceptualization, J.K.; methodology, J.K.; software, J.K.; validation, J.K. and I.J.; formal analysis, J.K.; investigation, J.K.; resources, J.K.; data curation, J.K.; writing–original draft preparation, J.K.; writing–review and editing, J.K.; visualization, J.K.; supervision, I.J.; project administration, I.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Caballero-Martin, D.; Lopez-Guede, J.M.; Estevez, J.; Graña, M. Artificial Intelligence Applied to Drone Control: A State of the Art. Drones 2024, 8, 296. [Google Scholar] [CrossRef]
  2. Malhotra, R.; Singh, P. Recent Advances in Deep Learning Models: A Systematic Literature Review. Multimed. Tools Appl. 2023, 84, 44977–45060. [Google Scholar] [CrossRef]
  3. Kim, J.; Kim, S.H.; Joe, I. AI-Based RPA’s Work Automation Operation to Respond to Hacking Threats Using Collected Threat Logs. Appl. Sci. 2024, 14, 10217. [Google Scholar] [CrossRef]
  4. Hildmann, H.; Kovacs, B. Using Unmanned Aerial Vehicles (UAVs) as Mobile Sensing Platforms (MSPs) for Disaster Response, Civil Security and Public Safety. Drones 2019, 3, 59. [Google Scholar] [CrossRef]
  5. Wojciechowski, S. The Spirit of Terrorism–Its Contemporary Evolution and Escalation. Przegląd Strateg. 2023, 14, 9–22. [Google Scholar] [CrossRef]
  6. Sayler, K.M. Artificial Intelligence and National Security. Przegląd Strateg. 2019, 38, 1–38. [Google Scholar]
  7. Babu, C.S.; Akshara, P.M. Virtual Threats and Asymmetric Military Challenges. In Cyber Security Policies and Strategies of the World’s Leading States; Bada, M., Creese, S., Eds.; IGI Global: Hershey, PA, USA, 2023; pp. 49–68. [Google Scholar]
  8. Schäfer, P.J. Reconfiguring International Security–The Strategic Evolution of Modern Warfare. Z. Außen- und Sicherheitspolitik 2025, in press. [Google Scholar]
  9. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  10. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 37, 1680–1716. [Google Scholar] [CrossRef]
  11. Wang, Y.; Lu, Q.; Ren, B. Wind Turbine Crack Inspection Using a Quadrotor with Image Motion Blur Avoided. IEEE Robot. Autom. Lett. 2023, 8, 1069–1076. [Google Scholar] [CrossRef]
Figure 1. Workflow of the AI-driven autonomous drone defense system.
Figure 2. Data preprocessing and training workflow for the balloon detection model.
Figure 3. Detection of balloons carrying hazardous materials.
Figure 4. Fault-tolerant real-time detection workflow.
Figure 5. Real-time detection with video and image capture for offline analysis.
Figure 6. Implementation of swarm drones for detection, identification, and capture.
Figure 7. Data storage for analysis after successful capture of waste balloons.
Figure 8. Motion visualization of swarm drones.
Figure 9. Training for hazardous material classification using CNN.
Figure 10. Implementation of binary classification for threat level.
Figure 11. Alert upon identification of explosives and other hazardous materials.
Figure 12. Real-time visualization and situation dissemination via Web server.
Figure 13. Real-time situation report images from drones.
Figure 14. Comparison of damage scope between safe-capture operations and aerial shootdown.
Figure 15. Comparative evaluation of detection accuracy, capture success rate, false positive rate, response time, and failure rate.
Table 1. Advantages of drones: a detailed summary.
Advantage | Description
Surveillance and reconnaissance | Enables real-time aerial monitoring, providing extensive situational awareness while minimizing risks to personnel.
Risk minimization | Reduces human exposure to hazardous situations, especially during search-and-rescue or dangerous material handling.
Rapid deployment and response | Can be quickly deployed to inaccessible or remote locations, thus shortening critical incident response times.
Cost efficiency | Offers a more economical alternative to traditional manned aircraft, with reduced operational and maintenance expenses.
Table 2. Comparison of real-time object detection technologies.
Model | Strengths | Limitations
YOLO | Achieves near real-time detection with high accuracy, making it ideal for security and defense applications requiring immediate response. | The single-shot detection approach may struggle with occluded or small objects in cluttered environments.
SSD | Performs well on multi-scale object detection and maintains relatively low latency. | Requires extensive hyperparameter tuning to achieve performance comparable to YOLO in complex scenarios.
Faster R-CNN | Provides state-of-the-art accuracy through a region proposal mechanism. | High computational cost significantly limits real-time usability.
EfficientDet | Utilizes a scalable architecture to balance efficiency and detection accuracy. | Requires careful model configuration and tuning for different operational contexts.
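To make the YOLO row of Table 2 concrete, the sketch below runs a single video frame through a YOLO detector using the open-source Ultralytics API. It is a minimal illustration only: the weights file balloon_yolo.pt is a hypothetical fine-tuned checkpoint and the 0.5 confidence threshold is an assumed value, neither taken from the system described in this paper.

```python
# Minimal sketch of single-frame detection with a YOLO model via the
# Ultralytics API. "balloon_yolo.pt" is a hypothetical fine-tuned
# checkpoint and conf=0.5 an illustrative threshold, not values from
# this paper.
from ultralytics import YOLO

model = YOLO("balloon_yolo.pt")  # hypothetical balloon-detection weights

results = model.predict("frame.jpg", conf=0.5)  # one frame, min confidence 0.5
for box in results[0].boxes:
    label = model.names[int(box.cls)]           # class index -> class name
    x1, y1, x2, y2 = box.xyxy[0].tolist()       # bounding box corners in pixels
    print(f"{label}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
          f"conf={float(box.conf):.2f}")
```

The same call pattern extends to streamed video by passing a camera or stream source instead of a file path, which is what makes the single-shot design attractive for low-latency, per-frame processing.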
Table 3. Performance metrics by weather conditions.
Condition | Precision (%) | Recall (%) | FPR (%) | mAP@0.5 (%)
Clear | 93.1 | 90.2 | 3.8 | 92.4
Cloudy | 91.7 | 89.0 | 4.2 | 91.1
Rainy | 89.0 | 85.7 | 6.2 | 88.9
Nighttime | 87.5 | 85.8 | 5.8 | 89.0
Average | 90.3 | 88.9 | 4.6 | 91.3
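The precision, recall, and false positive rate (FPR) in Table 3 follow their standard definitions. As a minimal sketch, the snippet below derives all three from raw detection counts; the counts are illustrative placeholders chosen to roughly reproduce the clear-weather row, not the paper's underlying evaluation data.

```python
# Minimal sketch of the standard metrics behind Table 3. The counts in
# the example call are illustrative placeholders chosen to roughly
# reproduce the clear-weather row; they are not the paper's raw data.
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute precision, recall, and false positive rate as percentages."""
    precision = 100 * tp / (tp + fp)  # share of detections that are true balloons
    recall = 100 * tp / (tp + fn)     # share of true balloons that were detected
    fpr = 100 * fp / (fp + tn)        # share of negatives wrongly flagged
    return {"precision": round(precision, 1),
            "recall": round(recall, 1),
            "fpr": round(fpr, 1)}

print(detection_metrics(tp=902, fp=67, fn=98, tn=1696))
# -> {'precision': 93.1, 'recall': 90.2, 'fpr': 3.8}
```

Note that mAP@0.5 cannot be derived from a single confusion matrix, since it averages precision over the confidence-ranked detection list; it is therefore omitted from the sketch.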
Table 4. Comparison of traditional detection systems and the proposed AI-based drone solution.
System Type | Advantages | Limitations
Radar systems | Effective at long range; accurate for high-speed, metallic objects | Low sensitivity to slow or non-metallic targets; high cost and infrastructure-dependent
Optical sensors | High-resolution imaging; useful in clear weather and daylight | Poor performance in low visibility; limited field of view
RF sensors | Detects active electronic signals | Fails with passive or low-tech threats such as balloons; prone to signal interference
Proposed AI-drone system | Real-time detection and capture; scalable, autonomous, and cost-effective; payload analysis via X-ray CNN | Limited by environmental conditions; needs tuning for adverse weather