Article

ROS-Based Multi-Domain Swarm Framework for Fast Prototyping

Aerospace Engineering Department, University of Seville, Camino de los Descubrimientos s/n, 41092 Sevilla, Spain
*
Authors to whom correspondence should be addressed.
Aerospace 2025, 12(8), 702; https://doi.org/10.3390/aerospace12080702
Submission received: 6 June 2025 / Revised: 30 July 2025 / Accepted: 31 July 2025 / Published: 8 August 2025

Abstract

The integration of diverse robotic platforms with varying payload capacities is a critical challenge in swarm robotics and autonomous systems. This paper presents a robust, modular framework designed to manage and coordinate heterogeneous swarms of autonomous vehicles, including terrestrial, aerial, and aquatic platforms. Built on the Robot Operating System (ROS) and integrated with C++ and ArduPilot, the framework enables real-time communication, autonomous decision-making, and mission execution across multi-domain environments. Its modular design supports seamless scalability and interoperability, making it adaptable to a wide range of applications. The proposed framework was evaluated through simulations and real-world experiments, demonstrating its capabilities in collision avoidance, dynamic mission planning, and autonomous target reallocation. Experimental results highlight the framework’s robustness in managing UAV swarms, achieving 100% collision avoidance success and a significant reduction in operator workload in the tested scenarios. These findings underscore the framework’s potential for practical deployment in applications such as disaster response, reconnaissance, and search-and-rescue operations. This research advances the field of swarm robotics by offering a scalable and adaptable solution for managing heterogeneous autonomous systems in complex environments.

1. Introduction

The demand for adaptable frameworks capable of coordinating the collaboration of several platforms with different payload functionalities has increased in the constantly evolving field of robotics and autonomous systems. The presence of diverse ground-based rovers, aerial drones, and aquatic vehicles in heterogeneous robotic systems creates a complex and demanding environment that requires creative solutions to ensure efficient coordination and smooth integration [1]. This study presents an innovative swarming architecture that aims to tackle these difficulties by providing a comprehensive solution for controlling various robotic platforms equipped with different types of payloads.
The foundational principles of swarming—such as local interaction, decentralized control, and emergent coordination—were articulated by Parunak [2], who defined swarming as “useful self-organization through local interactions” and introduced digital pheromone models for collective behavior in distributed systems.
Swarm robotics investigates how large numbers of simple, physically embodied robots can be designed so that a desirable collective behavior emerges from the interactions between the agents and their environment. Swarm technologies are inspired by biology: flocks of birds, schools of fish, and insect colonies can accomplish tasks that exceed the capabilities of any individual or of a non-collaborating group. While individual members of a swarm may lack intelligence and efficiency, the collective behavior that arises from their interactions offers resilience, flexibility, and scalability [1]. Local interactions between individuals and their environment can generate favorable collective behavior. A multi-agent system exhibits cooperative behavior, a specific type of collective behavior with enhanced utility [3]. Swarms consist of a minimum of 50 self-organized, uniform unmanned air vehicles (UAVs) that carry out a mission by means of local interactions [4,5].
The swarm architecture should support a strategy of numerous small-scale attacks, spread out across a vast area, characterized by strong interconnections and periodic bursts of activity. The command and control (C2) architecture for swarm systems can be categorized as orchestrated; centralized or hierarchical; or dispersed (decentralized) [6]. Orchestrated control selects a temporary leader based on transitory factors such as location, condition, and mission circumstances. Leaders gather sensory input from other agents and distribute a consolidated representation; if a leader becomes incapacitated, it is replaced. This architecture exhibits moderate robustness, but it cannot manage larger or geographically dispersed swarms and places a significant processing load on a single agent. Centralized control systems emulate military command and control structures, in which agents are organized hierarchically and tactical information is transmitted upward through the chain of command. This hierarchical design is efficient at managing data flow, but lacks resilience and flexibility in dynamic environments that require prompt agent responses. The implementation of centralized swarm control requires a communication architecture that follows a hub-and-spoke model, which restricts the autonomy of the swarm, hinders communication between agents, and introduces a single point of failure [7]. A distributed architecture has no central leader; choices are made collectively by the agents through consensus. This design exhibits resilience and scalability, but it requires a communication network with the capacity to accommodate larger volumes of data. As with other components of swarm system design, a hybrid command and control (C2) architecture can leverage the individual strengths of each approach. The Cooperative Engagement Capability anti-air warfare system of the US Navy utilizes distributed situational awareness data and orchestrated target selection [6]. Decentralized control structures for swarm UAS, such as auction-based techniques and implicitly derived single-agent solutions, have demonstrated favorable outcomes [7]. Wireless mesh communication networks can support swarm UAS communication architectures [8,9], and researchers have found that finite state machines (FSMs) can represent the architectures of autonomous, unmanned, multi-vehicle systems. FSM agents operate concurrently in multiple states, with state transitions initiated by environmental conditions or events. These structures are appropriate for building military swarm systems because they allow deterministic configuration of states and triggers for high-risk mission events such as targeting. In this paper we present a decentralized architecture with a star configuration for payload management.
To address these difficulties, our proposed swarming system utilizes three essential technologies: the Robot Operating System (ROS), C++, and ArduPilot. ROS offers versatile and adaptable middleware that simplifies communication between different components, guaranteeing a smooth interchange of information across multiple platforms. C++ yields a high-performing and efficient codebase, which is essential for real-time decision-making and coordination in dynamic situations. In addition, the incorporation of ArduPilot, a widely adopted open-source autopilot, establishes a uniform control interface for various robotic platforms, improving interoperability and dependability.
To situate the present work within the current landscape of swarm robotics, it is important to compare the proposed multi-domain swarm system with recent open-source and academic frameworks designed for the coordination of robotic collectives. The proposed architecture, grounded in deterministic logic and real-world validation, offers a unique contribution by enabling scalable, decentralized, and domain-agnostic swarm operations with heterogeneous platforms. Below, we examine the comparative scope and features of relevant frameworks from both academic and open-source initiatives.
The Aerostack framework developed by [10] introduced a layered and component-based architecture for the development of autonomous aerial robotic systems. Its modular design facilitated high-level behaviors and autonomous decision-making in single- and multi-UAV setups. However, the system was primarily aimed at aerial robotics and simulation environments, with limited support for real-time multi-domain deployment.
Aerostack2 [11] extended this concept to ROS2, adding enhanced modularity and compatibility with PX4. It provided a robust foundation for behavior trees, mission-level planning, and task delegation in aerial swarms. Nevertheless, it remains tailored exclusively for UAV platforms and lacks native support for ground or surface robotics, limiting its use in heterogeneous swarm deployments. In contrast, the proposed framework supports fixed-wing UAVs, VTOLs, multirotors, UGVs, and hybrid systems under a unified communication and mission-planning layer, validated in both simulated and real-world scenarios.
SwarmUS [12] is a notable open-source hardware and software platform focused on distributed onboard sensing and communication for swarm robotics. It provides relative localization and inter-agent message passing capabilities, supporting collaborative behaviors. While highly valuable for early-stage swarm testing and educational use, SwarmUS does not offer mission-level autonomy, onboard planning, or integration with tactical GCS systems. Our system, in contrast, includes a ROS-based Ground Control Station, real-time vehicle diagnostics, state machines for mission governance, and high-bandwidth payload coordination.
HeRoSwarm [13] and Pobogot [14] both represent low-cost, open-hardware swarm platforms. HeRoSwarm supports ROS-based coordination in miniature swarm robots, while Pobogot emphasizes modularity and directional inter-robot communication. These platforms are well-suited for laboratory-scale experiments and rapid development cycles but are not designed for deployment with real UAV hardware or in field conditions.
In contrast, our system demonstrates scalability and robustness in real-world VTOL and fixed-wing UAV configurations, managing complex missions such as multi-objective planning, target reallocation, and collaborative surveillance. Moreover, it supports tactical operations with time-division multiplexed RF communication, GPS-based synchronization, and payload-specific tasking.
GenGrid [15] introduces a distributed environmental grid architecture for swarm experimentation, enabling researchers to simulate and log interaction-heavy behaviors under structured conditions. While GenGrid supports extensive logging and feedback from diverse agents, it is a testbed rather than a fully integrated framework. Our system, however, tightly integrates hardware, middleware, mission logic, and operator interfaces into a deployable and autonomous architecture for cooperative mission execution.
UAVros is an open-source set of ROS1/ROS2 packages that enables the simulation and partial control of UAV and UGV swarms using PX4 and Gazebo. It includes modules such as uavros_uav, ugv_sitl for aerial–ground cooperation, and uavros_multi_gimbal_sitl for multi-UAV target tracking with gimbals. While it offers good modularity and supports both aerial and ground vehicles, it is limited in hardware integration and lacks full support for autonomous decision-making and payload-aware mission assignment [16].
PX4 Swarm Controller is a ROS2-compatible leader–follower framework focused on multi-drone trajectory following and coordination using PX4. It emphasizes scalability and real-time swarm flight but does not support mission planning, advanced payload integration, or multi-domain expansion. The project was developed at École Centrale de Nantes as part of an R&D initiative [17].
ROS2swarm is a collection of ROS2 nodes for building decentralized swarm behaviors with plug-and-play logic modules [18]. It supports simulation of behaviors like flocking and formation and has been successfully demonstrated on TurtleBot3 and Jackal UGV platforms. However, it is currently limited to Gazebo simulation and lacks built-in GCS integration, mission inheritance, or hardware synchronization [19].
This paper introduces a modular swarm framework designed to address these challenges. Built on the Robot Operating System (ROS) and leveraging C++ and ArduPilot, the framework provides a unified approach to coordinating autonomous vehicles across multiple domains. Key features include the following:
  • Scalability: A modular design that supports the seamless addition of new platforms and payloads.
  • Interoperability: Integration with open-source tools and protocols, ensuring compatibility across robotic systems.
  • Autonomy: Advanced decision-making algorithms for collision avoidance, dynamic mission planning, and target reallocation.
To evaluate the framework, extensive simulations and real-world experiments were conducted. These tests demonstrated its effectiveness in coordinating heterogeneous swarms, achieving 100% collision avoidance success, and reducing operator workload through autonomous decision-making.
Table 1 presents a comparative analysis of key swarm robotics frameworks from academia and open-source communities. The comparison focuses on critical aspects such as ROS version, support for UAV/UGV platforms, distributed control, hardware integration, modularity, and payload management.
This overview highlights the strengths and limitations of well-known systems like Aerostack2, ROS2swarm, PX4 Swarm Controller, SwarmUS, and UAVros, placing them in context with the framework proposed in this paper. Notably, this work stands out for its full support of heterogeneous platforms, advanced payload coordination, and field-proven real-time autonomy—features that are only partially addressed by most existing solutions.
As shown in Table 1, most existing frameworks focus on either simulation environments or homogeneous UAV systems, with limited integration of real hardware, payload-aware autonomy, or support for heterogeneous multi-domain platforms. While frameworks like Aerostack2 and ROS2swarm demonstrate strong modularity and behavior-based control, they fall short in terms of deployment-readiness and operational robustness.
Recent advances in swarm robotics have enabled the coordination of large groups of autonomous agents in air, land, and maritime domains. This multi-domain dimension adds significant complexity due to the heterogeneity of vehicle dynamics, sensor payloads, communication requirements, and operational constraints. Early research demonstrated the feasibility of inter-domain swarm communications, such as aerial–ground coordination via ZigBee mesh networks [20]. More recent efforts have explored the theoretical and architectural underpinnings of distributed swarm control across heterogeneous teams [21], including underwater swarm systems that present distinct navigation and autonomy challenges [22]. Frameworks such as Buzz [23] and SwarmUS [12] have shown how local consensus or decentralized state-sharing can enable scalable behaviors across mixed-agent teams. From a practical standpoint, ROS-based simulation toolchains such as ArduPilot-Gazebo extensions have provided accessible platforms for prototyping swarm systems [24].
To address high-level mission autonomy, modular implementations based on finite state machines (FSMs) have been demonstrated in real swarm deployments, such as leader–follower UAV networks with onboard autonomy [25]. Additionally, real-time planning approaches like the Optimal Virtual Tube (OVT) method offer scalable coordination strategies to ensure safe navigation under dynamic constraints [26]. Specific studies have also tackled hybrid operations involving UAVs and UGVs, demonstrating adaptive leadership [27] and robust cooperative landing under uncertainty [28]. Despite these advances, most existing frameworks remain domain-specific or lack fully integrated support for heterogeneous real-time swarm coordination. The present work addresses this gap by proposing a unified and modular multi-domain swarm framework based on ROS and ArduPilot, validated both in simulation and flight experiments with fixed-wing UAVs.
In contrast to previous models, the system proposed in this paper uniquely combines full-stack modularity, distributed logic, and compatibility with both aerial and ground vehicles. Its demonstrated integration with distributed communication systems, mission-specific payloads, and real flight testing positions it as a practical and scalable solution for real-world swarm operations.
The remainder of this paper is structured to reflect the progressive development, implementation, and validation of the proposed swarm framework.
Section 2 presents the full architecture of the system, beginning with a high-level overview of the hardware components—including autopilot, companion computer, communication systems, and payload interfaces—followed by an in-depth explanation of the modular software design. This section dissects the functionality of each module (e.g., state machine, mission manager, swarm manager), describes their interdependencies, and details the communication protocols and decision logic used within the swarm. It also includes the design of the centralized swarm database, which manages platform and payload metadata, and explains how mission planning and voting mechanisms operate within this architecture.
Section 3 focuses on the Ground Control Station (GCS), which acts as a swarm-integrated node with bi-directional communication capabilities. This section outlines the software stack, data flow, and interface logic used by the operator to monitor and control the swarm. It also introduces the mission command capabilities, interface layers, and safety control tools embedded within the GCS.
Section 4 describes the experimental methodology used to validate the framework, encompassing both simulation and real-world testing. It begins with a description of the Software-in-the-Loop (SITL) simulation environment used to assess the swarm’s core functionalities—such as collision avoidance, area coverage, and target reallocation—under controlled conditions. The section then presents real flight experiments with fixed-wing VTOL UAVs, including detailed mission scenarios, swarm compositions, communication configurations, and quantitative performance metrics. Both qualitative and quantitative results are analyzed, with emphasis on mission success rate, spatial coordination accuracy, and collision avoidance effectiveness.
Section 5 concludes the paper by summarizing the main contributions and highlighting the impact of this framework in multi-domain swarm operations. It also outlines potential extensions for future work, including enhancements in scalability, integration of advanced communication protocols, and the use of machine learning for adaptive mission planning and manned–unmanned teaming.

Methodology

The proposed swarm framework was conceived as a distributed robotic system in which each autonomous platform—be it aerial, ground, or surface—is modeled as an independent node within a decentralized network. Following the principles of distributed architectures [6,7], the system eliminates single points of failure and enables consensus-driven behaviors through peer-to-peer communication protocols. Vehicles are treated as agents in a fully distributed environment, capable of perceiving their state, exchanging information with others, and reacting autonomously to mission changes without requiring centralized supervision. Finite state machines (FSMs), which have proven effective in modeling autonomous multi-agent systems [9], are used at the core of the decision-making layer, ensuring deterministic, predictable transitions and enhancing system resilience in high-risk operational contexts.
In terms of real-time embedded integration, the framework adopts a dual-layer control structure: a high-level ROS-based logic running on companion computers, and a low-level control interface with ArduPilot-based autopilots via MAVLink. Key functions—such as collision avoidance, mission assignment, and inter-agent messaging—are implemented using asynchronous ROS nodes and callback-based execution, ensuring timely responses to dynamic mission events. Deterministic communication is supported through time-division multiplexing (TDM), avoiding conflicts in the mesh network and ensuring robustness, particularly under degraded communication conditions [8]. Furthermore, system diagnostics, health status, and mission states are monitored at fixed frequencies, with safety watchdogs implemented at the node level to guarantee bounded reaction times even in failure scenarios.
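For illustration, the following minimal roscpp sketch shows this pattern of callback-based execution combined with a fixed-frequency safety watchdog; the topic name, message type, and timeout value are illustrative assumptions rather than the framework's actual interfaces.

```cpp
// Minimal watchdog sketch (hypothetical names): a node-level safety monitor
// that checks heartbeat age at a fixed rate, bounding the reaction time.
#include <ros/ros.h>
#include <std_msgs/Header.h>

class NodeWatchdog {
public:
  NodeWatchdog(ros::NodeHandle& nh, double timeout_s)
      : timeout_(timeout_s), last_beat_(ros::Time::now()) {
    sub_ = nh.subscribe("heartbeat", 1, &NodeWatchdog::onBeat, this);
    // Fixed-frequency check (10 Hz) guarantees a bounded detection delay.
    timer_ = nh.createTimer(ros::Duration(0.1), &NodeWatchdog::onCheck, this);
  }

private:
  void onBeat(const std_msgs::Header& msg) { last_beat_ = msg.stamp; }
  void onCheck(const ros::TimerEvent&) {
    if ((ros::Time::now() - last_beat_).toSec() > timeout_) {
      ROS_WARN("Heartbeat lost: triggering emergency action");
      // A real node would publish an alert consumed by the state machine.
    }
  }
  double timeout_;
  ros::Time last_beat_;
  ros::Subscriber sub_;
  ros::Timer timer_;
};

int main(int argc, char** argv) {
  ros::init(argc, argv, "watchdog_sketch");
  ros::NodeHandle nh;
  NodeWatchdog wd(nh, 1.0);  // 1 s heartbeat timeout (illustrative value)
  ros::spin();
  return 0;
}
```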
Scalability is addressed both architecturally and algorithmically. The modular software structure allows the seamless addition or removal of nodes (vehicles or ground stations) at runtime. All swarm behavior modules—such as mission planning, anti-collision, and swarm state monitoring—are implemented as ROS metapackages that are reusable and hardware-agnostic [12]. Algorithmically, mission planners (e.g., area, perimeter, multi-objective) and the voting-based task assignment mechanisms are designed to support variable swarm sizes efficiently. Performance is primarily bounded by communication latency rather than logic complexity, enabling the operation of dozens of agents concurrently [13,16].

2. Architecture

The purpose of our swarm system is to manage a group of autonomous vehicles, such as Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), that independently coordinate and execute a range of missions or tasks. These vehicles function as nodes in a decentralized swarm system, where no node holds a higher level than the others. Instead, every node can operate autonomously, and in the event of a malfunction in one of them, the remaining nodes can adapt and sustain proper functionality.
In order to accomplish this, the software will manage three primary actions:
  • Facilitating the transfer and reception of data (such as telemetry and commands) among the other nodes.
  • Establishing the decision-making for several coordinated missions.
  • Executing missions based on the vehicle’s type to effectively accomplish diverse objectives.
Each vehicle in the swarm is equipped with identical software, but is distinguished by a unique global identity based on the vehicle model and a user identification that distinguishes vehicles of the same model.
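As an illustration, the global identity could be represented as follows; the field widths and the packed encoding are assumptions made for this sketch, not the framework's actual scheme.

```cpp
// Illustrative global identity: the swarm-wide ID combines the vehicle model
// with a per-model user ID, so two vehicles of the same model stay distinct.
#include <cstdint>

struct GlobalId {
  uint16_t model_id;  // identifies the vehicle model (e.g., a VTOL type)
  uint16_t user_id;   // operator-assigned, unique among vehicles of that model
  bool operator==(const GlobalId& o) const {
    return model_id == o.model_id && user_id == o.user_id;
  }
};

// Packing both fields into one 32-bit key gives a convenient lookup index.
inline uint32_t packedId(const GlobalId& id) {
  return (static_cast<uint32_t>(id.model_id) << 16) | id.user_id;
}

int main() {
  GlobalId a{3, 1}, b{3, 2};  // same model, different user IDs
  return (!(a == b) && packedId(a) != packedId(b)) ? 0 : 1;
}
```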

2.1. General Hardware Architecture of the System

Each vehicle in the swarm features a modular hardware configuration comprising the components presented in Figure 1.
  • Autopilot: Responsible for executing vehicle movement commands, and communicates with the companion computer via MAVLink protocol.
  • Mesh radiomodem: Establishes communication with other nodes in the swarm. Uses a decentralized ad hoc network for efficient data exchange.
  • Companion computer: Executes high-level swarm logic. Hosts the ROS-based software modules for task management and inter-node communication.
  • Payload radiomodem: Transmits the payload information, video, and metadata, and controls the high-bandwidth payload systems.
The hardware configuration is designed to ensure redundancy and fault tolerance. In case of a node failure, the remaining vehicles adapt dynamically to maintain mission integrity.

2.2. General Software Architecture of the System

The software framework is organized into eight distinct modules, as shown in Figure 2, each addressing specific functionalities essential for swarm coordination:
  • MAVROS: Facilitates communication with the autopilot using the MAVLink protocol.
  • Vehicle Monitoring: Monitors vehicle status and security parameters, generating alerts for anomalies.
  • Swarm Manager: Detects new nodes, manages active nodes, and ensures synchronization across the swarm.
  • State Machine: Governs mission execution through three primary states: Planning, On Mission, and Emergency.
  • Mission Manager: Assigns tasks to nodes, monitors mission progress, and supports dynamic replanning.
  • Collision Avoidance: Prevents inter-vehicle collisions through altitude adjustments and route modifications.
  • Swarm Interface: Manages communication between nodes, enabling decentralized coordination.
  • Device Configuration: Handles the setup of hardware components such as autopilots and radiomodems.

2.3. Architecture of the Modules

The aforementioned modules are categorized into the subsequent metapackages:
  • Common modules that are shared across all projects in the swarm system.
  • CommsSystem consists of the communication modules, device setup, and the communication interface that connects the swarm with the radio frequency elements.
  • GroundSystem incorporates a node that is capable of receiving ADS-B signals in order to determine the positions of other aircraft.
  • SwarmSystem comprises the anti-collision, mission manager, state machines, and swarm manager packages.

2.3.1. Swarm Database

The swarm database is a centralized repository that organizes and manages all critical information related to vehicles, missions, and payloads. It ensures seamless coordination across the swarm by maintaining consistency and enabling real-time decision-making. The database is designed to balance modularity and scalability, allowing for integration with heterogeneous robotic systems.
The database structure consists of three primary categories (Systems, Swarm, and Payloads), as well as a set of common elements shared across all entries:
  • Systems: Consolidates all vehicle-related data and comprises seven tables:
    Classification system: Vehicles are categorized into three tiers: type, subtype, and model. The topmost level of the hierarchy is encompassed within the type table. For instance, it includes UAVs, UGVs, or Unmanned Surface Vehicles, as shown in Figure 3.
    System subtypes: Includes all variations or categories of systems. Each subtype is linked to a specific category of vehicle, such as UAVs or UGVs. For example, UAV subtypes include fixed-wing and quadrotor, while UGV subtypes include rover.
    System model: Encompasses all system models, including fixed-wings, Vertical Take-Off and Landing (VTOL), multicopters, helicopters, and Ground Control Stations (GCS). Every model is linked to a specific vehicle subtype. Thus, a specific model is directly associated with a singular subtype, which in turn is linked to a singular type.
    System layouts: This table serves as a foundation for producing additional replicas of vehicles with identical attributes. It consists of two parameters: the specific model of the vehicle and the particular type of payload that is fitted. Thus, a system template is established, which is subsequently used to generate several copies of the identical system (see the sketch after this list).
    System model parameters: This table encompasses all the parameters associated with a model, including, for example, the loiter radius and the presence of a parachute.
    Systems: Each system is associated with a tangible entity. Thus, they are entities linked to system templates (system layouts). By employing this method, it is possible to build several vehicles that possess identical attributes. The table consists of a user-defined ID and template, with each generated vehicle being internally allocated a distinct ID. It is important to mention that duplicating the same user ID and layout is not allowed. It is not possible to have two models with the same name; each model must be unique.
    System parameters refer to the specific parameters that are linked to the vehicle itself, for instance, the udev wiring rules and MAC address.
  • Swarm: Consolidates all the data pertaining to the swarm and is categorized into four tables:
    Missions: Includes many categories of swarm missions, including target, area, perimeter, and more.
    Mission parameters: Encompasses all the variables and criteria associated with the missions.
    Modules: Includes various swarm modules, such as the anti-collision module designed for fixed-wing vehicles and the anti-collision module specifically designed for quadcopters.
    Module parameters: Contains all the essential parameters required for the functioning of the swarm modules.
  • Payload: Consolidates all data pertinent to the payload, arranged into twelve tables:
    Payload type: A three-level framework has been established to classify payloads, analogous to the system hierarchy. The payload type pertains to the highest classification and encompasses instances such as attack, radio frequency, sound, etc.
    Payload subtype: The subtype is a hierarchical classification beneath the primary type. Loitering munition, effector, and turret are classifications of attack.
    Payload model: The model represents the ultimate tier and is contingent upon the subtype. Loitering munitions are categorized into light, medium, and heavy varieties.
    Gimbal Axis Parameters: Encompasses all parameters related to each gimbal axis (roll, tilt, and pan).
    Gimbal configurations: This table functions as a foundation for producing further replicas of gimbals with identical attributes. It consists of five parameters: baud rate, period, pan axis parameter, tilt axis parameter, and roll axis parameter.
    Sensors: Comprises many sensor types including IR, Lidar, and camera, among others.
    Sensor parameters: Includes all sensor specifications, such as horizontal field of view (HFOV), vertical field of view (VFOV), resolution, etc.
    Payload layouts: This table functions as a foundation for producing further iterations of payloads with identical attributes. It comprises two parameters: the gimbal configuration and the attack type (e.g., a two-axis gimbal + software defined radio (SDR)).
    Payloads: These are replicas of the payload layouts table. This is designed to produce many payloads with identical attributes. The table contains the user ID and its configuration, with each payload allocated a distinct internal ID. The identical user ID and layout combination cannot be duplicated.
    Payload parameters: Distinctive parameters of each payload, including interfaces, protocols, etc.
    Autodetection class: Comprises all classes that the autodetection module is capable of identifying, including individuals, vehicles, etc.
    Motor: Includes the type of motorization used in the gimbal, servos, brushless, etc.
  • Common: Consolidates all the shared data from prior models and comprises the following tables:
    Layout of Onboard Computer: A table comprising various sorts of companion computers, such as Raspberry Pi and NVIDIA Jetson.
    Communication systems: Includes the IP addresses used for establishing communication with the Microhard radios.
    Corrections: This table consists of four columns. The initial column represents the bug ID, the second column indicates the module from which the bug is reported, the third column provides a description of the bug problem, and the final column denotes the level of criticality. The ground interface visually represents faults using different colors according to their level of criticality.
    PID controllers: This table stores the numerical values of the PID controllers for the gimbal, consisting of the proportional, integral, and derivative terms, together with the maximum cumulative error of the integral term and a Boolean anti-windup flag.
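As a rough illustration of the layout mechanism referenced above, the following sketch shows how a layout pairing a model with a payload type can be instantiated into several systems, each receiving a distinct internal ID; all names are illustrative, and the framework itself stores these relations as database tables rather than in-memory structures.

```cpp
// Illustrative sketch of system layouts and their instantiation.
#include <string>
#include <vector>

struct SystemLayout {              // template: model + fitted payload type
  std::string model;
  std::string payload_type;
};

struct System {                    // a tangible vehicle built from a layout
  int user_id;                     // operator-assigned, unique per layout
  int internal_id;                 // allocated automatically on creation
  SystemLayout layout;
};

// Instantiate n copies of the same layout; each gets a distinct internal ID
// and a distinct user ID (duplicating user ID + layout is not allowed).
std::vector<System> instantiate(const SystemLayout& layout, int n) {
  static int next_internal_id = 0;
  std::vector<System> out;
  for (int user_id = 0; user_id < n; ++user_id)
    out.push_back({user_id, next_internal_id++, layout});
  return out;
}

int main() {
  SystemLayout vtol_cam{"vtol_a", "eo_ir_gimbal"};  // illustrative names
  return instantiate(vtol_cam, 3).size() == 3 ? 0 : 1;
}
```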

2.3.2. Vehicle Monitoring

Every swarm vehicle produces a substantial quantity of data via its telemetry port. MAVROS is a ROS package whose name combines the terms MAVLink and ROS; it bridges the autopilot, which uses MAVLink as its communication protocol, with the ROS environment. Due to limitations of the mesh communications network, it is not possible to transmit all the available information for each vehicle. As a result, a dedicated message called VehicleStatus was developed. This message contains only the essential data required for the functioning of the system and for informed decision-making.
After the class is constructed, its main tasks are populating the VehicleStatus message and monitoring several security parameters. If any of these parameters deviate from their expected values, the class generates an alert.
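A condensed status record in the spirit of VehicleStatus might look as follows; the field set and bounds shown here are illustrative assumptions, not the actual message definition.

```cpp
// Hypothetical condensed status record: only the fields needed for swarm
// decision-making travel over the bandwidth-limited mesh, not the full
// MAVLink telemetry stream.
#include <cstdint>
#include <vector>

struct VehicleStatusMsg {     // illustrative fields, not the actual .msg
  uint32_t vehicle_id;
  double lat, lon, alt;       // position (deg, deg, m)
  float battery_pct;          // 0-100
  uint8_t state;              // current FSM state
};

// Monitoring sketch: collect an alert when a security parameter leaves
// its expected bounds (thresholds are illustrative).
bool checkSecurity(const VehicleStatusMsg& s,
                   std::vector<const char*>& alerts) {
  if (s.battery_pct < 20.0f) alerts.push_back("low battery");
  if (s.alt < 0.0)           alerts.push_back("invalid altitude");
  return alerts.empty();
}

int main() {
  VehicleStatusMsg s{1, 37.4, -5.9, 120.0, 15.0f, 0};  // low-battery example
  std::vector<const char*> alerts;
  return checkSecurity(s, alerts) ? 0 : 1;  // returns 1: an alert was raised
}
```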

2.3.3. Swarm Manager

The swarm manager module, in Figure 4, is the highest-level module. This module groups information from all the modules and creates the VehicleInfo type of message. This structure encompasses data collected from the vehicle monitoring and state machine components. The VehicleInfo message is transmitted to the Ground Control Station (GCS) and other vehicles via the swarm interface module. Error messages are also conveyed using this message.
The main tasks of the swarm manager are as follows:
  • Creates the VehicleInfo message from the SwarmStatus, UAVExtraInfo, and VehicleStatus messages.
  • Manages the list of vehicles and connected GCSs, controlling which of them is active.
  • Checks whether it is correct to send a mission to the state machine (see the sketch after this list). The mission will be rejected in the following cases:
    Home position is not defined;
    The vehicle status is not ready;
    When the mission does not affect the vehicle, because the vehicle is not in the fleet or on the mission list.
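The acceptance gate can be sketched as follows; structure and function names are illustrative assumptions, not the framework's actual API.

```cpp
// Sketch of the swarm manager's mission-acceptance checks described above.
#include <algorithm>
#include <vector>

struct MissionMsg { std::vector<int> fleet_ids; };

struct SwarmManagerState {
  bool home_defined = false;
  bool vehicle_ready = false;
  int  my_id = 0;
};

// A mission is forwarded to the state machine only if all checks pass.
bool acceptMission(const SwarmManagerState& s, const MissionMsg& m) {
  if (!s.home_defined) return false;    // home position not defined
  if (!s.vehicle_ready) return false;   // vehicle status not ready
  const bool affects_me = std::find(m.fleet_ids.begin(), m.fleet_ids.end(),
                                    s.my_id) != m.fleet_ids.end();
  return affects_me;                    // ignore missions for other fleets
}

int main() {
  SwarmManagerState st{true, true, 7};
  MissionMsg m{{3, 7, 12}};
  return acceptMission(st, m) ? 0 : 1;  // accepted: vehicle 7 is in the fleet
}
```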

2.3.4. State Machine

Figure 5 depicts a simplified representation of the finite state machine (FSM) used to govern the mission execution logic. The FSM is implemented programmatically with three primary states (Planning, On Mission, Emergency) and transitions triggered by specific actions or events. Although the figure adopts a flowchart-style layout for clarity, the underlying system is a deterministic FSM operating at 5 Hz, ensuring state consistency and robust transition handling. The state machine comprises two classes:
  • The StateMachine, where the transitions take place.
  • The CommandManager, which receives missions from the swarm_manager module and activates operations that facilitate transitions between states.
The process of vehicle decision-making consists of the following stages:
  • STATE_PLAN: In this state, the mission is assigned to the mission module, under the requirement that all vehicles in the newly formed fleet are in an identical state. Once this condition is met, a confirmation message is required to execute a state transition. In the event of inadequate planning, an alert is triggered and the vehicle is removed from the fleet, returning to its home position. To prevent the state machine from becoming unresponsive, an event is generated when it remains in this state for an extended period; a timer triggers this event and instructs the vehicle to cease loitering and return to its home location.
  • STATE_GO_MISSION: This state refers to the vehicle actively carrying out its assigned task. Possible events in this state are as follows:
    The vehicle monitoring system receives an alert instructing it to switch to emergency action.
    Upon receiving a mission complete ack, the vehicle remains in the fleet and returns to its base.
    A landing ack is received, indicating that the state machine has been deactivated.
    An alarm is received from other modules, indicating that the vehicle has departed from the fleet and returned to the base.
  • STATE_EMERGENCY: In the event of an alert being received from the vehicle monitoring module, an emergency protocol is enacted. This module directs the essential actions to be executed for the specified emergency.
State transitions occur via activities. The actions are as follows:
  • ACTION_NONE: The action that is automatically taken as the default option. This action does not result in any alteration of the current state.
  • ACTION_READY: Upon system initialization, the state machine module must await confirmation from the vehicle monitoring module that the vehicle has been started correctly. Upon occurrence of this event, the action results in a transition to the STATE_GO_MISSION.
  • ACTION_NEW: This action transitions the state to STATE_PLAN.
  • ACTION_REPLAN: This action transitions the state to STATE_PLAN.
  • ACTION_EMERGENCY: This action transitions the state to STATE_EMERGENCY.
Figure 5 illustrates the operational behavior of the state machine executed by each agent. The loop operates at a frequency of 5 Hz and is initiated when the ROS system is launched and the parameter indicating activity is set to true. Likewise, if a base has not been defined, no transitions occur. If any of these conditions are not met, the state machine is deactivated. As previously explained, there are three states, each of which executes a sequence of tasks; after these tasks are completed, the transition between states takes place.
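The following condensed C++ sketch illustrates the deterministic transition logic and the 5 Hz loop described above; the state and action names follow the paper, while the surrounding scaffolding (loop bound, flags) is illustrative.

```cpp
// Minimal sketch of the 5 Hz state machine loop.
#include <chrono>
#include <thread>

enum State  { STATE_PLAN, STATE_GO_MISSION, STATE_EMERGENCY };
enum Action { ACTION_NONE, ACTION_READY, ACTION_NEW, ACTION_REPLAN,
              ACTION_EMERGENCY };

// Deterministic transition table: each action maps to one successor state.
State transition(State current, Action action) {
  switch (action) {
    case ACTION_READY:     return STATE_GO_MISSION;
    case ACTION_NEW:       return STATE_PLAN;
    case ACTION_REPLAN:    return STATE_PLAN;
    case ACTION_EMERGENCY: return STATE_EMERGENCY;
    default:               return current;   // ACTION_NONE: no change
  }
}

int main() {
  State state = STATE_PLAN;
  Action pending = ACTION_READY;           // would be set by other modules
  const bool base_defined = true;          // no transitions until base is set
  for (int tick = 0; tick < 25; ++tick) {  // bounded stand-in for ros::ok()
    if (base_defined) {
      state = transition(state, pending);
      pending = ACTION_NONE;               // actions are consumed once applied
      // ... per-state duties (planning, mission execution, emergency plan)
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(200));  // 5 Hz
  }
  return 0;
}
```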
Figure 6 displays the STATE_PLAN switch. An emergency alert triggers the action to switch to ACTION_EMERGENCY. The state machine module sends an error and initiates a return home mission either when a significant time has elapsed since the activation of STATE_PLAN or upon receiving an alert for inadequate planning. Until then, all vehicles listed for the mission, i.e., those belonging to the same fleet as the vehicle in question, remain in the STATE_PLAN state. Once this condition is met, the mission is made available. Subsequently, the mission management module transmits an acknowledgment to confirm successful planning, or issues a warning in the event of an error.
If an alert is received when the system is in the STATE_GO_MISSION state, as in Figure 7, the action is switched to ACTION_EMERGENCY, as can be seen in Figure 8. Alternatively, one of the following outcomes would occur:
  • Mission Complete ACK: The necessary action has been taken regarding ACTION_NEW and a mission has been assigned to each fleet identifier for maintenance.
  • Landing Acknowledgement Completed: A command to shut down and a confirmation of mission completion are delivered.
  • Mission alert received: The action has been updated to ACTION_NEW, and a mission to return home is given, abandoning the existing fleet.
During the emergency state, as depicted in the figure, the module executes the emergency plan and continues to do so until the shutdown is initiated.

2.3.5. Mission Manager

The mission manager module serves the goal of strategizing and coordinating all missions received from the state machine, and subsequently directing these missions to the autopilot system, as can be seen in Figure 9. Furthermore, this module oversees the progress of the task, providing information on the precise percentage of completion. This monitoring enables the module to regulate the speed and, in the event of replanning, inform the selected vehicle assigned to carry out the replan mission that a portion of the operation remains unfinished.
The mission manager utilizes a loop function that employs a switch case to transition between different states as long as the mission remains active. The states are as follows:
  • MISSION_STATE_PLANNING: This state involves the computation of mission routes based on the number of vehicles needed for the mission. The assignment of each route to each aircraft is determined through a voting process, assuming the planning is accurate.
  • MISSION_STATE_VOTING: After all votes are received, they are tallied and the designated route is transmitted to the autopilot.
  • MISSION_STATE_PREMISSION: This state applies only to missions that require a pre-mission. A Dubins path must be computed for each vehicle and, once transmitted to the autopilot, airspeed control is executed along each Dubins path.
  • MISSION_STATE_MISSION: Sends the calculated mission in the planning state and performs airspeed control until the mission is finished.
  • MISSION_STATE_PARACHUTE: When the vehicle is equipped with a parachute, this protocol is activated to initiate the opening process.
  • MISSION_STATE_SENDING_MISSION: This state specifically impacts missions of the swap type. The revised plan for the mission is transmitted to a different fleet.
  • MISSION_STATE_WAITING_MISSION: This state applies only to swap-type missions. The module waits until a mission is assigned.
Each individual case depicted in the figure above is presented below.
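Before detailing each case, the following condensed sketch illustrates the switch-case dispatch described above; the enumerators follow the paper, while the transitions noted in the comments are a simplification of the per-state logic presented in the figures below.

```cpp
// Condensed sketch of the mission manager's loop dispatch.
enum MissionState {
  MISSION_STATE_PLANNING, MISSION_STATE_VOTING, MISSION_STATE_PREMISSION,
  MISSION_STATE_MISSION, MISSION_STATE_PARACHUTE,
  MISSION_STATE_SENDING_MISSION, MISSION_STATE_WAITING_MISSION
};

void missionLoopStep(MissionState& state) {
  switch (state) {
    case MISSION_STATE_PLANNING:        // compute routes, then trigger voting
      state = MISSION_STATE_VOTING; break;
    case MISSION_STATE_VOTING:          // tally votes, send route to autopilot
      state = MISSION_STATE_PREMISSION; break;
    case MISSION_STATE_PREMISSION:      // Dubins approach + airspeed control
      state = MISSION_STATE_MISSION; break;
    case MISSION_STATE_MISSION:         // execute until completion ack
      break;
    case MISSION_STATE_PARACHUTE:       // open the parachute on landing
      break;
    case MISSION_STATE_SENDING_MISSION: // swap: transmit replanned mission
      break;
    case MISSION_STATE_WAITING_MISSION: // swap: wait for a new assignment
      break;
  }
}

int main() {
  MissionState s = MISSION_STATE_PLANNING;
  missionLoopStep(s);
  return s == MISSION_STATE_VOTING ? 0 : 1;
}
```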

2.3.6. Planning State

The initial phase conducted upon receiving a mission is the planning stage, as shown in Figure 10. Initially, the mission’s routes are determined based on the allocation of vehicles for that task. In the event of a failure in the calculation, an alert is dispatched to the state machine and the mission pointer is reset. It is necessary to receive at least one information message from the other vehicles. Upon receipt, every vehicle is allocated a route based on the chosen criteria such as distance, battery level, and so on. The status is updated to VOTING.
(a) Voting state
During the VOTING phase, all vehicles are required to submit their votes, which are then tallied to assign the chosen route. Once the assignment is deemed correct, the chosen path is transmitted to the autopilot system. Alternatively, if a condition is not met, an alert is displayed and the task is removed, as can be seen in Figure 11.
(b) Pre-mission state
If a mission includes a pre-mission phase, as in Figure 12, involving the calculation of Dubins paths, the speed control is executed only if the pre-mission is complete. In the event that collision avoidance is activated, a new pre-mission is calculated to account for potential route crossings within a specified time frame. Once a safe pre-mission has been determined, it is transmitted to the autopilot system.
(c) Mission state
In the MISSION state, as in Figure 13, if the mission has not been completed, speed control is executed so that the fleet maintains a common speed. When the mission has finished, a completion ack is sent, except when it is the landing mission for UAVs with parachutes, in which case the status is changed to parachute.
(d) Replan state
During the REPLAN state, as in Figure 14, a fresh mission is computed by taking into account the completion percentage of the previous mission from the vehicle list. The new fleet list for the mission is then published, replacing the old list, and the task is completed.
(e) Sending mission state
This case, shown in Figure 15, only occurs when a swap-type mission is received and one of the fleets with which an exchange is planned becomes empty. First, the module checks whether the mission replanning has already been sent. If it has not, the replanning is calculated and sent to the fleet with which the exchange is planned, so that the other vehicles continue the mission. Once the replanning has been sent, the next iteration of the loop checks whether both fleets have become empty; if so, the waiting mission state is applied. If both fleets are not empty, a replan is published and the mission being carried out is changed to an “add” to the other fleet.
(f) Waiting mission state
This state loops continuously until the unit is assigned a specific task. Upon receiving the mission, a check is conducted to verify whether both fleets are empty. If this condition is met, the revised mission plan is recalculated and transmitted to the fleet designated for the exchange, as can be seen in Figure 16.
(g) Mission class
This class, in Figure 17, serves as the fundamental template for all missions. Certain functions inside this class are designated as virtual, enabling derived classes to override them. A mission consists of two distinct phases: the initial phase, known as pre-mission, involves navigating from the current position to the mission’s entry point via Dubins paths; the subsequent phase is the mission phase itself. Pre-mission is not required in certain scenarios, such as when the aircraft is not of the fixed-wing type, when there is only one vehicle involved, or when a Dubins route is not essential for the mission. Displayed below is the diagram illustrating the inheritances of the mission class.
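A minimal C++ sketch of this inheritance pattern is shown below; class and method names are illustrative, not the framework's actual API.

```cpp
// Sketch of the mission base class: virtual hooks let derived missions
// (area, target, perimeter, ...) override planning, while the pre-mission
// (Dubins approach) phase is skipped when not required.
class Mission {
public:
  virtual ~Mission() = default;
  virtual bool plan(int n_vehicles) = 0;                 // mission-specific
  virtual bool needsPremission() const { return true; }  // Dubins approach
  void execute() {
    if (needsPremission()) { /* fly Dubins path to the entry point */ }
    /* then run the mission phase itself */
  }
};

class AreaMission : public Mission {
public:
  bool plan(int n_vehicles) override {
    /* decompose the polygon into lanes, one set per vehicle */
    return n_vehicles > 0;
  }
};

class TargetMission : public Mission {
public:
  bool plan(int n_vehicles) override { return n_vehicles > 0; }
  // Example of skipping pre-mission, e.g., for a single rotary-wing vehicle.
  bool needsPremission() const override { return false; }
};

int main() {
  AreaMission m;
  return m.plan(3) ? 0 : 1;
}
```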
Within the system, fundamental missions are present, encompassing dedicated planners and mission actions aimed at enhancing capabilities and operational flexibility. Regarding missions, we observe the following:
  • Area: Supervise a region using many Unmanned Aerial Vehicles (UAVs) in a synchronized fashion.
  • Target: Supervise the trajectory of several Unmanned Aerial Vehicles (UAVs) in a synchronized way.
  • Perimeter surveillance: Aims to maximize the duration of overflight at each point.
  • Objective: Monitor a certain location using Unmanned Aerial Vehicles (UAVs) that are evenly distributed.
  • Autoland: Coordinates swarm airport operations.
  • Manned–Unmanned Teaming: Refers to the collaboration and coordination between manned and autonomous or remotely controlled systems. This is explained in a dedicated section, as it forms a fundamental aspect of the system.
The actions section contains the following:
  • Add: Integrate an Unmanned Aerial Vehicle (UAV) into a mission fleet.
  • Swap: Transfer vehicles between mission fleets.
  • Collision avoidance: Proactively responds to potential collisions with other vehicles in the swarm, or when they are at a distance that poses a risk. To prevent collisions, the primary responsive measure is to adjust the altitude, minimizing the need for significant alterations to the flight paths.
  • Deliberative module: Performs decision-making using deterministic metrics, such as distance and battery level. The vehicles that achieve the greatest scores are the ones that perform the assignment.
Missions may be dispatched to a designated set of vehicles (chosen individually) or by specifying a quantity from the total. Voting facilitates the latter scenario. A score is determined by specific measures, and the vehicles with the greatest scores execute the mission.
The voting procedure, in Figure 18, commences upon the receipt of a fleet mission message, provided that specific parameters are non-zero (value ≠ 0). The parameters are as follows:
  • Vehicle quantity: The quantity of vehicles designated to exit the fleet.
  • Payload model: The payload model required to execute the mission. By default, this is set to “none”. This parameter will be populated by the ground operator or automatically as necessary.
Upon receipt of the mission, the voting procedure commences. Each vehicle is tasked with computing the score for every vehicle within its fleet. The score ranges from one (the maximum score) to zero (the minimum score). The score is computed as follows:
score = 1 − w_p·v_p − w_t·v_t − w_b·v_b
where w represents the weight of each term and v denotes the corresponding component score. The weights must sum to one, and each component score varies from zero (the optimal value) to one (the worst value).
The subsequent parameters utilized for calculating the total score and the methodology for computing their respective scores are outlined below:
  • Payload (p): This denotes the category of payload affixed to the vehicle. The weight is 0.6, and the score is computed as follows:
    If the installed payload type corresponds with the model mandated by the mission, it receives a score of 0.
    If it corresponds to the payload subtype mandated by the mission, it receives a score of 0.5.
    If it corresponds to the payload type necessary for the mission, it receives a score of 0.75.
    If there is no match, it receives a score of 1, denoting the lowest possible score.
  • Time (t): This denotes the projected time required for the vehicle to arrive at the mission. The weight is 0.2, and the score is computed as follows:
    Initially, the reference point is assessed. This aspect will be contingent upon the nature of the mission:
    For MUM-T missions (terrestrial or aerial), the reference point will be the primary vehicle to which additional units will be appended.
    For missions using area or perimeter, the reference point will be the centroid of the polygon.
    For additional missions, the reference point will be the initiation point of each mission.
    The present distance from the vehicle to the reference location is computed and divided by its cruising velocity. Upon determining the predicted arrival time, the score is computed as atanh(time × 0.01).
  • Battery (b): This denotes the residual battery percentage in the vehicle. The weight is 0.3, and the score follows a linear equation in which 0 signifies 100% battery and 1 signifies 0% battery.
Upon tallying the votes, the requisite number of vehicles with the highest scores depart from their existing fleet, while the remaining vehicles reorganize to persist in their current mission.
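The following sketch works through this scoring rule with the weights given above (0.6, 0.2, and 0.3); helper names are illustrative, and note that atanh(time × 0.01) is well-defined only for predicted arrival times under 100 s.

```cpp
// Worked sketch of the voting score: score = 1 − w_p·v_p − w_t·v_t − w_b·v_b.
#include <cmath>
#include <iostream>

double payloadScore(int match_level) {
  // 0 = exact model, 1 = subtype, 2 = type, 3 = no match (per the list above)
  const double table[] = {0.0, 0.5, 0.75, 1.0};
  return table[match_level];
}

double timeScore(double dist_m, double cruise_mps) {
  const double eta_s = dist_m / cruise_mps;  // predicted arrival time
  return std::atanh(eta_s * 0.01);           // per the text; valid for eta < 100 s
}

double batteryScore(double battery_pct) {
  return 1.0 - battery_pct / 100.0;          // 100% -> 0 (best), 0% -> 1 (worst)
}

double vote(int match_level, double dist_m, double cruise_mps,
            double battery_pct) {
  return 1.0 - 0.6 * payloadScore(match_level)
             - 0.2 * timeScore(dist_m, cruise_mps)
             - 0.3 * batteryScore(battery_pct);
}

int main() {
  // Example: exact payload match, 1 km away at 20 m/s (ETA 50 s), 80% battery.
  std::cout << vote(0, 1000.0, 20.0, 80.0) << "\n";  // ~0.83: strong candidate
  return 0;
}
```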
The proposed swarm framework integrates a suite of modular planning and safety algorithms to support scalable, autonomous coordination in heterogeneous UAV operations. These algorithms operate as independent ROS nodes, interfacing with the mission manager and swarm database in real time to manage complexity, react to environmental constraints, and enforce safety. A brief overview of each algorithm is presented below.
  • The Collision Avoidance Module implements a fully decentralized conflict resolution strategy based on kinematic extrapolation of UAV trajectories (see the sketch after this list). Each agent computes the time-to-collision (TTC) against nearby UAVs using relative velocities and applies an upward-only altitude maneuver when separation thresholds are violated. Conflicts are resolved deterministically using a prioritization rule based on current altitude and UAV ID, with built-in hysteresis and safety constraints that prevent oscillations and ensure altitude restoration after conflict clearance.
  • The Area Mission Planner decomposes arbitrary polygonal regions into a set of sweep lanes using a rotation-optimized boustrophedon strategy. Each UAV is assigned specific lanes based on its altitude, field-of-view, and the required image overlap, enabling cooperative, non-redundant coverage. Replanning support is built-in, allowing dynamic reassignment of subregions in case of UAV failure or reallocation.
  • The Target Planner coordinates the spatial distribution of UAVs around a designated point, creating symmetric or asymmetric formations adapted to both rotary-wing (hover-capable) and fixed-wing (fly-through) platforms. The planner supports 3D structuring through configurable altitude steps and angle spacing, and can integrate the initial UAV positions to optimize arrival vectors. It is particularly well-suited to surveillance, loitering, or synchronized engagement scenarios.
  • The Perimeter Mission Planner generates cyclic patrol paths along closed polygons or circular boundaries, assigning phase-offset starting points to each UAV. Fixed-wing aircraft receive smoothed paths respecting minimum turn radii, while multirotors benefit from tighter maneuvering and responsive yaw adjustments. The planner ensures loop continuity, handles UAV insertion/removal, and maintains consistent spacing through time- or angle-based offsets.
  • The No-Fly Zone (NFZ) Manager provides dynamic geofence handling by detecting intersections between mission routes and evolving airspace restrictions. Upon detecting a violation, it attempts either vertical avoidance (via altitude change) or lateral rerouting (via path extension and polygonal inflation). When no safe path exists within UAV performance constraints, the system triggers a full mission replan and ensures swarm-wide consistency.
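As a minimal illustration of the kinematic extrapolation and deterministic tie-break mentioned in the collision avoidance bullet above, the following fragment computes the closest-approach time under a constant-velocity assumption; the climb-direction rule is only our illustrative reading of the altitude/ID prioritization, and the full logic is given in Supplementary File S1.

```cpp
// Minimal time-to-closest-approach test under constant-velocity extrapolation.
#include <array>

using Vec3 = std::array<double, 3>;

// Minimiser of |dp + t*dv|^2, i.e., the time of closest approach.
double timeToClosestApproach(const Vec3& p1, const Vec3& v1,
                             const Vec3& p2, const Vec3& v2) {
  double num = 0.0, den = 0.0;
  for (int i = 0; i < 3; ++i) {
    const double dp = p2[i] - p1[i];  // relative position
    const double dv = v2[i] - v1[i];  // relative velocity
    num += dp * dv;
    den += dv * dv;
  }
  if (den < 1e-9) return -1.0;  // no closing velocity: parallel tracks
  return -num / den;
}

// Illustrative deterministic tie-break: altitude first, then UAV ID.
bool shouldClimb(double my_alt, double other_alt, int my_id, int other_id) {
  if (my_alt != other_alt) return my_alt > other_alt;
  return my_id > other_id;  // IDs break exact ties deterministically
}

int main() {
  Vec3 p1{0, 0, 100}, v1{20, 0, 0};     // UAV 1: eastbound at 100 m altitude
  Vec3 p2{800, 0, 100}, v2{-20, 0, 0};  // UAV 2: head-on at the same altitude
  const double t = timeToClosestApproach(p1, v1, p2, v2);  // 20 s
  return (t > 0 && shouldClimb(100, 100, 2, 1)) ? 0 : 1;   // ID breaks tie
}
```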
Due to the level of algorithmic complexity, platform-specific adaptations, and inter-module coordination required, only this summary is presented in the main manuscript. A detailed technical specification—including pseudocode, decision logic, configuration parameters, and illustrative diagrams—is provided in the Supplementary Material (File S1).

3. Ground Control Station

The Ground Control Station (GCS), developed in QML, is a software application installed on a desktop computer that allows operators to issue commands and monitor the activities of the swarm. It functions as an additional node within the swarm, establishing bi-directional communication with all components for transmitting missions and commands, and receiving information from vehicles. This section details its system description, architecture, communication framework, mission command capabilities, and operational interface.

3.1. System Description

As mentioned, the GCS (Ground Control Station) provides the swarm operator with two main functions:
  • Monitoring the states of the different vehicles;
  • Interacting with the vehicles by sending different missions or actions.
Communication with the various nodes in the system is carried out using a radio frequency device that is capable of sending and receiving data to/from the other nodes simultaneously. In order for all vehicles to transmit data and for the data to be readable by the others, the transmission time is divided equally between all the nodes in the system. For example, to achieve a frequency of 5 Hz, the 200 ms of each iteration are divided between each node. If there are four nodes, each one would have 50 ms for transmission, and the rest of the time would be spent listening to information from the other nodes.
The application was developed using the Qt-QML language for the graphical interface, which allows the use of C++ to develop low-level functionalities and ensure proper communication between both languages.
Communication with the other nodes uses the same module as the one used in the different vehicles, the swarm_interface module, which receives data via radio frequency and publishes it to a series of topics via ROS. To make use of this, a module was developed within the application that integrates ROS and connects to the various topics from the swarm_interface.

3.2. System Architecture

The application is segmented into distinct tiers, as can be seen in Figure 19. The core layer is responsible for initiating the graphical functions of QML and establishing connections between the primary modules.
  • The DBHandler module is responsible for retrieving the required information from the database to be used in other modules.
  • The DataManager module stores the information of various swarm systems and comprises three submodules:
    MultiVehicleManager contains the data related to the various vehicles in the system.
    The MissionManager is tasked with storing the data pertaining to all ongoing missions at any given moment.
    The PayloadsManager is tasked with managing the various payloads within the system. The payloads transmit their information to the Payload Control Station, which thereafter relays specific information to the Ground Control Station using a ROS link.
  • CommsManager is tasked with initializing the ROS node to accept information from the various systems in the swarm.
  • CommandManager is tasked with the creation and management of various messages that are intended to be sent to vehicles, primarily missions or commands.
  • The ToolboxManager is tasked with integrating utility modules into the program.
  • The InfoRecoveryManager module is responsible for saving the current status of the system’s nodes upon request, as well as loading various missions from a file in preparation for sending them to the swarm.

3.3. Communication

The GCS interfaces with each node in the system through a radio frequency device, establishing an ad hoc network, as shown in Figure 20. The XTend RF module is utilized to transmit at a frequency of 900 MHz. This device interfaces with the processor via USB and is identified by the operating system as a serial device; consequently, interaction with it can be conducted using any serial communication library.
All nodes in the system are capable of both receiving and transmitting data to one another, as illustrated in Figure 20. Furthermore, the connection is not point-to-point; hence, the data transmitted by one node will be received by the entire swarm. Consequently, it is imperative to regulate the time allocated to each node to facilitate data transmission and guarantee accurate receipt without message overlap.
The time-division multiplexing (TDM) protocol is employed to manage this. This protocol segments the data from each source into time slots and transmits one slot from each source consecutively, in a cyclic fashion. If each node in the system must transmit data at a frequency of 5 Hz, a cycle occurs every 200 ms, during which all nodes must be able to send their information. The 200 ms are therefore apportioned equally across all nodes, with each node permitted to transmit solely during its designated interval, as shown in Figure 21, which represents a full one-second communication window. In this specific configuration involving four nodes, each node is allocated a dedicated 50 ms time slot within the recurring 200 ms cycle, ensuring that all nodes can transmit without collision during each cycle. To prevent interference between nodes, the timing of each system must be synchronized, which requires an external time reference, specifically GPS time.
If any node in the system ceases transmission, indicated by a lack of received packets over a specified duration, or if a new node is identified joining the system, the time slot for every node, both existing and new, is recalibrated. The time allocation for each node is tailored to utilize all available time while preventing message conflicts across various devices.
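As a concrete illustration of this slot arithmetic, the following is a minimal sketch (not the framework’s actual implementation) of how a node could decide, from the shared GPS time base, whether it currently owns the channel; the slot width is recomputed whenever the set of active nodes changes:

```cpp
// Illustrative TDM slot logic; cycle and slot values follow the
// 5 Hz / four-node example from the text.
#include <algorithm>
#include <cstdint>

struct TdmSchedule {
  std::int64_t cycle_ms;  // full TDM cycle, e.g., 200 ms for 5 Hz
  std::int64_t slot_ms;   // per-node window = cycle / number of active nodes
};

// Recomputed whenever a node joins or stops transmitting, so the cycle
// is always shared equally among the nodes currently alive.
TdmSchedule recalibrate(std::int64_t cycle_ms, int active_nodes) {
  return {cycle_ms, cycle_ms / std::max(1, active_nodes)};
}

// A node may transmit only inside its own window; gps_time_ms is the
// common GPS time reference, identical on every synchronized node.
bool mayTransmit(const TdmSchedule& s, int my_slot_index,
                 std::int64_t gps_time_ms) {
  const std::int64_t phase = gps_time_ms % s.cycle_ms;  // position in cycle
  return phase >= my_slot_index * s.slot_ms &&
         phase < (my_slot_index + 1) * s.slot_ms;
}
```

With a 200 ms cycle and four active nodes, recalibrate yields the 50 ms windows described above.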

3.4. Mission Command

The primary duty of the Ground Control Station (GCS), in addition to presenting information from various vehicles, is to engage with the swarm. The operator can execute several activities, including issuing particular commands for calibration or directing each vehicle to undertake distinct objectives in collaboration with the swarm.
This section describes all functionalities accessible from the Ground Control Station. It first explains how the information flow from the application to the platforms is managed, then the procedures required for system initialization, and finally the commands and tasks that can be transmitted to the swarm.

3.5. System Operation

Figure 22 presents the primary interface upon initiation of the GCS, enabling control of the swarm. The various nodes of the system are displayed on the map as they connect to the network. Each platform is represented by a distinct icon, and its location is automatically adjusted according to the GPS data received.
In order to start commanding the swarm, the operator must first access the Fleet Command interface through the control menu in Figure 23.
This initiates the interactive phase of mission assignment and swarm coordination.
After that, the status of the Base will appear, as shown in Figure 24.

3.5.1. Interface Description

The main interface layout is shown in Figure 25. The buttons in the upper-left menu form the fast actions menu, shown in Figure 26, and provide the following functions:
  • Fleet Commands: Toggles the visibility of the command menu on the right-hand side.
  • Emergency Landing: Commences the emergency landing procedure if the landing route has been specified; otherwise, it requests the entry of the route.
  • Abort Landing: Implements the previously described “Cancel Landing” instruction.
  • Parachute All: Engages the parachutes on all systems equipped with such devices.
The lateral button bar is the Fleet Command; these buttons display the commands for establishing the Base, missions, various actions, and flight zones.
In the Base menu, it is possible to modify the Base using the Set base option or obtain information regarding the existing base with the Info option. The Missions menu allows the user to manage several missions, which will be elaborated upon hereafter.
The Actions menu offers the following options:
  • Incorporate into mission;
  • Exchange missions;
  • Adjust calibration;
  • Dispatch more orders.
Ultimately, the final menu presents options to restrict the operational region of the vehicles.

3.5.2. Vehicle Selection

To dispatch missions and directives, it is essential to first identify the vehicles designated to execute the activity. Once the operation to be performed has been indicated, the vehicle selection options appear on the left.
The available options are as follows:
  • Select UAV: Enables the selection of each vehicle by sequentially clicking on the icons displayed on the map.
  • Select Fleet: Selects all vehicles within the same fleet by clicking on the icon of any one vehicle.
  • Select By ID: A menu will provide a list of all vehicles, enabling you to select your desired vehicle.
  • Select All: Selects all currently eligible vehicles.

3.5.3. Vehicle Information

The list of active vehicles, as shown in Figure 27, is located in the bottom left-hand corner.
Each button provides the following information:
  • Vehicle designation;
  • Packets received in the preceding second;
  • Visibility status: Clicking the eye icon allows the user to conceal or reveal the vehicle icon on the map;
  • Button to focus the map on the chosen vehicle;
  • Fleet color border.
Furthermore, clicking one of the buttons will display the vehicle’s primary flight window (designated as the PFD, Primary Flight Display) above this list, as shown in Figure 28.
This window displays the following information:
  • Heading, pitch, and roll of the vehicle.
  • The left column denotes the speed over the ground (Ground Speed) and the speed relative to the air (Airspeed).
  • The right column denotes the elevation above the ground.
  • The vehicle’s name and base information are provided below the speeds.
  • The remaining battery and GPS status are displayed below the height.
  • The various sensors are displayed to the left of the battery, indicating whether the calibration has been successful.
  • The Trajectory button toggles the visibility of the vehicle’s trajectory on the map within a temporary window.

4. Experiments and Simulation

This section evaluates the swarm system through simulated and real-world experiments, demonstrating key functionalities such as collision avoidance, autonomous decision-making, mission planning, and target sharing. These trials assessed the system’s adaptability in dynamic and complex environments, with a focus on both homogeneous and heterogeneous UAV configurations.
The experiments sought to evaluate and confirm the following swarm functionalities:
  • Collision Avoidance: Safeguarding UAV operations through the detection and prevention of probable collisions. The algorithms used altitude modifications and route reconfigurations to ensure safety, especially for fixed-wing UAVs, which lack hovering capability.
  • Autonomous Decision-Making: UAVs assigned mission-specific responsibilities (e.g., which unit should engage a certain target) depending on criteria such as proximity, energy levels, payload capacity, and mission priority (see the cost-function sketch after this list).
  • Advanced Planning Modules:
    Area Planner: Allocates exploration zones among UAVs for optimal coverage.
    Route Planner: Computes optimal paths to reduce energy expenditure and mission duration.
    Objective Planner: Assigns UAVs to one or several high-priority objectives.
    Perimeter Planner: Orchestrates UAVs to carry out surveillance and safeguard designated perimeters.
    Multi-Objective Planner: Coordinates UAVs to efficiently manage numerous simultaneous objectives.
  • Target Reallocation: Immediate and dynamic reassignment of mission targets among UAVs to accommodate changing environmental circumstances and mission priorities.
  • Self-Diagnostics: UAVs evaluate their operational conditions, encompassing battery levels, sensor performance, and overall mission preparedness. Diagnostic data is disseminated to the swarm to improve decision-making.
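As an illustration of the selection criteria listed above, the following is a minimal sketch of a cost function for target assignment. The weights, field names, and minimum-cost selection are assumptions made for this example, not the framework’s actual parameters:

```cpp
// Illustrative cost function for decentralized target assignment.
#include <cstddef>
#include <limits>
#include <vector>

struct UavState {
  double distance_to_target_m;  // proximity criterion
  double battery_fraction;      // energy level, 0..1
  bool   has_required_payload;  // payload capacity criterion
  bool   mission_ready;         // self-diagnostics outcome
};

// Lower cost = better candidate; unsuitable UAVs get an infinite cost.
double assignmentCost(const UavState& u, double mission_priority) {
  if (!u.has_required_payload || !u.mission_ready)
    return std::numeric_limits<double>::infinity();
  const double w_dist = 1.0;    // distance weighs in directly (metres)
  const double w_batt = 500.0;  // penalty for a depleted battery
  return w_dist * u.distance_to_target_m +
         w_batt * (1.0 - u.battery_fraction) / mission_priority;
}

// Every UAV can evaluate the same function locally; the swarm then agrees
// on the minimum-cost candidate, e.g., through its voting mechanism.
int bestCandidate(const std::vector<UavState>& fleet, double priority) {
  int best = -1;
  double best_cost = std::numeric_limits<double>::infinity();
  for (std::size_t i = 0; i < fleet.size(); ++i) {
    const double c = assignmentCost(fleet[i], priority);
    if (c < best_cost) { best_cost = c; best = static_cast<int>(i); }
  }
  return best;  // -1 if no UAV is eligible
}
```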

Experimental Configuration

Each experiment employed a different UAV configuration, demonstrating the swarming system’s adaptability in managing both homogeneous and heterogeneous arrangements. These arrangements presented distinct obstacles, requiring the swarm to utilize various tactics for coordination and execution. The results are presented in two parts: Experiment 1, carried out entirely in the simulator, and Experiment 2, a real flight test with three UAVs.
The implementation was developed using ROS Noetic (ROS 1), compiled with C++14, and interfaced with ArduPilot Plane 4.2 using MAVLink through MAVROS. Simulation was performed using SITL with Gazebo, and onboard modules were validated with real UAVs running compatible firmware.
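For context, the sketch below shows the kind of MAVROS service calls used to command an ArduPilot vehicle from a ROS 1 node. These are standard MAVROS services; the exact call sequence and namespaces used by the framework are assumptions for the example.

```cpp
// Minimal sketch: mode change, arming, and take-off via MAVROS services.
#include <ros/ros.h>
#include <mavros_msgs/CommandBool.h>
#include <mavros_msgs/CommandTOL.h>
#include <mavros_msgs/SetMode.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "swarm_vehicle_cmd");
  ros::NodeHandle nh;

  auto mode_cli = nh.serviceClient<mavros_msgs::SetMode>("mavros/set_mode");
  auto arm_cli  = nh.serviceClient<mavros_msgs::CommandBool>("mavros/cmd/arming");
  auto tol_cli  = nh.serviceClient<mavros_msgs::CommandTOL>("mavros/cmd/takeoff");

  mavros_msgs::SetMode mode;
  mode.request.custom_mode = "GUIDED";  // ArduPilot guided mode
  mode_cli.call(mode);

  mavros_msgs::CommandBool arm;
  arm.request.value = true;             // arm the motors
  arm_cli.call(arm);

  mavros_msgs::CommandTOL takeoff;
  takeoff.request.altitude = 50.0;      // metres above home (illustrative)
  tol_cli.call(takeoff);

  return 0;
}
```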
Experiment 1: Uniform Configuration with Six Fixed-Wing VTOL Unmanned Aerial Vehicles. The simulation campaign was conducted using a Software-in-the-Loop (SITL) setup with Gazebo and ArduPilot. Scenarios included the following:
  • Area surveillance;
  • Perimeter coverage;
  • Route following with obstacle zones.
  • Scenario: A comprehensive monitoring operation segmented into various zones using different mission planners.
  • Objectives:
    Assess collision avoidance and spatial planning functionalities in a homogeneous fixed-wing UAV swarm.
  • Execution:
    UAVs will execute four distinct missions: targeting, area coverage, perimeter surveillance, and route navigation, incorporating various autonomous replanning and re-tasking capabilities.
    The dynamic reassignment of missions guarantees comprehensive coverage while reducing overlaps.
    Collision avoidance systems regulate UAV proximity within restricted airspace.
  • Parameters:
    Swarm Composition:
    • Six fixed-wing Albatross 250 VTOL UAVs simulated in a ROS/Gazebo environment.
    • Aircraft modeled on a reference platform with the following characteristics:
      Wingspan: 2500 mm
      Length: 1260 mm
      Cruise speed: 26 m/s @ 12.5 kg
      Stall speed: 15.5 m/s @ 12.5 kg
      Maximum flight altitude: 4800 m
      Wind resistance:
      Fixed-wing mode: 10.8–13.8 m/s
      VTOL mode: 5.5–7.9 m/s
      Payload:
      Two-axis gyrostabilized 1.5 Mpx camera
      Gyrostabilization in roll and pitch
    Simulation Platform:
    • Executed on a Lenovo Legion 5 16IRX8 laptop:
      Intel Core i7-13700HX
      32 GB DDR5 RAM
      NVIDIA GeForce RTX 4070 (8 GB)
    • Real-time, high-fidelity simulation of multi-agent aerial swarm scenarios.
    Architecture Emulation:
    • Ground Control Station (GCS) and Platform Control System (PCS) implemented as independent ROS nodes.
    • Communication performed over a simulated mesh network with the following:
      Variable latency;
      Emulated packet loss;
      Signal degradation.
    • Designed to mimic realistic communication environments.
Figure 29 and Figure 30 delineate the four primary tasks that the swarm will undertake: target surveillance, area coverage, perimeter surveillance, and route navigation. Each mission integrates sophisticated functionalities for autonomous replanning and re-tasking, allowing UAVs to adjust to evolving circumstances and enhance the fulfilment of their operational goals.
In Figure 31, individual trajectories can be observed.
Figure 32 is presented in a lateral view to better visualize the altitude changes resulting from collision avoidance maneuvers. This perspective highlights how the UAVs dynamically adjust their height to maintain safe operations and prevent potential conflicts.
The following images present the swarm ground station operating under different mission scenarios.
1. Execution of an area mission with multiple UAVs: In the first phase, the initial planning and mission execution are shown using six UAVs. Here, the unmanned aerial vehicles work together to cover a designated area, shown in Figure 33, fulfilling exploration or surveillance objectives.
2. Replanning after detecting a target: Once a relevant target is identified, the system automatically performs dynamic replanning, as shown in Figure 34, assigning the necessary UAVs to investigate or engage with the target. This demonstrates the swarm’s ability to adapt in real time to unforeseen events.
3. Replanning to maintain area coverage: Finally, in Figure 35, the remaining UAVs that were not assigned to the target are strategically redistributed to continue covering the remaining area efficiently, ensuring the planned surveillance or exploration is maintained.
These graphs illustrate how the ground station system enables the autonomous control and coordination of multiple UAVs, optimizing mission execution in dynamic and evolving environments.
Figure 36 shows the swarm Payload Control Station (PCS) used for target detection and monitoring, showcasing several visual tracking processes of prospective targets. Furthermore, it demonstrates the targeting and georeferencing of these objectives, facilitating the dissemination of their information to the other components of the swarm.
This capability enables the system to independently assess and determine suitable actions concerning these targets, illustrating the amalgamation of real-time data processing and autonomous decision-making within the swarm system.
  • Results:
    Collision avoidance attained a 100% success rate, with no collisions during the test.
    Autonomous decision-making inside the swarm reduced the operator’s workload, allowing the human-operated aircraft to focus on overarching mission objectives.
    The UAV swarm achieved a high accuracy rate in task execution, efficiently handling both exploration and target engagement.
To validate the robustness of the collision avoidance mechanism implemented in the proposed swarming framework, a comprehensive analysis was conducted in this outdoor simulated environment with six fixed-wing VTOL UAVs. While, as explained before, the global objective of the experiment was to assess the performance of cooperative task execution in heterogeneous missions (targeting, area surveillance, perimeter monitoring, and navigation), this section focuses exclusively on the spatiotemporal separation achieved by the autonomous agents.
The UAV trajectories were extracted from simulation logs and processed to obtain synchronized positional data over time. For each UAV pair, the Euclidean distance in 3D space was computed at every time step. Three critical distance thresholds were defined to classify interaction severity:
  • Risk: distance between 10 and 40 m;
  • Danger: distance between 5 and 10 m;
  • Near-collision: distance below 5 m.
These thresholds can be adapted to reflect different safety margins based on aircraft dynamics or airspace regulations.
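A minimal sketch of this classification procedure, using the thresholds defined above, is shown below (field names are illustrative; the actual log-processing scripts are not part of the paper):

```cpp
// Pairwise separation analysis: Euclidean distance per time step,
// classified as risk (10–40 m), danger (5–10 m), or near-collision (<5 m).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pos { double x, y, z; };  // synchronized position sample, metres

struct PairStats {
  double min_dist = 1e18;        // closest separation seen, metres
  int risk = 0, danger = 0, near_collision = 0;
};

PairStats analyzePair(const std::vector<Pos>& a, const std::vector<Pos>& b) {
  PairStats s;
  const std::size_t n = std::min(a.size(), b.size());
  for (std::size_t t = 0; t < n; ++t) {
    const double dx = a[t].x - b[t].x;
    const double dy = a[t].y - b[t].y;
    const double dz = a[t].z - b[t].z;
    const double d = std::sqrt(dx * dx + dy * dy + dz * dz);
    s.min_dist = std::min(s.min_dist, d);
    if (d < 5.0)       ++s.near_collision;  // near-collision band
    else if (d < 10.0) ++s.danger;          // danger band
    else if (d < 40.0) ++s.risk;            // risk band
  }
  return s;
}
```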
The results, summarized in Table 2, reveal that all UAV pairs maintained safe separation throughout the mission. A total of 642 risk events and 4 danger events were identified across all UAV pairs, which represents approximately 0.46% of the total simulation pair-time. No near-collision events were observed during the entire execution.
The closest separation recorded was 8.09 m between UAV 4 and UAV 5 at t = 92.97 s, which falls within the danger threshold but did not escalate into a near-collision. Several other pairs came within the risk band, such as UAV 3–5 (10.21 m), UAV 1–2 (12.29 m), UAV 3–4 (13.86 m), and UAV 2–6 (24.56 m). The majority of UAV pairs maintained distances well above 40 m throughout the experiment.
The risk event distribution is visualized in Figure 37, which presents a time-aligned event map for all UAV pairs, color-coded by severity. The timeline shows that high-density interaction phases occurred mainly during dynamic task reassignment, yet the collision avoidance system effectively preserved safe separation. A complementary plot of continuous distance evolution between all UAV pairs is provided in Figure 38.
These findings confirm the effectiveness of the implemented decentralized collision avoidance logic under dynamically evolving mission scenarios. The absence of critical proximity events—even during high-density re-tasking and overlapping trajectory phases—highlights the system’s capacity to maintain safety without centralized coordination. This capability is particularly relevant for scalable swarm deployments in cluttered or adversarial environments.
Experiment 2: Real flight experiment with three VTOL UAVs, shown in Figure 39 and Figure 40. Three physical VTOL UAVs were deployed in a controlled open-air environment. The missions were executed autonomously and monitored via the GCS, which issued mission plans and received telemetry and payload data in real time. Target locations and mission types were varied during the experiment to trigger the re-tasking and replanning logic.
  • Scenario:
    Three VTOL UAVs performing different surveillance missions.
  • Objectives:
    Assess collision avoidance and spatial planning functionalities in a homogeneous fixed-wing UAV swarm.
    Evaluate the swarm’s capacity to respond to directives from the Ground Control Station and Payload Control Station while preserving autonomous decision-making for subordinate tasks.
  • Execution:
    UAVs will execute three distinct missions: targeting, area coverage, and route surveillance, incorporating various autonomous replanning and re-tasking capabilities.
    The dynamic reassignment of missions guarantees comprehensive coverage while reducing overlaps.
    Collision avoidance systems regulate UAV proximity within restricted airspace.
  • Parameters:
    Swarm Composition:
    • Three fixed-wing Albatross 250 VTOL UAVs:
      Wingspan: 2500 mm
      Length: 1260 mm
      Material: Carbon fiber
      Empty weight (no battery): 7.0 kg
      Maximum take-off weight: 15.0 kg (MTOW used in the experiment)
      Cruise speed: 26 m/s @ 12.5 kg
      Stall speed: 15.5 m/s @ 12.5 kg
      Maximum flight altitude: 4800 m
      Wind resistance:
      Fixed-wing mode: 10.8–13.8 m/s
      VTOL mode: 5.5–7.9 m/s
      Operating voltage: 12 S
      Temperature range: –20 °C to 45 °C
    GCS:
    • Executed on a Lenovo Legion 5 16IRX8 laptop:
      Intel Core i7-13700HX
      32 GB DDR5 RAM
      NVIDIA GeForce RTX 4070 (8 GB)
    • Communications:
      XTend 900 MHz, 1 W RF modules for mesh communications
      5 dBi omnidirectional antenna on each aircraft
      11 dBi omnidirectional antenna on the ground
Figure 41 and Figure 42 illustrate the three primary tasks that the swarm will undertake: target surveillance, area coverage, and route surveillance. Each mission integrates sophisticated functionalities for autonomous replanning and re-tasking, allowing UAVs to adjust to evolving circumstances and enhance the fulfilment of their operational objectives.
In Figure 43, individual trajectories can be observed. The graphs are displayed in a lateral perspective to enhance the visualization of altitude variations due to collision avoidance operations.
This viewpoint emphasizes how UAVs adapt their altitude to provide safe operations and avert potential collisions, as can be seen in Figure 44.
The following figures present the swarm Ground Control Station operating under different mission scenarios. These images demonstrate the operation of a swarm of three VTOL UAVs during a surveillance mission. In Figure 45, the three UAVs are shown carrying out an observation mission over a static target. They are strategically positioned at three equidistant points around the target to ensure comprehensive coverage.
This image depicts the swarm of three UAVs conducting a route mission. The routes are visually represented with painted paths calculated using the Dubins Path algorithm in Figure 46. These paths are optimized to ensure that all three UAVs arrive at the same designated coordinate simultaneously. The synchronization of their arrival highlights the precision and efficiency of the route planning for the swarm.
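One simple way to realize this simultaneous arrival, sketched below under the assumption that each UAV’s Dubins path length is already known, is to fix a common arrival time from the longest path and derive each UAV’s commanded speed, clamped to the flight envelope (stall and cruise speeds taken from the Albatross 250 data above):

```cpp
// Speed synchronization for simultaneous arrival; a sketch, not the
// framework's actual planner.
#include <algorithm>
#include <vector>

// Given each UAV's planned path length, choose the slowest feasible common
// arrival time (longest path flown at cruise speed) and derive the speed
// each UAV must hold so that all arrive together.
std::vector<double> syncArrivalSpeeds(const std::vector<double>& path_lengths_m,
                                      double v_stall = 15.5,    // m/s
                                      double v_cruise = 26.0) { // m/s
  double T = 0.0;  // common arrival time, seconds
  for (double L : path_lengths_m) T = std::max(T, L / v_cruise);
  if (path_lengths_m.empty() || T <= 0.0) return {};

  std::vector<double> speeds;
  for (double L : path_lengths_m) {
    const double v = L / T;  // speed that makes this UAV arrive at time T
    speeds.push_back(std::max(v_stall, std::min(v_cruise, v)));
  }
  return speeds;  // if the stall clamp is hit, the path must be lengthened instead
}
```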
Figure 47 shows the swarm system featuring the calculated routes onboard the UAVs for a surveillance route mission. The interface highlights the paths determined for each UAV, showcasing real-time optimization and coordination of the mission directly managed by the swarm system. The visualization emphasizes the precision and autonomy of the system in executing surveillance tasks efficiently.
Figure 48 illustrates a swarm of three UAVs that have performed a re-tasking operation. One of the UAVs has been redirected to a fixed target, while the other two UAVs continue with their route mission. The remaining UAVs in the route mission have replanned their paths onboard to adapt to the updated mission parameters, ensuring seamless continuity and coordination within the swarm. This demonstrates the system’s adaptability and real-time decision-making capabilities.
Figure 49 depicts a formation of three UAVs executing an aerial surveillance operation. Each UAV has independently determined its route onboard, guaranteeing optimal monitoring area coverage. The swarm exhibits sophisticated coordination, with all UAVs independently communicating and collaborating to complete the mission effortlessly. This underscores their capacity to function as an intelligent and self-organizing system.
  • Results:
    Autonomous decision-making within the swarm diminished the operator’s workload, enabling the human-operated aircraft to concentrate on high-level mission goals.
    The UAV swarm attained a high accuracy rate in task fulfilment, effectively managing both exploration and target engagement.

Key Findings and Principal Conclusions

1. Collision Avoidance:
  • Fixed-wing UAVs effectively adjusted altitudes to prevent collisions.
  • Multirotors demonstrated precise avoidance with dynamic hovering and shifting.
2. Decision-Making and Target Reallocation:
  • Algorithms reduced operator workload by enabling task reassignment based on proximity, capabilities, and readiness.
  • Improved mission continuity during unforeseen events, such as UAV failures.
3. Advanced Planning Capabilities:
  • Area and multi-objective planners optimized task distribution and minimized redundancy.
  • The system transitioned seamlessly between missions, tasks, and re-tasking.
4. Energy and Resource Management:
  • Energy-efficient strategies prioritized tasks based on battery levels.
  • Reduced idle times enhanced overall swarm productivity.
In this second experiment, the main analysis again concerned the validation of decentralized collision avoidance mechanisms within multi-agent cooperative operations. However, this trial specifically highlights the system’s behavior during the simultaneous take-off of multiple VTOL aircraft—a particularly critical phase in swarm operations involving vertical launch profiles.
Three UAVs were commanded to take off at the same time from nearby ground locations. The recorded positional data were processed and analyzed using the same methodology described previously: Euclidean distances between each UAV pair were computed at every time step, and events were classified into risk (10–40 m), danger (5–10 m), and near-collision (<5 m) zones.
The analysis, summarized in Table 3, revealed significantly higher proximity violations compared to the previous simulated mission. A total of 229 risk events, 616 danger events, and 210 near-collision events were recorded, accounting for 0.64% of the total pair-time. The most critical interaction occurred between UAV 1 and UAV 2, which reached a minimum separation of 2.04 m at t = 35.4 s, indicating an actual physical overlap or very close proximity during vertical ascent.
As illustrated in Figure 50, the vast majority of proximity violations occurred within the first few seconds of the mission—specifically during the collective take-off maneuver. The temporal clustering, in Figure 51, of risk and danger events suggests that the current collision avoidance strategy is not sufficiently reactive during high-density vertical operations, particularly when multiple aircraft initiate flight from closely spaced positions.
These findings underscore a key operational lesson: simultaneous VTOL launches from clustered locations introduce unnecessary risk and should be avoided in practice. Instead, staggered or spatially separated take-off procedures are recommended to mitigate early-phase collision hazards. While the swarm logic proved effective in later phases of flight, this result highlights the need to integrate specialized take-off phase safety handling into the swarm mission planner.
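A minimal sketch of such a staggered procedure is given below; the separation altitude and climb rate are illustrative values, not tested parameters:

```cpp
// Staggered take-off delays: each UAV waits until the previous aircraft
// should have climbed through a vertical safety margin.
#include <vector>

std::vector<double> staggeredDelays(int num_uavs,
                                    double sep_alt_m = 15.0,   // clearance altitude
                                    double climb_rate = 2.5) { // VTOL climb, m/s
  const double step_s = sep_alt_m / climb_rate;  // time to clear the margin
  std::vector<double> delays;
  for (int i = 0; i < num_uavs; ++i) delays.push_back(i * step_s);
  return delays;  // e.g., {0, 6, 12} seconds for three UAVs
}
```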

5. Conclusions

This study presents a deterministic logic-based framework for controlling a heterogeneous swarm of Unmanned Aerial Vehicles (UAVs), integrated with a manned–unmanned teaming (MUM-T) module within the Robot Operating System (ROS) environment. The framework effectively coordinates and executes complex missions by leveraging UAV capabilities and enabling seamless cooperation with manned aircraft.
Through extensive simulations and real-world experiments, the system demonstrated robustness and adaptability under dynamic and challenging conditions. The use of deterministic logic resulted in accurate and predictable control, enhancing both the safety and operational efficiency of the swarm. Integration with ROS facilitated scalability and compatibility, enabling the inclusion of new UAVs and continuous framework improvements.
The experiments validated the system’s advanced capabilities, including the following:
  • Sophisticated mission planners for area surveillance, perimeter security, and multi-objective tasks.
  • Collision avoidance algorithms that ensured a 100% success rate in preventing UAV collisions, even in dense airspace.
  • Real-time decision-making for dynamic mission reallocation, reducing operator workload and optimizing task execution.
These results emphasize the potential of this framework for practical applications such as surveillance, disaster response, and reconnaissance. The system’s adaptability and collaborative capabilities provide a solid foundation for further research and deployment in civilian, military, and search-and-rescue missions.
Future efforts will focus on expanding the system’s capabilities, including the following:
1. Scalability:
  • Extending the swarm to manage a larger number of UAVs while maintaining efficiency and coordination.
2. Advanced Communication Protocols:
  • Developing robust protocols to improve resilience in contested or challenging environments.
3. Machine Learning Integration:
  • Implementing predictive analytics and dynamic mission planning using machine learning models to enhance decision-making.
4. Enhanced Manned–Unmanned Collaboration:
  • Refining collaboration techniques to broaden utility across diverse mission scenarios, including military and civilian operations.
As final remarks, this study introduces a practical and innovative framework for managing heterogeneous UAV swarms, bridging the gap between human operators and autonomous systems. By combining deterministic logic with the scalability and flexibility of ROS, this system is poised to meet the demands of modern autonomous operations, laying a foundation for next-generation collaborative aerial robotics.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/aerospace12080702/s1. File S1: Detailed Algorithms and Parameterization for Collision Avoidance and Missions Planning in a Multi-Domain Swarm Framework.

Author Contributions

Conceptualization, J.M.; Development, J.M.; Test, J.M.; Writing, J.M. and S.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Video and raw data supporting the tests and conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sahin, E. Swarm Robotics: From Sources of Inspiration to Domains of Application. In International Workshop on Swarm Robotics; Springer: Berlin/Heidelberg, Germany, 2005.
  2. Parunak, H.V. Making Swarming Happen. In Proceedings of the Conference on Swarming and C4ISR, Tysons Corner, VA, USA, 3 January 2003.
  3. Cao, Y.U.; Fukunaga, A.S.; Kahng, A.B.; Meng, F. Cooperative Mobile Robotics: Antecedents and Directions. In Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, 5–9 August 1995; Volume 1, pp. 226–234.
  4. Beni, G. Swarm Intelligence in Cellular Robotic Systems. In Robots and Biological Systems: Towards a New Bionics; NATO ASI Series; Springer: Berlin/Heidelberg, Germany, 1993; Volume 102, pp. 703–712.
  5. Beni, G. From Swarm Intelligence to Swarm Robotics. In Swarm Robotics; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3342.
  6. Dekker, A. A Taxonomy of Network Centric Warfare Architectures. Syst. Eng./Test Eval. 2007, 3, 103–104.
  7. Chung, T.K. 50 vs. 50 by 2015: Swarm vs. Swarm UAV Live-Fly Competition at the Naval Postgraduate School. In Proceedings of the AUVSI, Atlanta, GA, USA, 4–7 May 2015.
  8. Frew, E. Airborne Communication Networks for Small Unmanned Aircraft. Proc. IEEE 2008, 96, 2008–2027.
  9. Weiskopf, F.T. Control of Cooperative, Autonomous Unmanned Aerial Vehicles. In Proceedings of the First AIAA Technical Conference and Workshop on UAV, Portsmouth, VA, USA, 20–22 May 2002.
  10. Sánchez-López, J.L.; Molina, M.; Bavle, H.; Sampedro, C.; Suárez-Fernández, R.A.; Campoy, P. A Multi-Layered Component-Based Approach for the Development of Aerial Robotic Systems: The Aerostack Framework. J. Intell. Robot. Syst. 2017, 88, 686–709.
  11. Fernández-Cortizas, M.; Molina, M.; Arias-Pérez, P.; Pérez-Segui, R.; Pérez-Saura, D.; Campoy, P. Aerostack2: A Software Framework for Developing Multi-Robot Aerial Systems. arXiv 2023, arXiv:2303.18237.
  12. Villemure, É.; Arsenault, P.; Lessard, G.; Constantin, T.; Dubé, H.; Gaulin, L.-D.; Groleau, X.; Laperrière, S.; Quesnel, C.; Ferland, F. SwarmUS: An Open Hardware and Software On-Board Platform for Swarm Robotics Development. arXiv 2022, arXiv:2203.02643.
  13. Starks, M.; Gupta, A.; Oruganti Venkata, S.S.; Parasuraman, R. HeRoSwarm: Fully-Capable Miniature Swarm Robot Hardware Design with Open-Source ROS Support. arXiv 2022, arXiv:2211.0301.
  14. Loi, A.; Macabre, L.; Fersula, J.; Amini, K.; Cazenille, L.; Caura, F.; Guerre, A.; Gourichon, S.; Dauchot, O.; Bredeche, N. Pobogot: An Open-Hardware Open-Source Low Cost Robot for Swarm Robotics. arXiv 2025, arXiv:2504.08686.
  15. Kedia, P.; Rao, M. GenGrid: A Generalised Distributed Experimental Environmental Grid for Swarm Robotics. arXiv 2025, arXiv:2504.20071.
  16. Community. UAVros: PX4 Multi-Rotor UAV and UGV Swarm Simulation Kit. ROS Discourse. 2024. Available online: https://discuss.px4.io/t/announcing-uavros-a-ros-kit-for-px4-multi-rotor-uav-and-ugv-swarm-simulation/42132 (accessed on 4 May 2025).
  17. Tastier, A. PX4 Swarm Controller. GitHub Repository. 2023. Available online: https://github.com/artastier/PX4_Swarm_Controller (accessed on 4 May 2025).
  18. Kaiser, T.K.; Begemann, M.J.; Plattenteich, T.; Schilling, L.; Schildbach, G.; Hamann, H. ROS2swarm: A ROS 2 Package for Swarm Robot Behaviors. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022.
  19. Zimmermann, J.; Rinner, B.; Schindler, T. ROS2swarm: A Modular Swarm Behavior Framework for ROS 2. arXiv 2024, arXiv:2405.02438.
  20. Benavidez, P.; Jamshidi, M. Multi-Domain Robotic Swarm Communication System. In Proceedings of the IEEE International Systems Conference (SoSE), San Antonio, TX, USA, 16–18 April 2008.
  21. Nguyen, L. Swarm Intelligence-Based Multi-Robotics: A Comprehensive Review. AppliedMath 2024, 4, 1192–1210.
  22. Guihen, D. The Barriers and Opportunities of Effective Underwater Autonomous Swarms. In Thinking Swarms; Springer: Berlin/Heidelberg, Germany, 2025.
  23. St-Onge, D.; Pomerleau, F.; Beltrame, G. ROS and Buzz: Consensus-Based Behaviors for Heterogeneous Teams. arXiv 2017, arXiv:1710.08843.
  24. Sardinha, H.; Dragone, M.; Vargas, P.A. Closing the Gap in Swarm Robotics Simulations: An Extended Ardupilot/Gazebo Plugin. arXiv 2018, arXiv:1811.06948.
  25. Adoni, W.; Fareedh, J.S.; Lorenz, S.; Gloaguen, R.; Madriz, Y. Intelligent Swarm: Concept, Design and Validation of Self-Organized UAVs Based on Leader–Followers Paradigm. Drones 2024, 8, 575.
  26. Mao, P.; Lv, S.; Min, C.; Shen, Z.; Quan, Q. An Efficient Real-Time Planning Method for Swarm Robotics Based on an Optimal Virtual Tube. arXiv 2025, arXiv:2505.01380.
  27. Darush, Z.; Martynov, M.; Fedoseev, A.; Shcherbak, A.; Tsetserukou, D. SwarmGear: Heterogeneous Swarm of Drones with Reconfigurable Leader Drone. arXiv 2023, arXiv:2304.02956.
  28. Gupta, A.; Baza, A.; Dorzhieva, E.; Alper, M. SwarmHive: Heterogeneous Swarm of Drones for Robust Autonomous Landing on Moving Robot. arXiv 2022, arXiv:2206.08856.
Figure 1. General architecture.
Figure 2. Software blocks.
Figure 3. Types of UxV.
Figure 4. Command and control blocks.
Figure 5. Simplified representation of the finite state machine governing mission execution logic.
Figure 6. State plan switch case flow chart.
Figure 7. State go mission switch.
Figure 8. State emergency switch case.
Figure 9. Mission manager loop.
Figure 10. Mission planning case.
Figure 11. Voting state.
Figure 12. Pre-mission state.
Figure 13. Mission state.
Figure 14. Replan state.
Figure 15. Sending mission state.
Figure 16. Waiting mission state.
Figure 17. Mission inheritance.
Figure 18. Voting flowchart.
Figure 19. GCS architecture.
Figure 20. General C2 communication.
Figure 21. Node time slot calculation scheme.
Figure 22. GCS basic interface.
Figure 23. Fleet command button.
Figure 24. Base and GCS status bar.
Figure 25. Ready for mission.
Figure 26. Fast actions menu.
Figure 27. List of active vehicles.
Figure 28. Vehicle PFD.
Figure 29. Six fixed-wing swarm trajectories.
Figure 30. Top view of six fixed-wing swarm trajectories.
Figure 31. Individual UAV trajectories.
Figure 32. Vertical trajectories of the six UAVs.
Figure 33. Area surveillance.
Figure 34. Replanning and re-tasking.
Figure 35. Add action from target to area coverage mission.
Figure 36. Payload Control Station with four video streams and autodetection/targeting.
Figure 37. Six-VTOL simulation risk events.
Figure 38. Six-VTOL experiment separation pairs.
Figure 39. Three VTOLs ready to take off.
Figure 40. Top view of the three VTOLs.
Figure 41. 3D trajectories during the flight test.
Figure 42. Top view of 3D trajectories during the flight test.
Figure 43. Individual UAV trajectories.
Figure 44. Vertical trajectories.
Figure 45. Target surveillance of three fixed-wing UAVs.
Figure 46. Dubins path for route surveillance.
Figure 47. Route surveillance.
Figure 48. Re-tasking to target and route replanning.
Figure 49. Area coverage.
Figure 50. Three-VTOL simulation risk events.
Figure 51. Three-VTOL experiment separation pairs.
Table 1. Swarm framework comparison.
Framework | ROS Version | UAV/UGV Support | Distributed Control | Real Hardware Integration | Modularity | Payload Management Support
UAVros | ROS1/ROS2 | Yes | Partial | No | Medium | Not included
PX4 Swarm Controller | ROS2 | Yes | Yes | No | High | Not included
Aerostack2 | ROS2 | Yes | Yes | Yes | High | Basic (camera/GPS only)
ROS2swarm | ROS2 | Yes | Yes | Partial (sim-only) | High | Not included
SwarmUS | Custom/ROS1 | Partial | Yes | Partial | Medium | No payload-level integration
HeRoSwarm | ROS1 | No (mini swarm) | Yes | Partial (lab-scale robots) | Medium | No
Pobogot | Custom | No (lab-scale) | Yes | No | Medium | No
This work | ROS1 | Yes | Yes | Yes | High | Yes (modular, advanced)
Table 2. Six-VTOL experiment events.
Pair | Min Distance (m) | Risk Events | Danger Events | Near-Collision Events
UAV 1–2 | 12.29 | 144 | 0 | 0
UAV 1–3 | 49.30 | 0 | 0 | 0
UAV 1–4 | 91.27 | 0 | 0 | 0
UAV 1–5 | 89.48 | 0 | 0 | 0
UAV 1–6 | 36.61 | 8 | 0 | 0
UAV 2–3 | 42.43 | 0 | 0 | 0
UAV 2–4 | 42.14 | 0 | 0 | 0
UAV 2–5 | 67.39 | 0 | 0 | 0
UAV 2–6 | 24.56 | 66 | 0 | 0
UAV 3–4 | 13.86 | 54 | 0 | 0
UAV 3–5 | 10.21 | 122 | 0 | 0
UAV 3–6 | 119.50 | 0 | 0 | 0
UAV 4–5 | 8.09 | 248 | 4 | 0
UAV 4–6 | 56.44 | 0 | 0 | 0
UAV 5–6 | 74.55 | 0 | 0 | 0
Table 3. Three-VTOL experiment events.
Pair | Min Distance (m) | Risk Events | Danger Events | Near-Collision Events
UAV 1–2 | 2.04 | 94 | 14 | 210
UAV 1–3 | 5.06 | 12 | 398 | 0
UAV 2–3 | 5.10 | 123 | 204 | 0
