Article

A Multi-Drone System Proof of Concept for Forestry Applications

by André G. Araújo 1,2,*, Carlos A. P. Pizzino 1, Micael S. Couceiro 1,2 and Rui P. Rocha 2

1 Ingeniarius, Ltd., 4445 Alfena, Portugal
2 Institute of Systems and Robotics, Department of Electrical and Computer Engineering, University of Coimbra, 3030 Coimbra, Portugal
* Author to whom correspondence should be addressed.
Drones 2025, 9(2), 80; https://doi.org/10.3390/drones9020080
Submission received: 19 December 2024 / Revised: 9 January 2025 / Accepted: 18 January 2025 / Published: 21 January 2025

Abstract

This study presents a multi-drone proof of concept for efficient forest mapping and autonomous operation, framed within the context of the OPENSWARM EU Project. The approach leverages state-of-the-art open-source simultaneous localisation and mapping (SLAM) frameworks, like LiDAR (Light Detection And Ranging) Inertial Odometry via Smoothing and Mapping (LIO-SAM), and Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm (DCL-SLAM), seamlessly integrated within the MRS UAV System and Swarm Formation packages. This integration is achieved through a series of procedures compliant with the Robot Operating System (ROS) middleware, including an auto-tuning particle swarm optimisation method for enhanced flight control and stabilisation, which is crucial for autonomous operation in challenging environments. Field experiments conducted in a forest with multiple drones demonstrate the system’s ability to navigate complex terrains as a coordinated swarm, accurately and collaboratively mapping forest areas. Results highlight the potential of this proof of concept, contributing to the development of scalable autonomous solutions for forestry management. The findings emphasise the significance of integrating multiple open-source technologies to advance sustainable forestry practices using swarms of drones.

1. Introduction

The utilisation of Unmanned Aerial Vehicles (UAVs), commonly denoted as drones, has increasingly garnered attention for their potential to revolutionise challenging applications, such as forestry and agriculture [1,2]. These aerial robotic systems promise to enhance efficiency, reduce labour costs, and provide high-resolution data for better decision-making. Forest management with drones, in particular, represents a significant step forward in sustainable environmental practices [3,4]. Traditional methods of forest management often involve extensive manual labour and are limited by the physical capabilities of human operators. The advent of these technologies offers a transformative approach by enabling detailed and rapid data collection across vast forested areas. This technology’s ability to operate autonomously and collaboratively makes it particularly suitable for tasks such as mapping, monitoring, and managing forest resources [5].
The current state of the art of drone deployments includes various frameworks for simultaneous localisation and mapping (SLAM), path planning, navigation, and decision-making, which are crucial for autonomous navigation and data collection [6]. Open-source frameworks developed by the Robot Operating System (ROS) community, like LIO-SAM (https://github.com/TixiaoShan/LIO-SAM, accessed on 20 January 2025), DCL-SLAM (https://github.com/PengYu-Team/DCL-SLAM, accessed on 20 January 2025), MRS UAV System (https://github.com/ctu-mrs/mrs_uav_system, accessed on 20 January 2025) and Swarm Formation (https://github.com/ZJU-FAST-Lab/Swarm-Formation, accessed on 20 January 2025) have shown promise in providing robust solutions for these tasks, be it with single or multiple drones. However, integrating all these frameworks into a cohesive swarm of drones that can function efficiently in real-world scenarios remains a challenge. This integration requires robust control algorithms and procedures for effective collaboration among multiple drones deployed in forestry environments and in other target applications of field robots. This paper addresses precisely this integration challenge, exploring a multi-drone system proof of concept for efficient forest mapping and autonomous operation within the context of the OPENSWARM EU Project.

1.1. The OPENSWARM EU Project

The OPENSWARM EU Project (https://openswarm.eu/, accessed on 20 January 2025) aims to develop a comprehensive code base and test it across multiple proof of concept (PoC) scenarios, enabling swarms of collaborative smart nodes with wide-ranging benefits for the environment, industries, and society. The project’s focus is on creating a dependable, secure, and efficient framework, covering the orchestration of collaborative smart nodes, collaborative energy-aware artificial intelligence (AI), and energy-aware swarm programming.
One of OPENSWARM’s PoCs is dedicated to forestry, specifically targeting the mapping and management of forest resources using swarms of drones. This PoC aims to assess most of OPENSWARM’s contributions, including collaborative AI task distribution and execution, ensuring that drones can operate efficiently and persistently over extended periods in the forest. The integration of these components into a unified architecture is critical for validating the effectiveness of the OPENSWARM approach in real-world forestry scenarios. Nonetheless, this should be preceded by the development and implementation of a foundation of capabilities, including general task-centric approaches and robotic abilities, such as localisation, path planning, navigation, and mapping, as well as specific swarm behaviours, such as swarm formation.

1.2. Research Question and Objectives

This study aims to contribute to the growing body of knowledge on the use of swarms of drones for sustainable forest management and demonstrate the practical viability of the OPENSWARM project in achieving its ambitious goals. In line with this, the primary research question addressed in this study is: how can a multi-drone system be effectively utilised for forest mapping and autonomous operation?
To answer this, we focus on the following objectives:
  • Integrate and enhance state-of-the-art open-source frameworks for:
    Collaborative localisation and mapping (DCL-LIO-SAM, as proposed in [7]);
    Control, planning, and navigation (MRS UAV System [6]);
    Formation Flight (Swarm Formation [8]);
  • Develop and apply an auto-tuning particle swarm optimisation (PSO) algorithm to enhance flight control and stabilisation;
  • Conduct field experiments in real-world forest environments to evaluate the system’s performance under operational conditions;
  • Assess the potential scalability and adaptability of the system for broader forestry management applications, namely by benefiting from the OPENSWARM code base.

1.3. Organisation of the Article

This article is structured as follows: Section 2 provides an overview of the related work and current state of research in drone-based forestry applications. Section 3 details the methodology used for integrating the various capabilities of the multi-drone system. Section 4 presents the experimental setup and the results obtained from field trials. Section 5 discusses the implications of the findings, potential improvements through the integration of the OPENSWARM code base, and future research directions. Finally, Section 6 concludes the paper with a summary of the main contributions and their significance for the field of forestry management.

2. Related Work

The application of drones in various fields has seen significant advancements in recent years [9]. This section reviews the existing literature and current state of research in drone-based applications, focusing particularly on forestry management, with an emphasis on forestry inventory (e.g., cataloguing plants and fruits, assessing tree and vegetation density in a certain location, among other tasks), which aligns with the objectives of the OPENSWARM PoC addressed in this study. The related work is categorised into four main areas: the use of drones in forestry, single and multi-drone SLAM, autonomous navigation challenges and solutions, and swarm robotics with drones. These categories provide a comprehensive overview of the technological advancements, challenges, and future directions in the domain of drone-based forestry applications.

2.1. Drone-Based Applications in Forestry

In general, robots have emerged as valuable tools in forestry applications, with ground robots taking the lead by offering distinct advantages in navigating and performing tasks on the forest floor [4]. Ground robots can interact directly with terrain and vegetation, making them ideal for tasks, such as soil analysis, undergrowth clearing, and tree planting. Generally equipped with a large range of sensors, including LiDARs, cameras, global navigation satellite systems (GNSSs), and sometimes radar technologies, ground robots excel in precision tasks within the challenging and often uneven forest environment. However, while ground robots are effective for specific forestry applications, drones present several advantages that make them particularly suited for forestry tasks.
The utilisation of drones in forestry has become increasingly prevalent due to their ability to cover wide areas and collect high-resolution data rapidly and cost-effectively. Drones fill critical gaps left by other data collection methods, such as manned aircraft or satellite remote sensing, which can be limited in resolution and flexibility [10]. One primary application of drones in forestry is mapping, where they provide detailed data of forest structure, composition, and dynamics [11]. For instance, drones equipped with cameras and LiDAR sensors can create accurate canopy height models and assess forest biomass and volume [12].
Precision forestry, which involves detailed monitoring and management of forest resources, greatly benefits from drone technology. Parameters, such as canopy cover, tree count, volume estimation, and health, can be quickly and accurately determined using drone-based remote sensing [13]. Studies have shown that drones can significantly reduce the time and cost associated with data collection while providing high-quality data. For example, Mokroš et al. [14] demonstrated that drones could estimate the volume of wood chip piles with a time efficiency up to twenty times faster than traditional methods.
Another critical application is forest fire monitoring and management. Drones equipped with thermal cameras and AI algorithms can detect early signs of forest fires, enabling rapid response and potentially mitigating damage [15]. This approach has been shown to be effective in various scenarios, from continuous surveillance to targeted inspections after initial alerts from high-altitude drones.
In addition to these applications, drones are also used for monitoring forest health and biodiversity. For example, by using a sensor system based on multispectral imaging, drones can map canopy gaps and detect pest infestations, which are crucial for understanding and managing forest ecosystems [16]. These capabilities allow for the detection of small-scale disturbances that might be missed by satellite imagery, thus providing more comprehensive forest management data.
While the advantages of drones in forestry are evident, there are also limitations. Challenges include the need for skilled operators when full autonomous navigation cannot be guaranteed, regulatory restrictions, and the high initial cost of equipment. Additionally, the processing and analysis of large volumes of data generated by drones require sophisticated software and expertise [13]. Despite these challenges, the ongoing advancements in drone technology, in particular in terms of autonomous navigation and data processing, as well as the deployment of multi-robot solutions within real-world applications, are likely to enhance their effectiveness and accessibility in forestry applications [4].

2.2. Single and Multi-Drone SLAM

Recent advancements in LiDAR-inertial odometry (LIO) have significantly improved localisation efficiency and robustness, laying the groundwork for precise robotic navigation in complex environments. For example, Direct LiDAR odometry (DLO) [17] introduced a fast and computationally efficient method for odometry using dense point clouds directly. Its successor, Direct LiDAR-inertial odometry (DLIO) [18], enhanced this by incorporating continuous-time motion correction, enabling efficient odometry for applications requiring rapid adaptability. Faster-LIO [19] optimised LIO by leveraging parallel sparse incremental voxels for high-speed processing, making it lightweight. Similarly, Point-LIO [20] emphasised high-bandwidth odometry with a robust framework designed to handle challenging environments. More recently, iG-LIO [21] integrated generalised iterative closest point (GICP) with tightly coupled LIO, offering incremental updates that improve both precision and computational efficiency. Collectively, these advancements highlight ongoing efforts to balance accuracy, computational speed, and robustness in LIO methods.
However, localisation alone is not sufficient for many applications, including forestry, where understanding and mapping the surrounding environment are equally critical [22]. This is where simultaneous localisation and mapping (SLAM) technology comes into play, enabling robots to determine their position while creating detailed maps of their surroundings. This mapping capability is particularly valuable in unknown, dynamic, and unstructured environments like forests, where precise navigation and spatial understanding are essential for effective operation.
Single-drone SLAM has been extensively studied and developed over the past decade. Popular SLAM approaches, such as ORB-SLAM [23], LSD-SLAM [24], Light-LOAM [25], SR-LIVO [26], and LIO-SAM [27], have demonstrated robust performance in structured indoor and urban environments. However, forests present unique challenges for SLAM due to their complex and dynamic nature. Factors such as dense foliage, varying lighting conditions, and the presence of moving elements, like leaves and branches, can significantly impact the performance of SLAM systems, especially vision-based approaches [28].
Recent advances have aimed to address these challenges by optimising SLAM algorithms for forest environments. For instance, visual appearance analysis techniques have been employed to improve the robustness of monocular SLAM systems in forests by identifying key attributes, such as lighting changes and scene motion [28]. Furthermore, integrating multiple sensor modalities, such as visual, LiDAR, and inertial measurement units (IMUs), has shown promise in enhancing SLAM performance under canopy cover, where GNSS signals are unreliable [29].
Another promising approach to improving SLAM performance under challenging scenarios is the use of collaborative SLAM (CSLAM), which integrates contributions from multiple drones [30]. Multi-drone SLAM leverages the collective capabilities of multiple drones to enhance mapping accuracy, coverage, and robustness. This approach significantly reduces the time required to map large areas and improves the reliability of the generated maps through collaborative data fusion [31].
Multi-drone SLAM systems, such as DCL-SLAM, enable collaborative localisation and mapping by sharing information between drones in real time [7]. In forest environments, these systems often utilise decentralised algorithms, allowing each drone to independently estimate its pose while periodically exchanging data with other drones to correct drift and improve localisation accuracy [32]. The integration of SLAM with swarm robotics principles further enhances the scalability and adaptability of these systems, making them well-suited for dynamic and unstructured environments like forests [33].
Key studies in the field have demonstrated the feasibility and benefits of multi-drone SLAM in various scenarios. For example, the use of multi-UAV systems for forest monitoring has shown improved performance in terms of mapping accuracy and operational efficiency compared to single-drone systems [34]. Additionally, the development of robust communication protocols and optimisation algorithms has been crucial in enabling effective collaboration among multiple drones, ensuring seamless data exchange and coordinated operation [32].
Overall, while single-drone SLAM provides a foundational capability for autonomous navigation, multi-drone SLAM represents a significant advancement in the field, offering enhanced performance and scalability for complex applications such as forest mapping and management. Nonetheless, achieving autonomous drone navigation in forests does not solely depend on the SLAM approach; it also requires a reliable flight controller and navigation system.

2.3. Autonomous Navigation with Forestry Drones

The integration of autonomous navigation systems in forestry drones poses several unique challenges due to the complex and dynamic nature of forest environments. These challenges include dense vegetation, variable lighting conditions, uneven terrain, and the need for high precision in navigation to avoid obstacles, such as trees and branches [35]. Traditional GNSS-based navigation systems often struggle in such settings due to weak signals and the inability to account for dynamic changes in the environment [36].
As a follow-up to the previous section, the incorporation of SLAM techniques has significantly enhanced the ability of drones to navigate autonomously in forests. As previously stated, SLAM enables drones to build and update maps of their environment while keeping track of their location, crucial for navigation in GNSS-denied environments. Studies, such as the one by Baca et al. [6], highlight the importance of combining SLAM with advanced sensor fusion techniques to improve the accuracy and reliability of autonomous navigation systems in forestry applications.
Beyond SLAM, recent advancements in AI have enabled more robust autonomous navigation systems. As with efficient SLAM in forests, these systems typically integrate multiple sensors for navigation in such scenarios. For instance, the work by Zhilenkov et al. [37] presents an intelligent autonomous navigation system for drones that operates effectively in randomly changing environmental conditions. The system combines GNSS and computer vision for localisation, route planning, and obstacle avoidance, leveraging neural networks to enhance decision-making processes and increase autonomy in navigation [37].
Path planning is another critical aspect of autonomous navigation in forestry drones. The adaptive vortex search algorithm, as proposed by Wang et al. [38], demonstrates effective path planning for forest fire rescue drones. This algorithm optimises flight paths by considering multiple objectives, such as path length, energy consumption, and terrain adaptation, ensuring the drones can navigate complex forest environments efficiently. The algorithm’s ability to handle 3D path planning makes it particularly suitable for forest applications, where terrain varies significantly [38].
As a subsequent process to perception and path planning, autonomous navigation relies on the adoption of robust flight controllers as well. These flight controllers play a critical role in ensuring the stability and precision of drones, especially in the complex and dynamic environments of forests. Recent advancements have focused on developing adaptive and intelligent flight control algorithms that can handle the challenges of forest navigation. For instance, the implementation of robust control systems that leverage real-time data from multiple sensors has shown significant improvements in flight stability and obstacle avoidance. The work by Pestana et al. [39], for example, presents a general-purpose configurable controller for both indoor and outdoor GNSS-denied navigation for drones. This system uses a down-facing camera for optical flow-based odometry, integrated within a robust controller architecture designed to handle various flight conditions.
In conclusion, the integration of SLAM, advanced AI techniques, robust path-planning algorithms, and reliable flight controllers is essential for developing effective autonomous navigation systems for forestry drones. These advancements address the unique challenges posed by forest environments, enabling drones to navigate accurately, avoid obstacles, and perform a wide range of forestry management tasks autonomously. Yet, just as SLAM approaches can be improved by combining contributions from multiple drones, so can autonomous navigation and operation in forests.

2.4. Swarm Robotics with Drones

The concept of swarm robotics [40], inspired by the collective behaviour of social insects, has emerged as a promising approach for enhancing the capabilities of autonomous robots in various applications, including forestry [41]. In this context, swarm robotics involves the coordination of multiple robots to perform complex tasks through simple local interactions, without centralised control. This approach leverages the strengths of individual robots while mitigating their limitations, thereby achieving robustness, scalability, and flexibility in operations [40].
In forestry applications with drones, swarm robotics offers several advantages, such as increased coverage area, redundancy, and the ability to perform tasks simultaneously, which are critical for efficient forest management. The decentralised nature of swarm robotics allows drones to adapt to dynamic environments, making real-time decisions based on local information. This is particularly beneficial in forest environments, where conditions can change rapidly, and obstacles are unpredictable and cluttered [42]. However, complexity increases with the number of drones. Vijay et al. [43] highlighted that obstacle avoidance becomes more complex when dealing with multiple robots compared to a single robot. Their work emphasised the challenges posed by inter-robot collision avoidance, ensuring safe spacing, and maintaining formation stability in a swarm setup. This complexity is compounded by scalability issues and the need for robust, decentralised systems for effective multi-robot coordination. Many swarm strategies lack the capability to avoid obstacles in cluttered real-world environments. One approach addressing this gap is the work of Quan et al. [8], which presents an optimisation-based method that ensures collision-free trajectory generation for formation flight in challenging environments. The authors presented a novel differentiable metric to quantify the overall similarity distance between swarm formations, which allows for spatial-temporal planning using polynomial trajectories.
Several other studies have demonstrated the potential of swarm robotics in forestry beyond formation control. For instance, the work by Madridano et al. [44] explored the use of swarms of drones for firefighting operations in forested areas. The study highlighted the effectiveness of swarm algorithms in coordinating multiple drones to cover large areas quickly and efficiently, thereby improving the chances of locating forest fires. For similar forest fire applications, the research by Hu et al. [45] proposed a fault-tolerant cooperation framework for networked drones, adopting a Lyapunov approach and a decentralised task reassignment algorithm to guarantee both the stability of the drones and their robustness in dealing with uncertainties.
One of the key challenges in implementing swarm robotics in forestry is the development of robust communication and coordination mechanisms. Effective communication is essential for maintaining cohesion within the swarm and ensuring that drones can share information and cooperate seamlessly. The use of wireless communication protocols, such as WiFi, Zigbee, and LoRa, has been explored to facilitate reliable communication in forest environments [46]. Additionally, the integration of AI techniques, such as machine learning and reinforcement learning, has shown promise in enhancing the decision-making capabilities of drones, allowing them to adapt to changing conditions and optimise their performance [47].
Another critical aspect of swarm robotics in forestry is the development of efficient task allocation and scheduling algorithms. These algorithms ensure that tasks are distributed optimally among the drones, maximising their collective efficiency and minimising resource consumption. The study by Peng et al. [48] surveyed dynamic task allocation for swarms of drones, establishing an allocation model and methods for solving it. The authors also evaluated several common dynamic task allocation algorithms, such as market-based mechanisms, intelligent optimisation algorithms, and clustering algorithms.
Despite the significant progress in swarm robotics, several challenges remain to be addressed. These include improving the scalability of swarm algorithms, enhancing the robustness of communication networks, and developing more sophisticated coordination mechanisms. Nonetheless, in its current form and by further integration with other capabilities within a robust ecosystem, swarm robotics holds great potential for revolutionising forestry management, enabling efficient and autonomous operations through the coordination of multiple drones. The integration of robust hardware and related communication protocols, advanced SLAM and overall perception, as well as efficient under-the-canopy navigation systems, is critical for realising the full potential of swarm robotics in this domain. This paper proposes such an integration effort, combining existing approaches glued together by new developments, thus opening up new opportunities for improving forest monitoring, management, and conservation efforts.

3. Multi-Drone PoC System Integration

This section describes the structure and functionality of the proposed solution. The integration process involves combining the LIO-SAM [27], DCL-SLAM [7], MRS UAV System [6], and Swarm Formation [8] frameworks into a unified system that leverages their individual strengths. The overarching goal is to create a robust, flexible, and efficient multi-drone system capable of performing complex forestry tasks autonomously.
The subsequent subsections will elaborate on the specific components and integration efforts undertaken to achieve this objective, highlighting the challenges encountered and the solutions implemented for real-world forestry operations. The proposed system architecture is presented in Figure 1. The diagram illustrates the architecture of a multi-robot system (MRS) designed for autonomous operations using multiple UAVs. It integrates key components for achieving real-time navigation, mapping, and cooperative behaviour within a swarm of drones. The specific application domain of this work is forestry management, particularly forestry inventory tasks such as cataloguing plants and fruits and assessing tree and vegetation density at a given location.
The system is divided into several modules, each fulfilling a critical role:
  • UAV-specific Modules: Each drone is equipped with an onboard system that includes controllers, state estimators, and planners. These components ensure precise trajectory tracking and obstacle avoidance. The dataflow through sensors, such as LiDAR, GNSS, and IMU, supports real-time state estimation and navigation;
  • Mapping and Localisation Framework: The system employs a distributed SLAM framework. It includes modules like loop closure, pose graph optimisation, and keyframe selection to enable robust map generation and localisation;
  • Global Map Service: A centralised mapping service aggregates data from individual drones to build a comprehensive global map, facilitating coordinated navigation and task execution;
  • Swarm Formation and Communication: The system uses a formation tracker and path generator to manage the movements of multiple drones. Communication is facilitated through a peer-to-peer network, enabling seamless lightweight data sharing and synchronisation among drones;
  • Hardware Integration: The architecture incorporates sensors and actuators to provide environmental awareness and ensure smooth execution of control commands. These include 3D LiDAR, IMU, and GNSS modules;
  • Wireless Network: The wireless network forms the backbone for inter-drone communication, ensuring efficient collaboration within the swarm.
The colour-coded legend highlights components adapted, developed, or implemented from the original system.
The notation used in this article follows the one established in [6]. The drone state vector is represented by
$$\mathbf{x} = (\mathbf{r}, \dot{\mathbf{r}}, \ddot{\mathbf{r}}, \mathbf{R}, \dot{\mathbf{R}})^T,$$
where the position of the drone is represented by the vector $\mathbf{r} = [x, y, z]^T$, and $\dot{\mathbf{r}}$ and $\ddot{\mathbf{r}}$ correspond to the linear velocity and acceleration vectors, respectively. In addition, $\mathbf{R} = [\phi, \theta, \psi]$ represents the orientation in the world coordinate frame, and $\dot{\mathbf{R}}$ is the angular velocity (Figure 2).
These states are interconnected through a non-linear model that includes translational and rotational components, as follows:
$$m\ddot{\mathbf{r}} = f\mathbf{R}\hat{\mathbf{e}}_3 - mg\hat{\mathbf{e}}_3$$
and
$$\dot{\mathbf{R}} = \mathbf{R}\boldsymbol{\Omega},$$
where $m \in \mathbb{R}$ is the nominal drone mass [kg], $f \in \mathbb{R}$ is the total thrust force produced by the propellers [N], $g \in \mathbb{R}$ is the gravitational acceleration [m s$^{-2}$], $\boldsymbol{\Omega}$ is the tensor of angular velocity, under the condition $\boldsymbol{\Omega}\mathbf{v} = \boldsymbol{\omega} \times \mathbf{v}, \forall \mathbf{v} \in \mathbb{R}^3$, and $\boldsymbol{\omega} = [\omega_1, \omega_2, \omega_3]^T$ is the angular velocity in the body frame of a drone.
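To make the model concrete, the following minimal sketch integrates the two equations above with a forward-Euler step; the mass value is arbitrary and the SVD re-orthonormalisation is a numerical convenience, not part of the model:

```python
import numpy as np

def skew(w):
    """Angular velocity tensor Omega: skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def step(r, v, R, f, omega, m=2.0, g=9.81, dt=0.01):
    """One forward-Euler step of m*r_dd = f*R@e3 - m*g*e3 and R_dot = R@Omega."""
    e3 = np.array([0.0, 0.0, 1.0])
    a = (f / m) * (R @ e3) - g * e3          # translational dynamics
    r_next = r + dt * v
    v_next = v + dt * a
    R_next = R + dt * (R @ skew(omega))      # rotational kinematics
    U, _, Vt = np.linalg.svd(R_next)         # project back onto SO(3)
    return r_next, v_next, U @ Vt

# hover check: thrust equal to weight yields zero net acceleration
r, v, R = np.zeros(3), np.zeros(3), np.eye(3)
r, v, R = step(r, v, R, f=2.0 * 9.81, omega=np.zeros(3))
```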

3.1. Planning, Navigation, and Control

In forestry applications, control, planning, and navigation are critical for ensuring that autonomous drones perform their tasks efficiently and safely. Control systems maintain stability and manoeuvrability, enabling drones to respond to environmental conditions and execute precise movements. Planning involves path finding and task allocation, optimising routes to navigate complex forest environments and avoid obstacles. Navigation systems integrate sensors and algorithms to provide real-time positioning, essential for autonomous movement and coordination within swarms. Forests present unique challenges, such as dense vegetation and variable terrain, requiring robust control algorithms, effective path planning, and advanced navigation to operate in GNSS-denied areas. These integrated systems enable drones to operate independently, collaborate effectively, and adapt to dynamic conditions in real-time [49]. The following subsection explores the MRS UAV System, a framework that incorporates these elements for autonomous forestry operations.

3.1.1. MRS UAV System

The MRS UAV System, developed by the Multi-Robot Systems group at the Czech Technical University in Prague, is an advanced framework for controlling and managing drones in both simulated and real-world environments [6]. This framework was designed to support a wide range of research and operational tasks by providing a robust, flexible, and modular software architecture that facilitates the integration of various sensors, controllers, and mission-specific algorithms. Key components of the MRS UAV System include:
  • Multi-Frame Localisation: This component provides drone state estimation using multiple sensors across various reference frames. It ensures accurate localisation even in GNSS-denied environments by fusing data from the low-level controller and related sensors (IMU, barometer, GNSS, etc.), as well as the output from SLAM methods.
  • Feedback Control: The system features advanced feedback control mechanisms, including Model Predictive Control (MPC) and SE(3) geometric tracking, which enable precise and aggressive manoeuvring. These controllers are designed to handle noisy state estimates and ensure stability during complex flight operations.
  • Modular Software Architecture: The MRS UAV System is modular, facilitating easy integration and customisation. This modularity allows for the incorporation of new control methods, sensor systems, and mission-specific functionalities.
  • Realistic Simulations: The framework includes simulations of drones, allowing researchers to test and validate their algorithms in a controlled virtual environment before deploying them in real-world scenarios.
The potential strengths of the MRS UAV System for forestry drone applications are manifold:
  • Robust Localisation: The multi-frame localisation capability ensures the accurate positioning of multiple drones individually in dense forest environments, where GNSS signals are weak or obstructed. This is crucial for tasks, such as forest mapping and monitoring.
  • Precise Control: The advanced feedback control mechanisms enable drones to navigate through complex and cluttered environments, such as forests, with high precision and stability.
  • Scalability and Flexibility: The modular architecture allows for easy integration of additional sensors and control algorithms, making the system highly adaptable to various forestry applications.
  • Comprehensive Testing: The simulation environment enables thorough testing and validation of algorithms, ensuring reliable performance in real-world forestry missions.
The MRS UAV system is composed of multiple interconnected subsystems, as shown in Figure 1. However, these modules will not be the primary focus here, as their potential applications have already been demonstrated in previous work [6]. In brief, the sensor data are processed by the state estimation block, which provides the controllers with hypotheses of the states of the drones. The desired trajectory is processed by a trajectory tracker module, which receives the desired path from the MRS Octomap Planner (https://github.com/ctu-mrs/mrs_octomap_planner, accessed on 20 January 2025) (see Section 3.1.3). This trajectory is then converted into a feasible and smooth full-state control reference.
Within the context of this work, integrating the MRS UAV System into the proposed multi-drone PoC system involved several key steps to ensure seamless compatibility and functionality.

3.1.2. State Estimation

The purpose of the state estimator block (Figure 1) in the MRS UAV System is to manage the integration of multiple state estimators that provide flight data such as position, velocity, and orientation. It ensures a reliable fusion of sensor inputs and odometry/localisation methods while maintaining robustness against sensor failures and noise. Through dynamic switching between estimators and real-time data validation, the estimator manager improves navigation accuracy and system resilience, enabling drones to perform complex missions autonomously. The main features include the execution of a bank of estimators (Figure 3) according to the sensors and localisation systems, the generation of ROS TFs between all frames of reference, and the in-flight switching of the active estimator used for control.
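To illustrate the idea of a bank of estimators with in-flight switching, the following conceptual sketch, which is not the MRS UAV System’s actual API and uses hypothetical class and method names, falls back to the first healthy estimator when the active one’s position covariance grows too large:

```python
import numpy as np

class EstimatorBank:
    """Conceptual sketch: run several state estimators in parallel and
    switch the active one when its health check fails."""

    def __init__(self, estimators, cov_threshold=1.0):
        self.estimators = estimators        # ordered by preference
        self.cov_threshold = cov_threshold
        self.active = estimators[0]

    def _healthy(self, est):
        state, cov = est.latest()
        # valid output and position covariance below the configured threshold
        return state is not None and np.trace(cov[:3, :3]) < self.cov_threshold

    def update(self):
        if not self._healthy(self.active):
            for est in self.estimators:     # fall back to the first healthy one
                if self._healthy(est):
                    self.active = est
                    break
        return self.active.latest()
```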
In this work, a state estimation plugin was built based on corrections from DCL-LIO-SAM (the LIO-SAM front-end integrated with DCL-SLAM) [7]. DCL-LIO-SAM is explained in Section 3.2.

3.1.3. Octomap Planner

The MRS Octomap Planner includes algorithms for path planning in 3D environments represented by a 3D occupancy grid known as octomap [50]. After planning the path, a trajectory is generated along it and sent as a request to the MRS trajectory tracker for execution. During trajectory generation, the node accounts for the drone’s current state and appends the new path to the existing prediction horizon while adhering to dynamic constraints. Additionally, the MRS Octomap Planner supports periodic path replanning, ensures that the current trajectory avoids collisions with the latest environmental map, and mitigates potential collisions by truncating any trajectory segments that intersect with obstacles.
This package was modified to enhance inter-drone coordination by enabling drones to share their planned trajectories with one another. This modification ensures that each drone is aware of the trajectories of its peers, facilitating collaborative navigation and reducing the risk of trajectory conflicts within the swarm, as illustrated by the sketch below.
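The sketch below shows one way such trajectory sharing could be wired up in ROS 1; the topic names and the choice of nav_msgs/Path are assumptions for illustration, not the modified package’s actual interface:

```python
import rospy
from nav_msgs.msg import Path

class TrajectoryExchange:
    """Conceptual ROS 1 sketch of inter-drone trajectory sharing."""

    def __init__(self, my_name, peer_names):
        self.peer_paths = {}                 # latest planned path of each peer
        self.pub = rospy.Publisher("/%s/planned_trajectory" % my_name,
                                   Path, queue_size=1)
        for peer in peer_names:
            rospy.Subscriber("/%s/planned_trajectory" % peer, Path,
                             self._on_peer_path, callback_args=peer)

    def _on_peer_path(self, msg, peer):
        self.peer_paths[peer] = msg          # store for conflict checking

    def share(self, path_msg):
        self.pub.publish(path_msg)           # broadcast our own plan

# usage from a drone's planner loop:
# rospy.init_node("uav1_traj_exchange")
# exchange = TrajectoryExchange("uav1", ["uav2", "uav3"])
```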

3.2. Localisation and Mapping

As previously stated, SLAM approaches are critical for autonomous drone operations in forestry applications. Accurate localisation allows drones to determine their position within a forest, which is essential for navigation, path planning, and task execution. Mapping, on the other hand, involves creating detailed representations of the forest environment, including terrain features, vegetation, and obstacles. These maps are crucial for various forestry management tasks, such as monitoring forest health, assessing biomass, detecting changes over time, and planning interventions. In dense and dynamic forest environments, traditional GNSS-based systems often fall short due to signal occlusion and multipath effects. Therefore, advanced SLAM techniques need to be employed to ensure reliable and precise localisation and mapping under these scenarios.
In order to enable multiple drones to work together in building a consistent and comprehensive map of the environment while maintaining accurate localisation, collaborative SLAM methods are designed to incorporate mechanisms for data sharing, collaborative loop closure, and distributed optimisation across a network of robots.
In this work, collaborative SLAM capabilities were achieved by integrating two frameworks, namely DCL-SLAM (Distributed Collaborative LiDAR SLAM) [7] and the front-end module based on LIO-SAM (LiDAR Inertial Odometry via Smoothing and Mapping) [27]. The integration of these two algorithms is called DCL-LIO-SAM.
DCL-SLAM is a framework designed for real-time, distributed localisation and mapping using collaborative multi-robot systems. It can enable multiple drones to work together to create a comprehensive map of an environment, while simultaneously localising themselves within that map. The collaborative nature of DCL-SLAM makes it particularly suitable for large-scale and complex environments, such as forests, where single-robot SLAM might be insufficient due to coverage limitations and environmental dynamics.
This framework is structured around three critical components that enable efficient collaborative mapping and localisation:
  • Single-robot Front-end: It interfaces with single-robot front-ends based on various LiDAR odometry methods in order to ensure accurate pose estimation and local map generation;
  • Distributed Loop Closure: It facilitates inter-robot collaboration by identifying overlapping areas between maps generated by different drones, using lightweight descriptors for efficient place recognition;
  • Distributed Back-end: It integrates these loop closures into a pose graph optimisation process, refining the global map and ensuring consistent and accurate localisation across the swarm.

3.2.1. Single-Robot Front-End

In this work, LIO-SAM is adopted as a single-robot front-end. LIO-SAM is an advanced framework designed for real-time LiDAR-inertial odometry, further enhanced with GNSS estimations whenever available, using factor graph optimisation. Developed by Shan et al. [27], LIO-SAM leverages a tightly coupled, iterative smoothing and mapping approach to ensure high-precision odometry, making it suitable for dynamic and cluttered environments such as forests. The LIO-SAM framework operates by processing raw LiDAR scans and IMU measurements, which can be used to estimate the drone’s pose in real-time. The key components of the LIO-SAM framework include the following:
  • Preprocessing: This step involves extracting features from raw LiDAR scans. These features include edge and planar points, which are critical for accurate pose estimation.
  • IMU Integration: IMU measurements are integrated to provide high-frequency motion estimates, which can be used to predict the drone’s pose between LiDAR scans.
  • Factor Graph Optimisation: LIO-SAM constructs a factor graph that incorporates LiDAR, IMU, and GNSS measurements. The factor graph is optimised using an iterative smoothing algorithm, which refines the pose estimates by minimising the overall error.
  • Loop Closure: The framework also includes a loop closure mechanism to detect and correct drift by recognising previously visited locations. This enhances the long-term accuracy and consistency of the map.

3.2.2. Distributed Loop Closure

The distributed loop closure module in DCL-SLAM enables effective collaboration among multiple robots by detecting overlapping regions between their respective maps. It operates through a four-step process: keyframe selection, description, search, and verification. Keyframes are selected when significant changes in position or orientation occur, and a lightweight global descriptor, LiDAR-Iris, is used for efficient place recognition with minimal data exchange. A three-stage communication pipeline is employed to share descriptors, filtered point clouds, and matching results, ensuring low bandwidth usage and resilience to limited connectivity. Detected loop closures are verified using techniques like ICP and RANSAC, and the resulting relative poses are integrated into a distributed optimisation process. This module reduces accumulated drift and contributes to creating a consistent global map across the swarm.
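To make the pipeline concrete, the sketch below illustrates the keyframe selection and descriptor search stages; the thresholds are illustrative, and a plain cosine similarity stands in for the LiDAR-Iris descriptor comparison:

```python
import numpy as np

def is_new_keyframe(pose, last_kf, d_thresh=1.0, yaw_thresh=0.3):
    """Stage 1: select a keyframe when position or heading changes enough.
    pose = [x, y, z, yaw]; thresholds are illustrative."""
    dp = np.linalg.norm(pose[:3] - last_kf[:3])
    dyaw = abs((pose[3] - last_kf[3] + np.pi) % (2 * np.pi) - np.pi)
    return dp > d_thresh or dyaw > yaw_thresh

def search_candidates(query_desc, peer_descs, sim_thresh=0.9):
    """Stage 3: search peers' compact descriptors for loop-closure candidates.
    Every candidate must still pass geometric verification (ICP + RANSAC)."""
    best_id, best_sim = None, sim_thresh
    for kf_id, desc in peer_descs.items():
        sim = np.dot(query_desc, desc) / (
            np.linalg.norm(query_desc) * np.linalg.norm(desc))
        if sim > best_sim:
            best_id, best_sim = kf_id, sim
    return best_id
```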

3.2.3. Distributed Back-End

The estimated poses of the drones are refined using Pose Graph Optimisation (PGO) [51], a method that optimises the estimated poses of robots by minimising the error across a graph representing the spatial relationships between different robots or between different points in a single robot’s trajectory. Each node in the pose graph corresponds to a robot’s pose at a particular time, and the edges represent measurements or constraints (e.g., odometry or loop closures between two robots). In DCL-SLAM, the PGO module consists of three types of factors, assembled as sketched after the list below:
  • Odometry factors ($\mathcal{F}^{r}_{odom}$): these are constraints based on the incremental movement of a robot (using LiDAR odometry);
  • Intra-robot loop closure factors ($\mathcal{F}^{r}_{intra}$): these help to correct drift within a single robot’s trajectory;
  • Inter-robot loop closure factors ($\mathcal{F}^{r}_{inter}$): these constraints come from loop closures between different robots.
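The following minimal sketch assembles such a pose graph using GTSAM, the optimisation library underlying LIO-SAM. Note that DCL-SLAM performs this optimisation in a distributed fashion, whereas this sketch builds a single centralised graph for simplicity; the keys, measurements, and noise values are illustrative rather than taken from DCL-SLAM’s implementation:

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))  # rot + trans

# Illustrative key convention: robot A owns keys 0..99, robot B owns 100..199.
move_x = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))

graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), noise))          # gauge anchor
graph.add(gtsam.BetweenFactorPose3(0, 1, move_x, noise))            # odometry factor
graph.add(gtsam.BetweenFactorPose3(1, 0, move_x.inverse(), noise))  # intra-robot loop
graph.add(gtsam.BetweenFactorPose3(1, 100, gtsam.Pose3(), noise))   # inter-robot loop

initial = gtsam.Values()
initial.insert(0, gtsam.Pose3())
initial.insert(1, move_x)
initial.insert(100, move_x)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(100).translation())  # robot B's pose, consistent with A's graph
```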

3.2.4. Advancements in Frameworks

Within the context of this work, the integration of these frameworks into the proposed multi-drone PoC system involved a number of minor but key steps to ensure compatibility and functionality within the broader system architecture. These efforts included the following:
  • Enable IMU frequency configuration: Add support for IMU nominal frequency parameter to ensure synchronisation with system timing, essential for accurate localisation.
  • Refine GNSS factor and loop closure: Correct GNSS factor loop and adjust loop closure topic names to ensure robust localisation and prevent data processing errors. GPS factors ($\mathcal{F}^{r}_{gps}$) have been added to the PGO module.
  • Add support for global map publishing and search for initial pose: Include functions for loading and publishing global maps, alongside initial pose search using scan context descriptors (SCD), to improve mapping capabilities.
  • Implement node management: Add functionality for node termination, restart, and action server with feedback, allowing dynamic control and monitoring of the SLAM process.
  • Incorporate covariance monitoring and odometry information: Integrate covariance thresholds and odometry covariance data to enhance state estimation reliability.
  • Update graph management and localisation modes: Add save/load functions for factor graphs and introduce various localisation modes to support flexible and adaptable mapping solutions.
  • Enable global map service and consistent naming conventions: Integrate a global map service and standardise naming and frames for compatibility with the MRS UAV System, facilitating a unified spatial understanding across the swarm.
These key components point to the following potential strengths for forestry drone applications:
  • Enhanced Coverage: By utilising multiple drones, DCL-SLAM can cover larger areas more efficiently than single-robot SLAM systems. This is crucial for extensive forest mapping and monitoring tasks.
  • Improved Accuracy: The collaborative nature of DCL-SLAM allows for the sharing and fusion of data from multiple sources, leading to more accurate and robust localisation and mapping.
  • Scalability: DCL-SLAM’s distributed architecture makes it highly scalable, allowing for the addition of more drones to the network without significant performance degradation.
  • Resilience to Environmental Changes: The framework’s ability to integrate data from multiple drones helps to quickly adapt to dynamic environmental changes, such as moving obstacles and varying lighting conditions in forests.
  • High Accuracy and Robustness: The integration of LiDAR and IMU data provided by LIO-SAM ensures high accuracy in pose estimation, even in environments with limited GNSS signals, such as dense forests.

3.3. Formation Flight

Formation flight, a critical aspect of swarm robotics, involves coordinating multiple drones to fly in a structured and cohesive manner. This capability is particularly relevant in forestry applications, where synchronised operations can significantly enhance the efficiency and effectiveness of various tasks. Formation flight allows for optimised coverage of large forest areas, enabling drones to perform simultaneous data collection, monitoring, and mapping. By maintaining precise relative positions, drones can avoid collisions, distribute tasks dynamically, and improve the overall robustness of the swarm operation. This coordinated approach not only increases the accuracy of the data collected, but also enhances the resilience of the swarm to environmental disturbances and individual drone failures. The ability to perform formation flight is thus a key capability for advanced forestry management practices, facilitating tasks, such as biomass estimation, canopy monitoring, and forest health assessment with greater precision and efficiency.

3.3.1. Swarm Formation

The Swarm Formation framework, developed by Quan et al. [8], is an advanced method for ensuring collision-free trajectory generation for formation flights in dense environments. The framework is designed to navigate swarms of drones in a prescribed formation, leveraging optimisation-based strategies to manage both formation maintenance and obstacle avoidance simultaneously.
The Swarm Formation operates on the principle of distributed swarm trajectory optimisation. It introduces a novel differentiable metric to quantify the overall similarity distance between formations. This metric, based on graph Laplacians, assesses the difference between the current and desired formations, allowing for precise formation control while maintaining flexibility in the presence of obstacles. The framework uses polynomial trajectories for spatial-temporal planning, which are optimised to minimise collision penalties and maintain formation integrity.
In Quan’s framework, a formation of $N$ robots is modelled by an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, 2, \ldots, N\}$ is the set of vertices, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. In the graph $\mathcal{G}$, the vertex $i$ represents the $i$-th robot with position vector $\mathbf{r}_i = [x_i, y_i, z_i]^T \in \mathbb{R}^3$, defined in (1). The formation graph $\mathcal{G}$ is complete, which means the following:
  • An edge $e_{ij} \in \mathcal{E}$ connecting vertices $i \in \mathcal{V}$ and $j \in \mathcal{V}$ implies that robots $i$ and $j$ can measure the geometric distance between each other;
  • All vertices are interconnected;
  • Each edge of the graph $\mathcal{G}$ is associated with a non-negative weight, which is given by
    $$w_{ij} = \|\mathbf{r}_i - \mathbf{r}_j\|^2, \quad \forall (i, j) \in \mathcal{E},$$
    where $\|\cdot\|$ denotes the Euclidean norm.
Using the edge weights, the adjacency matrix $\mathbf{A} \in \mathbb{R}^{N \times N}$ and the degree matrix $\mathbf{D} \in \mathbb{R}^{N \times N}$ of the graph $\mathcal{G}$ are defined. The corresponding Laplacian matrix is then computed as
$$\mathbf{L} = \mathbf{D} - \mathbf{A}.$$
Based on these matrices, the symmetric normalised Laplacian matrix of the graph $\mathcal{G}$ is given by
$$\hat{\mathbf{L}} = \mathbf{D}^{-1/2}\mathbf{L}\mathbf{D}^{-1/2} = \mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2},$$
where $\mathbf{I} \in \mathbb{R}^{N \times N}$ is the identity matrix.
The normalised Laplacian matrix $\hat{\mathbf{L}}$ encapsulates information about the structure of the graph. By normalising the graph Laplacian with the degree matrix, this representation becomes invariant to scaling, rotation, and translation of the formation.
To achieve the desired swarm formation, Quan et al. propose a formation similarity distance metric as
$$f = \|\hat{\mathbf{L}} - \hat{\mathbf{L}}_{des}\|_F^2 = \mathrm{tr}\{(\hat{\mathbf{L}} - \hat{\mathbf{L}}_{des})^T(\hat{\mathbf{L}} - \hat{\mathbf{L}}_{des})\},$$
where $\mathrm{tr}\{\cdot\}$ denotes the trace of a matrix, $\hat{\mathbf{L}}$ is the symmetric normalised Laplacian of the current swarm formation, and $\hat{\mathbf{L}}_{des}$ is the symmetric normalised Laplacian of the desired formation. The Frobenius norm $\|\cdot\|_F$ is used in this distance metric.
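Following the definitions above, the metric and its invariance to scaling, rotation, and translation can be checked numerically with a short sketch (edge weights taken as squared inter-robot distances, per the definition above):

```python
import numpy as np

def normalised_laplacian(positions):
    """Symmetric normalised Laplacian of the complete formation graph."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    A = np.sum(diff ** 2, axis=-1)                   # adjacency weights w_ij
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt   # I - D^-1/2 A D^-1/2

def formation_distance(current, desired):
    """f = ||L_hat - L_hat_des||_F^2."""
    d = normalised_laplacian(current) - normalised_laplacian(desired)
    return np.linalg.norm(d, "fro") ** 2

# A scaled and shifted copy of a triangle formation has distance ~0:
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.9, 0.0]])
print(formation_distance(tri, 2.0 * tri + 5.0))      # ~0.0
```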
Key components of the Swarm Formation framework include the following:
  • Formation Similarity Metric: This metric uses undirected graphs to represent the formation of drones, with vertices representing individual drones and edges representing the distances between them. The Laplacian matrix of the graph [52] is used to measure the similarity between the current and desired formations, ensuring that the swarm maintains its shape while avoiding obstacles.
  • Optimisation Framework: The framework formulates the trajectory generation as an unconstrained optimisation problem. The cost function includes terms for control effort, total time, obstacle avoidance, formation similarity, swarm reciprocal avoidance, dynamic feasibility, and uniform distribution of constraint points. The optimisation process simultaneously balances these factors to generate collision-free trajectories that preserve the formation.
  • Distributed Architecture: The framework is designed for distributed implementation, where each drone independently calculates its trajectory based on local information and shared data from other drones. This approach enhances scalability and robustness, allowing the swarm to adapt to dynamic environments and maintain formation in real time.

3.3.2. Swarm-MRS Bridge

Despite the advantages presented by Quan et al. [8], initialising grid maps in large environments presents several challenges that can impact efficiency and performance.
One major drawback is the high memory consumption required to store the large number of cells, particularly when fine resolution is needed to capture detailed features. This often necessitates reducing the grid resolution, which can lead to the loss of critical details, such as small obstacles or fine terrain variations.
The initialisation process itself can be time-consuming, and updating the grid during runtime may result in delays, hindering real-time operations. Furthermore, in large environments, much of the grid may remain sparse or underutilised, leading to inefficiencies in storage and computation.
Scalability is another concern, as the grid size grows quadratically in 2D or cubically in 3D, making it impractical to handle extremely large areas without advanced techniques. Additionally, defining appropriate grid boundaries is challenging in dynamic or unknown environments; insufficient boundaries may require costly reinitialisation, while overly large ones waste resources.
Finally, path planning and navigation algorithms operating on large grids face increased computational complexity due to the expanded search space, which can slow down decision-making. To mitigate these issues, a more scalable approach leveraging the advanced features offered by the MRS UAV system has been chosen.
The MRS system’s capabilities for distributed coordination and path planning were utilised, enabling optimised drone missions by replacing traditional grid mapping with octomaps. This approach provided greater flexibility and efficiency when operating across heterogeneous terrains.
One of the standout features of the MRS system that was integrated into the swarm formation was the Octomap obstacle avoidance planner. The Octomap framework provides a 3D occupancy grid map that is updated in real time, making it ideal for dynamic environments such as forests, where obstacles such as trees, branches, and changing terrain features are prevalent. By using octomaps (Figure 4a), each drone in the swarm is able to autonomously detect and avoid obstacles during flight, ensuring safety and minimising the risk of collision.
Based on a leading drone and the formation similarity distance metric in (7) between it and the other drones, this feature is fundamental in allowing the drones to operate in close proximity to each other while maintaining an efficient flight path, which is essential for covering large areas of the forest with minimal overlap and maximum data collection (Figure 4b).
Moreover, the MRS UAV system facilitated inter-drone communication and coordination, enabling the drones in the swarm to share critical information about their positions, obstacles, and mission statuses. Through this coordination, the system could dynamically adjust drone paths in response to real-time environmental changes. If a drone encountered an unforeseen obstacle or a flight path became obstructed, the swarm could adapt by rerouting affected drones and redistributing tasks across other agents. This autonomous adaptability, paired with the system’s advanced obstacle avoidance and path planning, enhanced the overall robustness and scalability of the multi-drone operations, making it well-suited for large-scale forestry mapping applications. By integrating MRS system features, the swarm was able to maintain effective coverage and ensure smooth operation across the entire forest site without the limitations imposed by grid map-based swarm planning.

3.4. Global Map Service

In collaborative robotics, especially when employing multiple robots equipped with SLAM systems, constructing a unified global map from individual local maps is essential for effective team-based navigation and task coordination. In this work, we have developed a service that aggregates these local maps and builds a comprehensive global map using the Iterative Closest Point (ICP) method [53] (Figure 5). The ICP algorithm is a widely used technique in robotics for point cloud registration, where the goal is to align two sets of spatial points (in this case, overlapping regions from different local maps) by minimising the distance between corresponding points.
The service begins by collecting the local maps generated by each robot. Each local map typically consists of point clouds representing the environment explored by an individual robot. Once the maps are received, the ICP method is applied iteratively to align these point clouds. The process involves an initial guess of the relative pose (position and orientation) between two overlapping maps, followed by matching points from one map to the nearest points in the other map. The algorithm then calculates a transformation (a combination of rotation and translation) that minimises the overall distance between these matched points.
The result is a globally consistent map that integrates information from all robots, significantly enhancing the understanding of the environment. Using ICP ensures precise alignment, which is crucial for accurate map merging, particularly in environments with high structural complexity or noise in sensor data (Figure 5b).
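For illustration, the core ICP alternation of matching and rigid alignment can be reduced to a few lines; a real map-merging service additionally needs downsampling, outlier rejection, and a good initial guess, which this sketch omits:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: align `source` (N x 3) onto `target` (M x 3)."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                  # 1. nearest-neighbour matching
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # 2. best rigid fit via SVD
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # 3. apply and iterate
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```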

3.5. Flight Control Optimisation

Optimising flight control is crucial for the effective deployment of autonomous drones in forestry applications, where complex and dynamic environments present significant challenges. In such settings, drones must navigate through dense vegetation, variable terrain, and changing weather conditions, requiring precise and adaptive control mechanisms. Flight control optimisation involves fine-tuning various control parameters to enhance stability, responsiveness, and energy efficiency, ensuring that drones can perform tasks such as mapping, monitoring, and data collection with high accuracy and reliability. The following subsections will delve into the specific aspects of flight control optimisation, including the identification of key control parameters, the application of Particle Swarm Optimisation (PSO) techniques [54], and the development of an auto-tuning algorithm to dynamically adjust these parameters on the physical drone in real time.

3.5.1. Control Parameters

Optimising flight control parameters is a critical aspect of ensuring the stable and efficient operation of drones, particularly in the complex environments typical of forestry applications. These parameters include various gains within the control algorithms, which need to be finely tuned to adapt to dynamic conditions and maintain precise navigation. In the context of the MRS UAV System framework, various controllers are available to manage these parameters, each offering different advantages based on the application requirements. For our multi-drone PoC, we have selected the SE(3) controller [55] due to its robustness and compatibility with our chosen odometry estimator.
The SE(3) controller, named after the special Euclidean group in three dimensions, is a sophisticated control framework that integrates position and orientation control into a unified structure [55]. This theoretical foundation allows for precise manipulation of the drone's pose in 3D space, which is essential for navigating the dense and cluttered environments found in forests. The SE(3) controller's ability to handle complex manoeuvres and maintain stability under varying conditions makes it particularly suitable for our application. Additionally, the choice of DCL-SLAM as the odometry estimator complements the SE(3) controller by providing accurate and real-time pose estimation, which is crucial for the controller to perform optimally.
Therefore, a certain level of detail on the key control parameters associated with the SE(3) controller is required to better understand their roles in achieving optimal flight performance. The controller is based upon the SE(3) geometric tracking feedback [55] with the addition of disturbance compensation. The desired force is defined as [6]

$$\mathbf{f}_d = m_e k_p \mathbf{e}_p + m_e k_v \mathbf{e}_v + m_e \ddot{\mathbf{r}}_d + m_e g \hat{\mathbf{e}}_3 + \mathbf{d}_w \circ [1\ 1\ 0]^\top + \mathbf{d}_b \circ [1\ 1\ 0]^\top,$$

where $\mathbf{f}_d$ is the desired thrust force produced by the controller, with the scalar thrust given by $f_d = \mathbf{f}_d \cdot \hat{\mathbf{b}}_3$; $m_e\ [\mathrm{kg}]$ is the estimated drone mass; $\ddot{\mathbf{r}}_d\ [\mathrm{m\,s^{-2}}]$ is the desired acceleration; $g\ [\mathrm{m\,s^{-2}}]$ is the magnitude of the gravitational acceleration; and $\circ$ denotes the element-wise product. $k_p$ are the position gains, $k_v$ are the velocity gains, and $\mathbf{e}_v = \dot{\mathbf{r}} - \dot{\mathbf{r}}_d$ is the velocity control error. The disturbance terms, $\mathbf{d}_w, \mathbf{d}_b\ [\mathrm{m\,s^{-2}}]$, are integrals of the apparent force acting on the drone in the world frame and in the body frame, respectively. They are obtained as follows:

$$\mathbf{d}_w = \int_0^t k_{iw}\, \mathbf{e}_w\, \mathrm{d}\tau, \qquad \mathbf{d}_b = \int_0^t k_{ib}\, \mathbf{R}(\tau)^\top \mathbf{e}_w\, \mathrm{d}\tau,$$

where $k_{iw}$ and $k_{ib}$ are the world integral gains and the body integral gains, respectively, and $\mathbf{e}_w = \mathbf{r} - \mathbf{r}_d$ is the control error in the world frame.
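To make the role of each term explicit, the snippet below evaluates this desired-force expression for a given state. It is a minimal sketch with our own variable names, assuming NumPy; the masking of the disturbance terms onto the lateral axes mirrors the $[1\ 1\ 0]^\top$ factor above.

```python
import numpy as np

def desired_force(m_e, k_p, k_v, e_p, e_v, r_dd_des, d_w, d_b, g=9.81):
    """Sketch of the SE(3) desired-force equation above (sign conventions
    follow the cited controller; all vectors are 3D NumPy arrays)."""
    e3 = np.array([0.0, 0.0, 1.0])       # world z axis
    lateral = np.array([1.0, 1.0, 0.0])  # disturbances compensated in x, y only
    return (m_e * k_p * e_p              # position feedback
            + m_e * k_v * e_v            # velocity feedback
            + m_e * r_dd_des             # feed-forward desired acceleration
            + m_e * g * e3               # gravity compensation
            + d_w * lateral              # world-frame integral disturbance
            + d_b * lateral)             # body-frame integral disturbance
```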

3.5.2. Particle Swarm Optimisation (PSO)

PSO is a powerful computational method inspired by the social behaviour of bird flocking and fish schooling, used to solve optimisation problems by iteratively improving candidate solutions with respect to a given measure of quality [56]. Due to its simplicity, flexibility, and effectiveness in finding optimal solutions in complex, multi-dimensional spaces, PSO has been successfully applied to a wide range of problems, leading to the development of numerous variants over the years [54,57]. In the context of flight control optimisation for autonomous drones, PSO is particularly effective due to its ability to navigate and optimise within the highly complex, non-linear parameter space characteristic of drone flight dynamics.
PSO works by having a swarm of particles (candidate solutions) that move through the search space. Each particle adjusts its position based on its own experience and the experience of neighbouring particles, effectively balancing exploration and exploitation to converge on an optimal solution. This approach is well-suited for tuning the control parameters of the SE(3) controller, where the search space can be vast and the relationships between parameters highly non-linear.
In the PSO algorithm, the position $\mathbf{x}_i[t]$ and velocity $\mathbf{v}_i[t]$ of each particle are updated at each iteration based on the following general discrete equations:

$$\mathbf{v}_i[t+1] = \omega\, \mathbf{v}_i[t] + c_1 r_1 \left( \mathbf{p}_i - \mathbf{x}_i[t] \right) + c_2 r_2 \left( \mathbf{g} - \mathbf{x}_i[t] \right),$$

$$\mathbf{x}_i[t+1] = \mathbf{x}_i[t] + \mathbf{v}_i[t+1],$$

where $\omega$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social coefficients, $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$, $\mathbf{p}_i$ is the best known position of particle $i$, and $\mathbf{g}$ is the best known global position among all particles.
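These update rules translate directly into code. The sketch below performs one synchronous PSO step over the whole population; the array shapes and hyperparameter defaults are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration; x, v, p_best have shape (n_particles, n_par)."""
    r1 = rng.random(x.shape)   # fresh uniform draws in [0, 1]
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```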
In applying PSO to optimise the flight control parameters of our multi-drone system, we define a search space based on the key control parameters identified in the previous section, all of which are critical to achieving stable and efficient flight. With $n_{par} = 11$, this leads to an 11-dimensional search space in which each particle represents a candidate solution: each particle $i$ is characterised by an 11-dimensional position vector $\mathbf{x}_i[t] = [x_{i1}[t], x_{i2}[t], \ldots, x_{i11}[t]]$ and an 11-dimensional velocity vector $\mathbf{v}_i[t] = [v_{i1}[t], v_{i2}[t], \ldots, v_{i11}[t]]$.
The fitness function, which guides the PSO algorithm, is designed to evaluate the performance of each parameter set in terms of flight stability, energy efficiency, and responsiveness to environmental changes. In this work, the fitness function is the root mean squared error (RMSE) between the position estimated by the DCL-SLAM estimator, $\mathbf{p}_{\mathrm{estimated}}$ (Section 3.2), and the desired (targeted) position $\mathbf{p}_{\mathrm{desired}}$. The fitness function $f$ can therefore be expressed as follows:

$$f = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left\| \mathbf{p}_{\mathrm{desired},i} - \mathbf{p}_{\mathrm{estimated},i} \right\|^2 },$$

where $n$ is the number of sample points, which may vary depending on the duration of each experimental assessment of individual particles (see next section).
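In code, evaluating a particle's fitness reduces to a few lines (a sketch, assuming the desired and estimated positions are stacked into $(n, 3)$ NumPy arrays):

```python
import numpy as np

def fitness(p_desired, p_estimated):
    """RMSE between desired and estimated 3D positions (Equation (12))."""
    err = np.linalg.norm(p_desired - p_estimated, axis=1)
    return np.sqrt(np.mean(err ** 2))
```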
Since this optimisation process is envisaged to be carried out on a physical drone within real-world experiments, a specific auto-tuning algorithm was designed to ensure that safety procedures are in place; it is further addressed in the next section.

3.5.3. Auto-Tuning Algorithm

The proposed auto-tuning algorithm leverages PSO to automatically adjust key control parameters of the drone’s SE(3) flight controller, enhancing flight performance by minimising the RMSE between the desired and actual drone positions during take-off (Equation (12)). The PSO algorithm iteratively refines the control parameters by evaluating the fitness of each particle (parameter set) through real-time drone flight tests.
Algorithm 1 presents the pseudocode of the auto-tuning algorithm, highlighting the key steps involved in the process. The algorithm starts by initialising roscore (http://wiki.ros.org/roscore, accessed on 20 January 2025) and defining the control parameters and their respective search spaces. It then sets the desired drone output and configures the PSO parameters, including the population size, number of iterations, and the inertia and acceleration coefficients. In every iteration, each particle's position and velocity are updated based on fitness evaluations derived from the drone's flight performance, which in turn move the candidate solutions, i.e., the key control parameters, through the search space. This automated approach not only ensures optimal parameter settings for improved flight stability and efficiency, but also significantly reduces the time and effort required for manual tuning.
The implementation of PSO for auto-tuning the SE(3) flight controller represents a key innovation in this study. This approach ensures that the auto-tuning algorithm dynamically adapts the flight controller to maintain optimal performance even under the challenging and variable conditions of real-world environments.
Algorithm 1 Auto-tuning algorithm for drone flight control using PSO
 1: Initialise ROS network and clients for setting parameters and subscribing to pose
 2: Define number of parameters (n_par) and their names
 3: Retrieve current parameter values from ROS service
 4: Define search space (x_min, x_max) for each parameter
 5: Set desired output (target 3D position) for the drone
 6: Initialise PSO parameters: population size, iterations, c1, c2, ω
 7: Initialise swarm positions and velocities within the search space
 8: Evaluate initial fitness f (Equation (12)) for each particle
 9: Identify global best position g based on fitness f
10: for each iteration i do
11:     Update particle velocities and positions
12:     for each particle j do
13:         Set drone controller parameters from particle position (via ROS service)
14:         Perform drone flight test and retrieve real output (via ROS subscriber)
15:         Calculate fitness f (Equation (12)) between real and desired outputs
16:         if current fitness < particle best fitness then
17:             Update particle's best position and best fitness
18:         end if
19:         if current fitness < global best fitness then
20:             Update global best position
21:         end if
22:     end for
23:     Log global best fitness for current iteration
24:     if stagnation criterion met then
25:         break
26:     end if
27: end for
28: Finalise by storing optimal parameters and shutting down ROS network
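A condensed Python sketch of Algorithm 1 is given below, combining the PSO step and RMSE fitness from Section 3.5.2 (repeated here for self-containment) with placeholder ROS plumbing. The pose topic /uav/slam_pose, the set_gains() stub, and all bounds and population sizes are hypothetical assumptions rather than the actual MRS UAV System or DCL-SLAM interfaces.

```python
#!/usr/bin/env python3
# Sketch of Algorithm 1. The pose topic and gain-setting mechanism below are
# hypothetical placeholders, not the real MRS UAV System / DCL-SLAM names.
import numpy as np
import rospy
from geometry_msgs.msg import PoseStamped

N_PAR, POP, ITERS, N_STG = 11, 10, 20, 5           # sizes are illustrative
x_min = np.full(N_PAR, 0.1)                        # placeholder lower gain bounds
x_max = np.full(N_PAR, 10.0)                       # placeholder upper gain bounds
target = np.array([0.0, 0.0, 3.0])                 # desired hover position [m]
rng = np.random.default_rng()

samples = []
def pose_cb(msg):                                  # collect SLAM position estimates
    p = msg.pose.position
    samples.append([p.x, p.y, p.z])

def set_gains(values):
    pass  # placeholder: push gains through the controller's ROS service

def fitness(p_des, p_est):                         # RMSE, Equation (12)
    return np.sqrt(np.mean(np.linalg.norm(p_des - p_est, axis=1) ** 2))

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

def evaluate(particle, hover_time=10.0):
    set_gains(particle)
    samples.clear()
    rospy.sleep(hover_time)                        # hover while pose_cb records estimates
    if not samples:
        return float("inf")
    p_est = np.asarray(samples)
    return fitness(np.tile(target, (len(p_est), 1)), p_est)

rospy.init_node("pso_auto_tuning")
rospy.Subscriber("/uav/slam_pose", PoseStamped, pose_cb)

x = x_min + rng.random((POP, N_PAR)) * (x_max - x_min)
v = np.zeros((POP, N_PAR))
p_best = x.copy()
p_fit = np.array([evaluate(p) for p in x])
g_best, best, stall = p_best[p_fit.argmin()], p_fit.min(), 0

for it in range(ITERS):
    x, v = pso_step(x, v, p_best, g_best)
    np.clip(x, x_min, x_max, out=x)                # keep particles inside the search space
    for j in range(POP):
        f = evaluate(x[j])
        if f < p_fit[j]:
            p_best[j], p_fit[j] = x[j].copy(), f
    if p_fit.min() < best:
        g_best, best, stall = p_best[p_fit.argmin()].copy(), p_fit.min(), 0
    else:
        stall += 1                                 # stagnation counter (N_stg)
    if stall >= N_STG:
        break

rospy.loginfo("Optimal gains: %s (RMSE %.3f m)", g_best, best)
```

In the field, the per-particle evaluation corresponds to the hover test of the experimental protocol in Section 4.2.1, with take-off, battery swaps, and emergency stops handled outside this loop.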

3.6. Wireless Communication

The nimbro_network (https://github.com/AIS-Bonn/nimbro_network, accessed on 20 January 2025) ROS package suite enables robust transport of ROS topics and services over high-latency, low-quality networks. Originally developed for the DLR SpaceBotCup and further tested in the DARPA Robotics Challenge, it has proven highly reliable. Unlike ROS's default network layer, which struggles with delays, lacks compression, and requires multiple handshakes for service calls, nimbro_network addresses these limitations by supporting both TCP and UDP protocols, optional BZip2 compression, automatic topic discovery, and rate-limiting. It also introduces advanced features like Forward Error Correction for UDP, a low-latency TCP option, and additional nodes for logging, TF snapshots, and H.264 camera transport. While it offers a powerful alternative to ROS's built-in networking, users seeking auto-discovery or job scheduling might also consider alternatives like rocon (https://wiki.ros.org/rocon, accessed on 20 January 2025) or multimaster_fkie (https://fkie.github.io/multimaster_fkie/, accessed on 20 January 2025).

4. Experimental Results

The experimental results presented in this section aim to validate the effectiveness and robustness of the proposed multi-drone system in real-world forestry applications. By conducting field experiments, we assess the performance of our system across various key aspects, including flight control optimisation and collaborative forestry mapping. These experiments were designed to test the capabilities of our system in terms of stability, accuracy, and efficiency, while navigating and mapping complex forest environments. The following subsections provide detailed descriptions of the drone configuration used, the experimental protocols adopted, and the results obtained from these field tests, offering insights into the practical viability and potential improvements for the proposed system.

4.1. Drone Configuration: Scout v3

Scout v3, presented in Figure 6, represents an evolution from Scout v2 ([58]) by replacing the pair of stereo cameras (ZED-X) with a single 3D LiDAR (LS LIDAR C16). This change was deliberate, informed by extensive performance evaluations of various visual- and LiDAR-based algorithms under forestry settings, where LiDAR solutions generally outperformed visual ones. The main reasons for this are the homogeneity of forest environments, which affects cameras more significantly due to variable lighting conditions and visual occlusions caused by dense foliage. In contrast, LiDAR provides a more consistent performance with accurate distance measurements and point cloud generation, crucial for effective mapping and navigation in such complex environments. Scout v3 combines 3D LiDAR, IMU, and GNSS with RTK to deliver geolocated 3D maps as point clouds and a reliable odometry estimation using LIO-SAM (single-drone) and DCL-SLAM (multi-drone).

Technical Specifications

The Scout v3 drone is designed for outdoor tasks, combining advanced technologies for autonomy, sensing, communication, and durability. Its detailed technical specifications are described below and summarised in Table 1.
Energy Autonomy: It is powered by two 6S LiPo batteries, each with a voltage of 22.2 V (6S) and a capacity of 4500 mAh, providing a flight time of approximately 12 min. For extended missions, the energy autonomy can be increased by switching to a single 6S LiPo battery with a capacity of 20,000 mAh, which boosts the flight time to nearly 18 min. This flexibility allows operators to choose the optimal battery configuration based on mission requirements, balancing flight time and payload capacity.
Sensing Payload: It is equipped with a robust suite of sensing payloads to capture detailed data for mapping, surveying, and inspection tasks. As previously stated, the primary sensor is a 16-channel 3D LiDAR capable of scanning at 320,000 points per second, offering high-resolution 3D imaging. Additionally, it includes a 9 DoF IMU for precise motion tracking and a GNSS-RTK (Real-Time Kinematic) system for centimetre-level positioning accuracy. For additional versatility, an optional back camera, such as a GoPro, can be added for capturing high-quality video and imagery.
Communication Technologies: It is equipped with multiple communication technologies to ensure reliable data transmission. It supports Dual-Band Wireless-AC WiFi 5 (802.11ac) with a maximum data transfer rate of 867 Mbps within a 50-m range. For longer-range operation, it also features 2.4 GHz radio communication for remote control with a maximum range of up to 4000 m in line of sight. Additionally, the drone includes an LTE Cat4 mobile router, enabling it to operate over greater distances by leveraging cellular networks. For enhanced security and reliability, it integrates ZeroTier, a software-defined network that functions as a VPN. However, unlike traditional VPNs, which focus on routing connectivity through a server for remote access, ZeroTier creates secure peer-to-peer networks that can be used to improve both the reliability and safety of drone operations. This ensures that communication between the drone and ground control is encrypted, reducing the risk of interference and unauthorised access, while enhancing overall operational security.
ROS Integration: It is powered by a NUC10i7FNKN mini-PC equipped with a 10th generation Intel Core i7 processor (6 cores at up to 4.70 GHz), ensuring smooth performance and real-time processing capabilities for demanding tasks. It runs the Linux-based operating system Ubuntu 20.04 with ROS Noetic Ninjemys, an open-source robotics framework that supports seamless integration with various robotic tools and applications. Additionally, it integrates MAVROS (https://github.com/mavlink/mavros, accessed on 20 January 2025) with the PX4 flight control system, enabling the drone to leverage the MAVLink communication protocol for enhanced control and monitoring. MAVROS acts as a bridge between ROS and PX4, allowing operators to control and communicate with the drone using high-level ROS commands while accessing the low-level flight control features provided by PX4. This integration provides a powerful, flexible framework for autonomous flight and real-time data collection, with full support for mission planning, telemetry, and sensor data streaming.
Durability and Manoeuvrability: It features a hexacopter frame design, measuring 800 mm in diameter with an X-shape configuration, ensuring excellent stability and manoeuvrability. The frame is made from carbon fibre, which provides a lightweight, durable structure, allowing the drone to carry its payloads efficiently while maintaining high performance. The motors are rated at KV380, offering sufficient thrust to support the drone's 5 kg weight and to keep it stable in winds of up to 20 km/h, making it suitable for moderate outdoor conditions. Additionally, the stackable design of the frame offers versatility, enabling drone customisation and enhancing its capability for carrying extra sensors, cameras, or other components.

4.2. Flight Control Optimisation

The flight control optimisation (FCO) experiments are designed to evaluate and enhance the performance of the drones within the complex and dynamic environments of forests. These experiments follow the approach presented in Section 3.5, focusing on fine-tuning the control parameters to achieve stable and efficient flight, ensuring that the drones can navigate through dense vegetation and varied terrain with high precision. By leveraging the PSO-based algorithm (Algorithm 1), we aim to automatically adjust these parameters to optimise flight stability, energy efficiency, and responsiveness to environmental changes. Figure 7 shows the architecture of the PSO-based tuning procedure. The following sections detail the experimental protocol used for flight control optimisation and present the results and discussion of the findings.

4.2.1. FCO Experimental Protocol

To evaluate and optimise the flight control parameters of the drones, we adopted a meticulous experimental protocol designed to ensure safety and the effective convergence of the PSO-based algorithm (Algorithm 1). The protocol involves a series of steps to systematically assess and fine-tune the control parameters, thereby enhancing the drone’s stability and performance in forestry environments. The following steps outline the experimental protocol:
I.
Initial Setup:
  • Select an appropriate test environment with minimal external disturbances and obstacles.
  • Ensure all safety measures are in place, including the presence of safety personnel and emergency procedures.
  • Prepare a drone by fully charging the batteries and verifying the proper functioning of all sensors and communication systems.
  • Perform pre-flight checks to ensure the drone is ready for take-off, including verifying the integrity of the propulsion system, control surfaces, and sensor calibration.
II.
Execution of Iterations:
  • For each iteration, follow these steps:
    (a)
    Take-off: Safely take off the drone and bring it to a designated hover position at a predefined altitude.
    (b)
    Evaluation of Particles: Run the algorithm online to sequentially evaluate the candidate solutions of all particles within the population; each particle updates the control parameters accordingly. The drone attempts to maintain a stable hover while executing the control parameters associated with each particle.
    (c)
    Record best particle: Record the best known global position g among all particles, i.e., the one with the best fitness function f.
    (d)
    Landing: After evaluating all particles, safely land the drone and power it down.
III.
Battery Swapping/Recharging:
  • Swap the depleted battery with a fully charged one or recharge the battery to ensure uninterrupted operation for the next iteration.
  • Allow the drone to cool down if necessary to prevent overheating.
IV.
Iteration Resumption:
  • Resume the operation for the next iteration by repeating Steps II and III until the algorithm converges to an optimal solution or the predefined number of iterations is reached.

4.2.2. Results and Discussion

During preliminary tests, the drone exhibited significant instability when using the initial reference values for the control gains, derived from the MRS gain preset called "supersoft", making effective operation impractical. To address this issue, the gains were deliberately reduced to very low values as a starting point for the optimisation process. This conservative approach prioritised stability over immediate performance, allowing the system to iteratively adjust and improve through the PSO process. The optimisation improved the overall flight dynamics and precision of the autonomous multi-drone system. Table 2 and Table 3 present, respectively, the PSO parameters used in the FCO experiment and the initial reference values alongside the optimised control parameter values obtained through the adopted process.
The evolution of the optimisation cost is shown in Figure 8b. The PSO result graph illustrates the algorithm's performance in minimising the RMSE between the desired and actual drone positions during take-off (Figure 8a). After a few particle generations, the cost value stabilises; once the stagnation threshold $N_{stg}$ is reached, the optimisation process ends, providing the final gains for the controller.
The results from field experiments demonstrate the efficacy of the proposed PSO-based auto-tuning algorithm in enhancing flight control across various operational scenarios. By minimising the RMSE, the optimisation algorithm achieved significant improvements in trajectory tracking and stabilisation, enabling the drones to maintain precise navigation even in complex forest terrains.

4.3. Collaborative Forestry Mapping

This section outlines the multi-drone PoC developed for efficient forest monitoring and analysis. The next sections provide a detailed account of the study area, a description of the experimental protocol employed using three Scout v3 drones, named α, β, and γ, and the key findings from the multi-drone operations, analysing the effectiveness, accuracy, and scalability of the system.

4.3.1. Forest Site Description

The forest site selected for this study is located in Alfena (Valongo, Portugal) and is characterised by a heterogeneous mix of vegetation types and valley topography (Figure 9a). Spanning approximately 0.4 hectares, the site includes mature trees interspersed with younger growth and open clearings. The area features elevations ranging from 159 to 164 m above sea level, with trees reaching an average height of approximately 21 m and ground vegetation of around 1.8 m. This presents unique challenges for both data collection and drone operation.
The experimental area was selected based on its representativeness of common forestry challenges and its adherence to reforestation guidelines, which require a minimum distance of 5 m between trees. The site exhibits approximately 22% of the expected forestry density under these guidelines, making it well-suited for initial deployment tests. Its rich biodiversity includes various biomass species with distinct physical morphologies, such as diverse tree branch topologies, varying leaf densities, branch spacings, and ground vegetation types. This ecological diversity encompasses coniferous and deciduous species, as well as underbrush with a low biomass density, reflecting the diverse structural and ecological features of the area.
This combination of factors makes the site particularly suitable for assessing the multi-drone proof of concept (PoC), aimed at evaluating the capability of navigating and mapping with multiple drones. Moreover, the ecological diversity and structural complexity of the area enhance its relevance for foreseen forestry management applications, such as forestry inventory tasks like cataloguing plants and fruits, assessing tree and vegetation density, and analysing ground vegetation in specific locations. The site's characteristics ensure that the results of this study, while initially focused on this specific experimental area, will, as the system matures, become broadly applicable to similar ecosystems worldwide.

4.3.2. Experimental Protocol

The experimental protocol for collaborative forestry mapping was designed to evaluate the effectiveness of a multi-drone system in acquiring high-resolution spatial data for 3D mapping. The protocol involved a systematic series of steps, encompassing mission planning, software configurations, field deployment, data acquisition, and post-processing, as follows:
I.
Initial Setup:
  • As a safety mechanism, every drone was equipped with a radio receiver running an open-source 2.4 GHz radio control link connected to the lowest-level FCU. A single centralised radio transmitter was used to broadcast safety commands, solely for triggering take-off and executing emergency landings in case of any malfunction or unexpected issue.
  • Based on the MRS UAV System, a set of predefined launch files was configured to streamline the deployment of all necessary ROS packages, ensuring seamless integration and initialisation of the multi-drone system functionalities.
II.
Mission planning:
  • A collaborative flight strategy was devised using waypoint-based (Figure 9b) path planning to distribute tasks among drones while ensuring minimal overlap and avoiding potential collisions.
III.
Software configurations:
  • The system utilised a fleet of three Scout v3 drones (α, β, and γ), each using its LiDAR to generate a detailed 3D representation of the forest structure. These data were further processed with DCL-LIO-SAM and MRS OctoMap to create an efficient volumetric map for path planning, complemented by GNSS for georeferencing and communication modules for real-time data sharing and coordination.
IV.
Field deployment:
  • A multi-agent framework facilitates autonomous task distribution and path optimisation, while a wireless network enables data transmission between the drones;
  • Field operations were conducted using a swarm formation (Section 3.3.1) to optimise coverage and efficiency during the forestry mapping missions. The drones were to maintain a triangular formation with a height difference of 2 m between them, and drone α was chosen as the leader of the formation.
V.
Data Acquisition:
  • During the data acquisition phase, each drone recorded its own set of data in a standardised ROS bag file format, capturing synchronised streams from the onboard sensors. These bag files served as comprehensive logs of the drones’ flight activities and sensor outputs. The decentralised recording approach ensured that data from each drone were securely stored locally, reducing the risk of data loss during multi-drone operations.
VI.
Post-processing:
  • Following the missions, the bag files were retrieved and analysed in post-processing. This analysis involved extracting key metrics, such as point cloud data for forest structure. The use of individual bag files enabled detailed performance evaluations for each drone and facilitated the integration of datasets to construct a cohesive and high-resolution map of the forest site.

4.3.3. Results and Discussion

The field experiments were conducted to evaluate the performance and effectiveness of the proposed multi-drone system in real-world forest environments. The results highlight the system’s ability to autonomously navigate and map complex terrains while trying to maintain the coordinated behaviour of the swarm (see video of experiments (https://youtu.be/wbh3oNImZIA, accessed on 20 January 2025) and Figure 10).
The assessment focused on qualitative observations of system performance and metrics in key areas, such as localisation, mapping accuracy, swarm coordination, and flight stability. Field observations revealed that the drones were able to autonomously navigate the forest environment, effectively avoiding obstacles and maintaining formation. DCL-LIO-SAM enabled the generation of detailed forest maps, capturing terrain features with discernible precision despite environmental challenges, such as dense vegetation and uneven ground. Figure 11 illustrates the progressive mapping of the environment by a single drone during our field experiment. This figure captures four distinct moments in time, demonstrating how the drone incrementally integrates newly captured features into a cohesive map. The purpose of this figure is to showcase the mapping process as the drone explores the area, highlighting the system’s capability for real-time and progressive map construction.
The use of DCL-LIO-SAM enabled the swarm of drones to collaboratively map and localise within unknown environments. The approach operates effectively under constraints of limited bandwidth and communication range, minimising communication overheads between drones and facilitating efficient loop closure detection between swarm members without the need for continuous connectivity. Inter-loop closures extend the concept of loop closures to interactions between multiple robots, where one drone identifies overlap between its mapping data and another's, enhancing the collective accuracy and consistency of the shared map. In contrast, the effectiveness of intra-loop closures depends on the characteristics of the environment being explored, which influence a drone's ability to recognise previously visited locations. The first inter-loop closures between two pairs of drones are shown in Figure 12.
In the field experiment, keyframes were selected whenever the position or rotation change relative to the previous keyframe exceeded the respective thresholds of 1 m and 0.2 rad. The LiDAR-Iris descriptor was applied with default parameters to detect inter-loop closures, as it presented the best performance in the work of Zhong et al. [7].
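For illustration, this keyframe selection reduces to a simple threshold test; a minimal sketch (the function name is our own):

```python
def is_keyframe(delta_pos_m, delta_rot_rad, pos_thr=1.0, rot_thr=0.2):
    """New keyframe when motion since the last keyframe exceeds either threshold."""
    return delta_pos_m > pos_thr or delta_rot_rad > rot_thr
```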
During the field experiments, the number of inter-loop closures between three drones was quantified to evaluate the collaborative performance of the system. A bar plot (Figure 13) illustrating the frequency of inter-loop closures revealed differences in the contributions of each drone to the overall mapping process.
The drones are identical in both hardware and software configurations, including their sensors and SLAM frameworks. Therefore, these variations are influenced by factors such as the drones’ trajectories, their relative spatial coverage, and the density of overlapping regions in the forest environment. Drone pairs with overlapping paths exhibited higher counts of inter-loop closures, indicating stronger collaborative mapping performance. Conversely, drones that operated in less overlapping regions naturally reported fewer inter-loop closures.
As can be seen in Figure 13, drone β exhibited the highest count of inter-loop closures, as it occupied the middle position in the height stagger of the formation. It is important to emphasise that loop closure detection is invariant to drone rotation because the encoding function is independent of viewpoint: each laser scan is projected to a bird's-eye view discretised into 360 angular and 80 radial bins, as proposed by Zhong et al. [7]. After that, the relative pose transformation of the loop closure candidate is evaluated using the ICP method, and both robots initiate the distributed pose graph optimisation.
These results highlight the importance of trajectory planning and coverage optimisation in distributed collaborative SLAM, demonstrating how swarm dynamics influence mapping precision and inter-drone coordination.
The results presented in Table 4 compare the loop closures achieved by the three drones (α, β, and γ) in terms of total, inter-, and intra-loop closures. Drone α achieved a total of 259 loop closures, with 119 (45.9%) being inter-loop closures, reflecting collaboration with other drones, and 140 (54.1%) being intra-loop closures, corresponding to its own revisited locations. Drone β exhibited a strong emphasis on inter-loop closures, with 159 out of 215 total closures (73.9%), demonstrating significant collaboration within the swarm, while only 56 (26.1%) were intra-loop closures. Conversely, drone γ showed a more balanced distribution, achieving 218 total closures, of which 84 (38.5%) were inter-loop closures and 134 (61.5%) were intra-loop closures. These results highlight variations in the collaborative mapping contributions of the drones, with β showing the highest proportion of inter-loop closures, indicating its active role in inter-drone collaboration during the mapping process.
Swarm coordination was assessed using three key metrics: the formation similarity distance metric proposed by Quan et al. [8], the area of the xy-plane covered by the swarm, and the 3D formation area. Figure 14a shows the variations in the swarm formation over six different moments and the trajectories of each drone.
The xy-plane area metric ($A$) evaluates the 2D spatial distribution of the drones, providing insights into the swarm's efficiency in covering the environment while maintaining formation integrity. It was calculated using the Shoelace formula, also known as Gauss's area formula, a mathematical technique for determining the area of a polygon defined by a series of vertices in a two-dimensional plane. Using the positions of the three drones, the area of the triangular formation is calculated with the following equation:

$$A = \frac{1}{2} \left| x_\alpha (y_\beta - y_\gamma) + x_\beta (y_\gamma - y_\alpha) + x_\gamma (y_\alpha - y_\beta) \right|,$$

where the position of the $i$-th robot is given by the vector $\mathbf{r}_i = [x_i, y_i, z_i] \in \mathbb{R}^3$, with $i \in \{\alpha, \beta, \gamma\}$.
On the other hand, the 3D formation area metric ($B$) quantifies the 3D configuration of the formation by calculating the surface area enclosed by the drones within the operational volume. Given the positions of the three drones, this metric is determined by the following equation:

$$B = \frac{1}{2} \left\| (\mathbf{r}_\beta - \mathbf{r}_\alpha) \times (\mathbf{r}_\gamma - \mathbf{r}_\alpha) \right\|.$$
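Both metrics are straightforward to compute from the drone positions. The sketch below implements $A$ and $B$ in NumPy; the function names and the example positions (consistent with the 4 m spacing and 2 m height stagger used in the protocol) are our own illustrative choices.

```python
import numpy as np

def xy_plane_area(r_a, r_b, r_c):
    """Shoelace area A of the triangle projected onto the xy-plane."""
    (xa, ya), (xb, yb), (xc, yc) = r_a[:2], r_b[:2], r_c[:2]
    return 0.5 * abs(xa * (yb - yc) + xb * (yc - ya) + xc * (ya - yb))

def formation_area_3d(r_a, r_b, r_c):
    """3D triangle area B via the cross product of two edge vectors."""
    return 0.5 * np.linalg.norm(np.cross(r_b - r_a, r_c - r_a))

# Example: alpha at the origin, beta and gamma 4 m away with 2 m height steps
r_alpha = np.array([0.0, 0.0, 10.0])
r_beta = np.array([4.0, 0.0, 12.0])
r_gamma = np.array([0.0, 4.0, 14.0])
print(xy_plane_area(r_alpha, r_beta, r_gamma))     # 8.0 m^2
print(formation_area_3d(r_alpha, r_beta, r_gamma)) # 12.0 m^2
```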
The reference values were calculated taking drone α as the leader, with an xy-plane distance of 4 m between drones and a height offset of between 2 m and 4 m for improved safety. Table 5 summarises the results:
Although these metrics provide valuable insights into the system’s performance, they are not compared against alternative methods as the primary focus of this study is the feasibility and functionality of the proposed proof of concept. The worst result occurred at point E, due to the complexity of the environment, such as the presence of bushes and trees, which disrupted the formation.
Finally, the global map was built by integrating mapping data from all drones, demonstrating the effectiveness of the collaborative SLAM framework in combining individual contributions into a unified representation (Figure 15).
The results show that the contributions of each drone to the global map are similar, reflecting the close proximity in which the drones operated during the mapping task. This overlap not only ensured redundancy in certain regions, improving the robustness and accuracy of the final map, but also reinforced the system’s ability to align and integrate data from different drones despite variations in specific loop closure patterns or trajectories.

5. Implications and Future Work

The results obtained from the multi-drone PoC for forest mapping highlight the significant potential of integrating advanced localisation, mapping, control, and swarm formation frameworks within a robust and collaborative UAV system. This section delves into the broader implications of our findings and outlines the future work required to enhance and extend the capabilities of our approach. We begin by discussing the integration of our work with the OPENSWARM code base, emphasising the critical capabilities that facilitate seamless operation within this framework. We then explore potential improvements that could further optimise system performance, followed by a discussion on the broader applications of our technology beyond forestry. Finally, we propose future research directions that could address the current limitations and open new avenues for the deployment of collaborative drone systems in various domains.

5.1. Integration with OPENSWARM Code Base

The integration of our multi-drone PoC system with the OPENSWARM code base (https://github.com/openswarm-eu, accessed on 20 January 2025) is pivotal in leveraging the full potential of our framework within a collaborative and scalable ecosystem. This integration focuses on critical capabilities that facilitate seamless operation, enhance performance, and ensure robustness. Below, we outline the main code base contributions to be considered, their corresponding roles within the OPENSWARM architecture, and how these capabilities could be developed to integrate seamlessly into the ecosystem presented here:
  • Application Performance Monitoring: Monitoring and managing the performance of applications running on the drones is essential for optimising their operation. We propose to enable the OPENSWARM telemetry framework through existing packages, such as rosmon (https://github.com/xqms/rosmon, accessed on 20 January 2025) or diagnostics (https://github.com/ros/diagnostics, accessed on 20 January 2025), to create a comprehensive monitoring solution.
  • Collaborative AI Task Distribution: Efficient task distribution among swarm members is critical for maximising collective performance. In this work, we integrate a Swarm Formation package as a preliminary step. Future improvements could include implementing self-organising maps (SOMs) for coordinating task allocation dynamically based on real-time environmental inputs and drone capabilities [59].
  • Energy-Aware Collaborative Task Scheduler: Efficient scheduling of tasks based on available energy resources ensures optimal resource utilisation across the swarm. This capability will be achieved through an energy-aware task scheduler that evaluates each drone’s current energy levels in real time, dynamically assigning tasks that align with each unit’s energy availability and the mission’s overall energy budget.
These capabilities collectively contribute to a robust and scalable multi-drone system, aligning with the OPENSWARM PoC’s goals and ensuring efficient operation in forestry and other applications. The integration of these components will be thoroughly developed, tested, and validated through our experimental protocols, further enhancing the capabilities and reliability of the OPENSWARM ecosystem.

5.2. Potential Improvements

While the proposed multi-drone system for forestry applications demonstrates a robust integration of forestry drone navigation, collaborative mapping, and swarm robotics, there are several areas where further improvements can enhance its performance, scalability, and robustness. Below, we outline some potential improvements and a course of action to achieve them.
  • Enhanced Sensor Fusion Techniques: Although the integration of LiDAR, IMU, and GNSS data has proven effective in operating in forestry scenarios, further advancements in sensor fusion techniques could improve the system’s accuracy and robustness. Incorporating additional sensors, such as multispectral cameras, can provide richer environmental data, aiding in tasks, such as species identification and health monitoring of forest areas. Advanced algorithms, like deep learning-based sensor fusion, could be explored to optimally combine data from these diverse sensors.
  • Adaptive Control Algorithms: While the current SE(3) control framework offers robust performance, adaptive control algorithms could be developed to dynamically adjust control parameters in real time based on environmental conditions and mission requirements. Techniques like model predictive control and reinforcement learning could be employed to enhance the adaptability and efficiency of the flight control system, ensuring stable operation even in rapidly changing forest environments.
  • Advanced Collaborative Algorithms: Enhancing the collaborative capabilities of the drone swarm through more sophisticated algorithms for task allocation and coordination can improve the overall efficiency and effectiveness of the system. Exploring game theory-based approaches, market-based mechanisms, and advanced AI-driven strategies for dynamic task assignment can optimise resource utilisation and mission success rates.
  • Robust Communication Networks: While dependable WiFi networking has been integrated into the system, it has not been the focus of this work. Therefore, further improvements in communication protocols and ad hoc infrastructure could enhance reliability and data throughput. Research into multi-hop communication networks, adaptive bandwidth allocation, and fault-tolerant communication strategies could ensure continuous and stable connectivity among drones, even in challenging environments with high signal attenuation.
  • Scalability and Modular Architecture: To support larger and more diverse drone swarms, the system architecture could be made more modular and scalable. Implementing microservice architecture and containerisation (e.g., using Singularity as an approach already adopted by the authors) could facilitate easier deployment, scaling, and maintenance of the system. This approach would allow new functionalities to be added or existing ones to be updated without disrupting the overall system operation.
  • User-Friendly Interfaces and Tools: Developing intuitive user interfaces and tools for mission planning, monitoring, and analysis could improve the usability of the system for end-users, such as forest managers and researchers. Graphical interfaces for swarm programming, real-time visualisation of drone data, and easy-to-use mission planning tools would enhance user experience and operational efficiency.
  • Migrating from ROS1 to ROS2: This is particularly beneficial for multi-robot systems due to several key improvements [60]. ROS2 eliminates the need for the central ROS Master node, enabling inherently distributed communication. This avoids the bottlenecks and single points of failure that hinder scalability and robustness in ROS1, which this work mitigates by adopting the nimbro_network package. Additionally, ROS2 integrates the Data Distribution Service (DDS) as its communication middleware, which offers enhanced support for real-time systems and quality-of-service configurations. However, this migration would require additional effort to ensure reliable data exchange over high-latency, low-quality, dynamic networks, for which ROS2, as it currently stands, is still not fully prepared.
By addressing these potential improvements, the proposed multi-drone system can achieve higher levels of performance, reliability, and scalability, ultimately contributing to more effective and sustainable forest-management practices. Future research and development efforts should focus on these areas to unlock the full potential of autonomous drone swarms in forestry and beyond.

5.3. Broader Applications

While the primary focus of the proposed multi-drone system is on forestry applications, the underlying capabilities and architecture have the potential to be adapted and applied across a wide range of other sectors. The versatility and robustness of the technologies developed can address various challenges in different domains, extending the impact and benefits of the system. Below, we explore some broader applications of the multi-drone system:
  • Agriculture: In agriculture, the multi-drone system can be utilised for precision farming practices [61]. Drones equipped with multispectral cameras can monitor crop health, detect pest infestations, and assess soil moisture levels. The system’s ability to perform collaborative mapping and real-time data analysis can optimise irrigation schedules, apply fertilisers precisely, and improve overall crop yields.
  • Disaster Response and Management: During natural disasters, such as wildfires, earthquakes, and floods, the multi-drone system can play a critical role in search and rescue operations [62]. Drones can quickly cover large areas to locate survivors, assess damage, and provide real-time situational awareness to emergency responders. The collaborative capabilities of the system ensure efficient coordination and resource allocation, enhancing the effectiveness of disaster response efforts.
  • Environmental Monitoring and Conservation: Beyond forestry, the multi-drone system can be employed for broader environmental monitoring and conservation efforts [63]. Drones can track wildlife populations, monitor habitats, and assess the health of ecosystems. They can also detect and track illegal activities, such as poaching and logging, providing crucial data for conservation authorities. The system’s real-time mapping and monitoring capabilities can support efforts to protect endangered species and preserve biodiversity.
  • Urban Planning and Infrastructure Inspection: In urban areas, the multi-drone system can assist in planning and managing infrastructure [64]. Drones can conduct detailed inspections of buildings, bridges, and other critical infrastructure, identifying structural issues and maintenance needs. The system’s ability to create accurate 3D maps can aid in urban planning and development projects, ensuring efficient use of space and resources. Drones can also monitor traffic patterns and air quality, contributing to smarter and more sustainable cities.
  • Logistics and Supply Chain Management: The logistics industry can benefit from the multi-drone system through enhanced delivery and inventory management [65]. Drones can autonomously transport goods between warehouses, distribution centres, and customer locations, reducing delivery times and operational costs. In large warehouses, drones can perform inventory checks and manage stock levels, improving supply chain efficiency. The collaborative nature of the system ensures seamless coordination and optimisation of logistics operations.
  • Energy Sector: In the energy sector, drones can be used for inspecting and maintaining infrastructure such as power lines, wind turbines, and solar panels [66]. The system’s ability to navigate complex environments and perform detailed inspections can help identify faults and schedule maintenance activities, reducing downtime and improving the reliability of energy supply. Drones can also monitor pipeline networks and detect leaks, enhancing the safety and efficiency of energy distribution.
  • Public Safety and Law Enforcement: Law enforcement agencies can leverage the multi-drone system for surveillance, crowd monitoring, and crime prevention [67]. Drones can provide real-time aerial views of public events, monitor traffic violations, and assist in tracking suspects. The system’s advanced capabilities in real-time data analysis and collaborative operations can enhance the effectiveness of public safety measures and support law enforcement activities.
The proposed multi-drone system’s robust architecture and advanced capabilities make it highly adaptable for a variety of broader applications beyond forestry. Future work should explore these applications in detail, tailoring the system to meet specific needs and challenges in each domain.

6. Conclusions

This study presents a comprehensive approach to leveraging multi-drone systems for efficient and autonomous forest mapping. The integration of advanced frameworks, namely DCL-LIO-SAM for collaborative multi-drone localisation and mapping, the MRS UAV System for robust control and navigation, and the Swarm Formation framework enabling drones to maintain a given formation during operations, highlights the potential of these technologies to revolutionise forestry management. By addressing key challenges, such as real-time data processing and interoperability, the proposed system demonstrates significant advancements in the field of autonomous forestry drones, supporting foreseen forestry management applications such as forestry inventory tasks like cataloguing plants and fruits, assessing tree and vegetation density, and analysing ground vegetation in specific locations.
Field experiments validate the system’s capability to operate as a coordinated swarm, achieving accurate mapping and robust navigation in real-world forest conditions. The results highlight the potential of integrating distributed collaborative SLAM with adaptive flight control for scalable and sustainable forestry management applications. Future work will explore expanding the system’s scalability and adaptability, including enhanced communication protocols and more complex mission scenarios, to further contribute to the development of autonomous multi-robot systems.

Author Contributions

Conceptualisation, A.G.A. and M.S.C.; methodology, A.G.A. and C.A.P.P.; software, A.G.A. and C.A.P.P.; validation, A.G.A.; investigation, A.G.A., M.S.C. and R.P.R.; resources, A.G.A. and M.S.C.; writing—original draft preparation, A.G.A. and C.A.P.P.; writing—review and editing, M.S.C. and R.P.R.; visualisation, A.G.A. and C.A.P.P.; supervision, M.S.C. and R.P.R.; project administration, M.S.C.; funding acquisition, M.S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This document is issued within the frame and for the purpose of the OpenSwarm project. This project has received funding from the European Union’s Horizon Europe Framework Programme under Grant Agreement No. 101093046. Views and opinions expressed are however those of the author(s) only and the European Commission is not responsible for any use that may be made of the information it contains.

Data Availability Statement

The data and source code supporting the findings of this study are publicly available. The dataset used in this research has been deposited in the Zenodo repository and can be accessed at https://zenodo.org/records/14701641?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6ImU4MGQzMGJhLTVkZmUtNGMzNS05M2U5LTJhMGEwNzBiZWUwYSIsImRhdGEiOnt9LCJyYW5kb20iOiIwNTg3MTExYjk4MGNmMTBjOTczZmYxZjZkOThmYTUwZCJ9.pl41ckDv1NoEayNntY7d-2j2M0vUtlagSBxTEsLRvYt5gju4N1p_lbACJKuHsH85VUc43U07qHb5w2DU0E4vfA (accessed on 20 January 2025). Additionally, the source code developed is available on GitHub at https://github.com/openswarm-eu (accessed on 20 January 2025). These resources provide full transparency and reproducibility of the reported results.

Acknowledgments

This work was supported by Ingeniarius, Ltd. who provided the necessary resources (drones and other hardware).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

CSLAM: Collaborative Simultaneous Localization and Mapping
DARPA: Defense Advanced Research Projects Agency
DDS: Data Distribution Service
DCL-SLAM: Distributed Collaborative LiDAR Simultaneous Localization and Mapping
DLIO: Direct LiDAR-Inertial Odometry
DLO: Direct LiDAR Odometry
FCU: Flight Control Unit
FCO: Flight Control Optimisation
GICP: Generalized Iterative Closest Point
GNSS: Global Navigation Satellite System
ICP: Iterative Closest Point
iG-LIO: Incremental Generalized Iterative Closest Point LiDAR-Inertial Odometry
IMU: Inertial Measurement Unit
LiDAR: Light Detection and Ranging
Light-LOAM: Lightweight LiDAR Odometry and Mapping
LoRa: Long Range (wireless communication protocol)
LIO: LiDAR-Inertial Odometry
LIO-SAM: LiDAR-Inertial Odometry via Smoothing and Mapping
LSD-SLAM: Large-Scale Direct Simultaneous Localization and Mapping
MAVROS: MAVLink Extending ROS
MPC: Model Predictive Control
MRS UAV: Multi-Robot System Unmanned Aerial Vehicle
OPENSWARM: Open-source collaborative framework for Swarm Robotics
ORB-SLAM: Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping
PGO: Pose Graph Optimization
PoC: Proof of Concept
PSO: Particle Swarm Optimization
PX4: Open-source Flight Stack for UAVs
RMSE: Root Mean Square Error
ROS: Robot Operating System
RTK: Real-Time Kinematic (positioning)
SCD: Scan Context Descriptors
SLAM: Simultaneous Localization and Mapping
SOM: Self-Organizing Map
SR-LIVO: Sparse Robust LiDAR-Inertial Visual Odometry
TCP: Transmission Control Protocol
UDP: User Datagram Protocol
UAV: Unmanned Aerial Vehicle

References

  1. Partheepan, S.; Sanati, F.; Hassan, J. Autonomous unmanned aerial vehicles in bushfire management: Challenges and opportunities. Drones 2023, 7, 47. [Google Scholar] [CrossRef]
  2. Merz, M.; Pedro, D.; Skliros, V.; Bergenhem, C.; Himanka, M.; Houge, T.; Matos-Carvalho, J.P.; Lundkvist, H.; Cürüklü, B.; Hamrén, R.; et al. Autonomous UAS-based agriculture applications: General overview and relevant European case studies. Drones 2022, 6, 128. [Google Scholar] [CrossRef]
  3. Zhang, J.; Hu, J.; Lian, J.; Fan, Z.; Ouyang, X.; Ye, W. Seeing the forest from drones: Testing the potential of lightweight drones as a tool for long-term forest monitoring. Biol. Conserv. 2016, 198, 60–69. [Google Scholar] [CrossRef]
  4. Ferreira, J.F.; Portugal, D.; Andrada, M.E.; Machado, P.; Rocha, R.P.; Peixoto, P. Sensing and Artificial Perception for Robots in Precision Forestry: A Survey. Robotics 2023, 12, 139. [Google Scholar] [CrossRef]
  5. Ecke, S.; Dempewolf, J.; Frey, J.; Schwaller, A.; Endres, E.; Klemmt, H.J.; Tiede, D.; Seifert, T. UAV-based forest health monitoring: A systematic review. Remote Sens. 2022, 14, 3205. [Google Scholar] [CrossRef]
  6. Baca, T.; Petrlik, M.; Vrba, M.; Spurny, V.; Penicka, R.; Hert, D.; Saska, M. The MRS UAV system: Pushing the frontiers of reproducible research, real-world deployment, and education with autonomous unmanned aerial vehicles. J. Intell. Robot. Syst. 2021, 102, 26. [Google Scholar] [CrossRef]
  7. Zhong, S.; Qi, Y.; Chen, Z.; Wu, J.; Chen, H.; Liu, M. DCL-SLAM: A distributed collaborative LIDAR SLAM framework for a robotic swarm. IEEE Sens. J. 2023, 24, 4786–4797. [Google Scholar] [CrossRef]
  8. Quan, L.; Yin, L.; Xu, C.; Gao, F. Distributed swarm trajectory optimization for formation flight in dense environments. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4979–4985. [Google Scholar]
  9. Ahmed, F.; Mohanta, J.; Keshari, A.; Yadav, P.S. Recent advances in unmanned aerial vehicles: A review. Arab. J. Sci. Eng. 2022, 47, 7963–7984. [Google Scholar] [CrossRef]
  10. Banu, T.P.; Borlea, G.F.; Banu, C. The use of drones in forestry. J. Environ. Sci. Eng. B 2016, 5, 557–562. [Google Scholar]
  11. Trybała, P.; Morelli, L.; Remondino, F.; Farrand, L.; Couceiro, M.S. Under-Canopy Drone 3D Surveys for Wild Fruit Hotspot Mapping. Drones 2024, 8, 577. [Google Scholar] [CrossRef]
  12. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  13. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [Google Scholar] [CrossRef]
  14. Mokroš, M.; Tabačák, M.; Lieskovskỳ, M.; Fabrika, M. Unmanned aerial vehicle use for wood chips pile volume estimation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 953–956. [Google Scholar] [CrossRef]
  15. Kinaneva, D.; Hristov, G.; Raychev, J.; Zahariev, P. Early forest fire detection using drones and artificial intelligence. In Proceedings of the 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 20–24 May 2019; pp. 1060–1065. [Google Scholar]
  16. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2012, 3, 397–404. [Google Scholar] [CrossRef]
  17. Chen, K.; Lopez, B.T.; Agha-mohammadi, A.a.; Mehta, A. Direct LiDAR Odometry: Fast Localization With Dense Point Clouds. IEEE Robot. Autom. Lett. 2022, 7, 2000–2007. [Google Scholar] [CrossRef]
  18. Chen, K.; Nemiroff, R.; Lopez, B.T. Direct LiDAR-Inertial Odometry: Lightweight LIO with Continuous-Time Motion Correction. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 3983–3989. [Google Scholar] [CrossRef]
  19. Bai, C.; Xiao, T.; Chen, Y.; Wang, H.; Zhang, F.; Gao, X. Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels. IEEE Robot. Autom. Lett. 2022, 7, 4861–4868. [Google Scholar] [CrossRef]
  20. He, D.; Xu, W.; Chen, N.; Kong, F.; Yuan, C.; Zhang, F. Point-LIO: Robust High-Bandwidth Light Detection and Ranging Inertial Odometry. Adv. Intell. Syst. 2023, 5, 2200459. [Google Scholar] [CrossRef]
  21. Chen, Z.; Xu, Y.; Yuan, S.; Xie, L. iG-LIO: An Incremental GICP-based Tightly-coupled LiDAR-inertial Odometry. IEEE Robot. Autom. Lett. 2024, 9, 1883–1890. [Google Scholar] [CrossRef]
  22. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  23. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  24. Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 834–849. [Google Scholar]
  25. Yi, S.; Lyu, Y.; Hua, L.; Pan, Q.; Zhao, C. Light-LOAM: A Lightweight LiDAR Odometry and Mapping Based on Graph-Matching. IEEE Robot. Autom. Lett. 2024, 9, 3219–3226. [Google Scholar] [CrossRef]
  26. Yuan, Z.; Deng, J.; Ming, R.; Lang, F. SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping With Sweep Reconstruction. IEEE Robot. Autom. Lett. 2024, 9, 5110–5117. [Google Scholar] [CrossRef]
  27. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
  28. Garforth, J.; Webb, B. Visual appearance analysis of forest scenes for monocular SLAM. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 1794–1800. [Google Scholar]
29. Tomaštík, J.; Saloň, Š.; Tunák, D.; Chudý, F.; Kardoš, M. Tango in forests—An initial experience of the use of the new Google technology in connection with forest inventory tasks. Comput. Electron. Agric. 2017, 141, 109–117. [Google Scholar] [CrossRef]
  30. Chang, Y.; Ebadi, K.; Denniston, C.E.; Ginting, M.F.; Rosinol, A.; Reinke, A.; Palieri, M.; Shi, J.; Chatterjee, A.; Morrell, B.; et al. LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments. arXiv 2022, arXiv:2205.13135. [Google Scholar] [CrossRef]
  31. Trujillo, J.C.; Munguia, R.; Guerra, E.; Grau, A. Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments. Sensors 2018, 18, 1351. [Google Scholar] [CrossRef]
  32. Mahdoui, N.; Frémont, V.; Natalizio, E. Communicating multi-uav system for cooperative SLAM-based exploration. J. Intell. Robot. Syst. 2020, 98, 325–343. [Google Scholar] [CrossRef]
  33. Schmuck, P.; Chli, M. Multi-UAV collaborative monocular SLAM. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3863–3870. [Google Scholar]
  34. Tian, Y.; Liu, K.; Ok, K.; Tran, L.; Allen, D.; Roy, N.; How, J.P. Search and rescue under the forest canopy using multiple UAVs. Int. J. Robot. Res. 2020, 39, 1201–1221. [Google Scholar] [CrossRef]
  35. Cui, J.Q.; Lai, S.; Dong, X.; Liu, P.; Chen, B.M.; Lee, T.H. Autonomous navigation of UAV in forest. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 726–733. [Google Scholar]
  36. Balamurugan, G.; Valarmathi, J.; Naidu, V. Survey on UAV navigation in GPS denied environments. In Proceedings of the 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), Paralakhemundi, India, 3–5 October 2016; pp. 198–204. [Google Scholar]
  37. Zhilenkov, A.A.; Chernyi, S.G.; Sokolov, S.S.; Nyrkov, A.P. Intelligent autonomous navigation system for UAV in randomly changing environmental conditions. J. Intell. Fuzzy Syst. 2020, 38, 6619–6625. [Google Scholar] [CrossRef]
  38. Wang, C.; Liu, P.; Zhang, T.; Sun, J. The adaptive vortex search algorithm of optimal path planning for forest fire rescue UAV. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 400–403. [Google Scholar]
  39. Pestana, J.; Mellado-Bataller, I.; Sanchez-Lopez, J.L.; Fu, C.; Mondragón, I.F.; Campoy, P. A general purpose configurable controller for indoors and outdoors GPS-denied navigation for multirotor unmanned aerial vehicles. J. Intell. Robot. Syst. 2014, 73, 387–400. [Google Scholar] [CrossRef]
  40. Brambilla, M.; Ferrante, E.; Birattari, M.; Dorigo, M. Swarm robotics: A review from the swarm engineering perspective. Swarm Intell. 2013, 7, 1–41. [Google Scholar] [CrossRef]
  41. Şahin, E. Swarm robotics: From sources of inspiration to domains of application. In Proceedings of the International Workshop on Swarm Robotics, Santa Monica, CA, USA, 17 July 2004; pp. 10–20. [Google Scholar]
  42. Alsammak, I.L.H.; Mahmoud, M.A.; Aris, H.; AlKilabi, M.; Mahdi, M.N. The use of swarms of unmanned aerial vehicles in mitigating area coverage challenges of forest-fire-extinguishing activities: A systematic literature review. Forests 2022, 13, 811. [Google Scholar] [CrossRef]
  43. Weinstein, A.; Cho, A.; Loianno, G.; Kumar, V. Visual Inertial Odometry Swarm: An Autonomous Swarm of Vision-Based Quadrotors. IEEE Robot. Autom. Lett. 2018, 3, 1801–1807. [Google Scholar] [CrossRef]
  44. Madridano, Á.; Al-Kaff, A.; Flores, P.; Martín, D.; de la Escalera, A. Software architecture for autonomous and coordinated navigation of uav swarms in forest and urban firefighting. Appl. Sci. 2021, 11, 1258. [Google Scholar] [CrossRef]
  45. Hu, J.; Niu, H.; Carrasco, J.; Lennox, B.; Arvin, F. Fault-tolerant cooperative navigation of networked UAV swarms for forest fire monitoring. Aerosp. Sci. Technol. 2022, 123, 107494. [Google Scholar] [CrossRef]
  46. Gupta, L.; Jain, R.; Vaszkun, G. Survey of important issues in UAV communication networks. IEEE Commun. Surv. Tutorials 2015, 18, 1123–1152. [Google Scholar] [CrossRef]
  47. Venturini, F.; Mason, F.; Pase, F.; Chiariotti, F.; Testolin, A.; Zanella, A.; Zorzi, M. Distributed Reinforcement Learning for Flexible and Efficient UAV Swarm Control. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 955–969. [Google Scholar] [CrossRef]
  48. Peng, Q.; Wu, H.; Xue, R. Review of Dynamic Task Allocation Methods for UAV Swarms Oriented to Ground Targets. Complex Syst. Model. Simul. 2021, 1, 163–175. [Google Scholar] [CrossRef]
  49. Mohan, M.; Richardson, G.; Gopan, G.; Aghai, M.M.; Bajaj, S.; Galgamuwa, G.P.; Vastaranta, M.; Arachchige, P.S.P.; Amorós, L.; Corte, A.P.D.; et al. UAV-supported forest regeneration: Current trends, challenges and implications. Remote Sens. 2021, 13, 2596. [Google Scholar] [CrossRef]
  50. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robot. 2013, 34, 189–206. [Google Scholar] [CrossRef]
  51. Carlone, L.; Aragues, R.; Castellanos, J.A.; Bona, B. A fast and accurate approximation for planar pose graph optimization. Int. J. Robot. Res. 2014, 33, 965–987. [Google Scholar] [CrossRef]
  52. Bondy, J.A.; Murty, U.S.R. Graph Theory; Springer: New York, NY, USA, 2008. [Google Scholar]
  53. Zhang, Z. Iterative closest point (ICP). In Computer Vision: A Reference Guide; Springer: New York, NY, USA, 2021; pp. 718–720. [Google Scholar]
  54. Couceiro, M.; Ghamisi, P. Fractional-Order Darwinian PSO. In Fractional Order Darwinian Particle Swarm Optimization: Applications and Evaluation of an Evolutionary Algorithm; Springer International Publishing: Cham, Switzerland, 2016; pp. 11–20. [Google Scholar] [CrossRef]
  55. Lee, T.; Leok, M.; McClamroch, N.H. Geometric tracking control of a quadrotor UAV on SE(3). In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 5420–5425. [Google Scholar] [CrossRef]
  56. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  57. Imran, M.; Hashim, R.; Khalid, N.E.A. An Overview of Particle Swarm Optimization Variants. Procedia Eng. 2013, 53, 491–496. [Google Scholar] [CrossRef]
58. Trybała, P.; Morelli, L.; Remondino, F.; Couceiro, M.S. Towards robotization of foraging wild fruits: A multi-camera drone for mapping berries under canopy. In Proceedings of the European Robotics Forum (ERF 2024), Rimini, Italy, 13–15 March 2024. [Google Scholar]
  59. Chen, B.W.; Rho, S. Autonomous tactical deployment of the UAV array using self-organizing swarm intelligence. IEEE Consum. Electron. Mag. 2020, 9, 52–56. [Google Scholar] [CrossRef]
  60. Castilho, J.P.C. ROS 2.0–Study and Evaluation of ROS 2 in comparison with ROS 1. Master’s Thesis, University of Coimbra, Coimbra, Portugal, 2022. [Google Scholar]
  61. Hermanus, D.R.; Supangkat, S.H.; Hidayat, F. Designing an Advanced Situational Awareness Platform Using Intelligent Multi-Drones for Smart Farming Towards Agriculture 5.0. In Proceedings of the 2024 International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 4–5 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
  62. Mohd Daud, S.M.S.; Mohd Yusof, M.Y.P.; Heo, C.C.; Khoo, L.S.; Chainchel Singh, M.K.; Mahmood, M.S.; Nawawi, H. Applications of drone in disaster management: A scoping review. Sci. Justice 2022, 62, 30–42. [Google Scholar] [CrossRef]
  63. Manfreda, S.; McCabe, M.F.; Miller, P.E.; Lucas, R.; Pajuelo Madrigal, V.; Mallinis, G.; Ben Dor, E.; Helman, D.; Estes, L.; Ciraolo, G.; et al. On the use of unmanned aerial systems for environmental monitoring. Remote Sens. 2018, 10, 641. [Google Scholar] [CrossRef]
  64. Muñoz, J.; López, B.; Quevedo, F.; Monje, C.A.; Garrido, S.; Moreno, L.E. Multi UAV Coverage Path Planning in Urban Environments. Sensors 2021, 21, 7365. [Google Scholar] [CrossRef] [PubMed]
  65. Kim, K.; Kim, S.; Kim, J.; Jung, H. Drone-Assisted Multimodal Logistics: Trends and Research Issues. Drones 2024, 8, 468. [Google Scholar] [CrossRef]
  66. Shafiee, M.; Zhou, Z.; Mei, L.; Dinmohammadi, F.; Karama, J.; Flynn, D. Unmanned Aerial Drones for Inspection of Offshore Wind Turbines: A Mission-Critical Failure Analysis. Robotics 2021, 10, 26. [Google Scholar] [CrossRef]
  67. Quamar, M.M.; Al-Ramadan, B.; Khan, K.; Shafiullah, M.; El Ferik, S. Advancements and Applications of Drone-Integrated Geographic Information System Technology—A Review. Remote Sens. 2023, 15, 5039. [Google Scholar] [CrossRef]
Figure 1. System architecture proposed for the multi-drone PoC system.
Figure 2. The world frame $\mathcal{W} = \{\hat{e}_1, \hat{e}_2, \hat{e}_3\}$, in which the position and orientation of the drone are expressed by the translation $\mathbf{r} = [x, y, z]^{\top}$ and the rotation $\mathbf{R}(\phi, \theta, \psi)$ to the body frame $\mathcal{B} = \{\hat{b}_1, \hat{b}_2, \hat{b}_3\}$. The drone heading vector $\mathbf{h}$, the projection of $\hat{b}_1$ onto the plane spanned by $\hat{e}_1, \hat{e}_2$, forms the heading angle $\eta = \mathrm{atan2}(\hat{b}_1^{\top}\hat{e}_2, \hat{b}_1^{\top}\hat{e}_1) = \mathrm{atan2}(h_{(2)}, h_{(1)})$. Figure based on [6].
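For readers implementing this convention, the heading extraction reduces to a few lines. The following is a minimal sketch, assuming an ENU world frame and a body-to-world rotation matrix whose columns are the body axes expressed in the world frame; the function name is illustrative, not taken from the authors' code.

```python
import numpy as np

def heading_angle(R: np.ndarray) -> float:
    """Heading angle eta of the drone, given the body-to-world rotation R.

    The body x-axis b1 is the first column of R; its projection h onto the
    horizontal plane span{e1, e2} yields eta = atan2(h_(2), h_(1)), exactly
    as in the Figure 2 caption.
    """
    b1 = R[:, 0]  # body x-axis expressed in the world frame
    return float(np.arctan2(b1[1], b1[0]))

# Sanity check: a pure yaw rotation of 30 degrees gives eta = 30 degrees.
psi = np.deg2rad(30.0)
R = np.array([[np.cos(psi), -np.sin(psi), 0.0],
              [np.sin(psi),  np.cos(psi), 0.0],
              [0.0,          0.0,         1.0]])
print(np.rad2deg(heading_angle(R)))  # -> 30.0
```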
Figure 3. The filters simultaneously estimate the states and can be switched or selected by the user/arbiter. Figure based on [6].
Figure 4. Simulation of the swarm formation in the forest environment. (a) OctoMap representation of a simulated forest environment in Gazebo, shown using a colour gradient that varies with height. (b) Swarm formation in the simulation environment. The small dots in three colours (pink, green, and blue) represent the global maps of each drone. The square markers indicate the reference samples from the OctoMap planner's desired trajectory, the vectors correspond to the outputs of the MPC tracker, and the solid lines depict the actual path of each drone. The solid red lines represent the current swarm formation shape. Together, these visualisations demonstrate the effectiveness of the simulation tools in evaluating and refining the multi-drone PoC system prior to field experiments.
Figure 5. Global map service. (a) Overview of the global map integration process, where the local maps from each drone are collected and aligned using the Iterative Closest Point (ICP) algorithm to create a unified global map of the environment. (b) Resulting integrated global map, generated by combining the local maps of three drones (drone α, drone β, and drone γ) using the ICP algorithm, showcasing complete coverage of the surveyed area.
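The pairwise alignment behind such a global map service can be sketched with an off-the-shelf ICP implementation. The snippet below is a minimal illustration using Open3D's point-to-point ICP; Open3D, the file names, and the correspondence threshold are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

def align_and_merge(global_map: o3d.geometry.PointCloud,
                    local_map: o3d.geometry.PointCloud,
                    max_corr_dist: float = 1.0) -> o3d.geometry.PointCloud:
    """Register a drone's local map to the global map with point-to-point
    ICP, then merge the transformed cloud into the global map."""
    result = o3d.pipelines.registration.registration_icp(
        local_map, global_map, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    local_map.transform(result.transformation)  # apply the estimated pose
    return global_map + local_map               # concatenate the clouds

# Hypothetical usage: fuse the local maps of drones alpha, beta, and gamma.
files = ["drone_alpha.pcd", "drone_beta.pcd", "drone_gamma.pcd"]
clouds = [o3d.io.read_point_cloud(f) for f in files]
global_map = clouds[0]
for cloud in clouds[1:]:
    global_map = align_and_merge(global_map, cloud)
global_map = global_map.voxel_down_sample(voxel_size=0.2)  # thin duplicates
```

In practice a reasonable initial guess (e.g., from GNSS-RTK or the inter-loop closures) is needed, since plain ICP only converges locally.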
Figure 6. Scout v3.
Figure 7. Architecture of the PSO-based tuning procedure for the SE(3) controller. The setup consists of a drone running ROS for real-time control and state feedback, while a laptop executes the Particle Swarm Optimization (PSO) algorithm in MATLAB. Communication between the drone and the laptop enables iterative tuning of the controller parameters to optimise performance.
Figure 8. Flight control optimisation process. (a) Real drone performing PSO-based auto-tuning. (b) Particle Swarm Optimization (PSO) convergence graph.
Figure 9. Forest site description. (a) View of the forest site from the drone's perspective. (b) Aerial view of the forest site, showcasing the diverse canopy structure, ranging from dense evergreen stands to open clearings.
Figure 10. Images depicting the field experiments in the forest, highlighting the multi-drone system in operation (drone α in red, drone β in green, and drone γ in blue).
Figure 11. Progressive mapping of the environment by a single drone at four distinct moments during the field experiment. The figure illustrates the gradual construction of the map, depicted using a colour gradient that varies with height, as the drone explores the area. Newly captured features are incrementally integrated into the overall representation.
Figure 12. The first inter-loop closures between two pairs of drones. These closures are crucial for cooperative mapping in multi-robot systems, reducing errors that may arise from individual robot uncertainties.
Figure 13. Frequency of inter-loop closures, revealing differences in the contribution of each drone to the overall mapping process.
Figure 14. Trajectories executed by the drones during real experiments. (a) Variations in swarm formations over six distinct moments and overall trajectories of each individual drone. (b) Overlay of the global map (in red) and the trajectories executed by the drones on the forest terrain.
Figure 15. Maps generated by the three drones (drone α, drone β, and drone γ) and the Global Map created by the Global Map Service. For better visualisation, a height threshold was applied and the number of points was reduced.
Table 1. Technical specifications of Scout v3.

| Technical Specification | Description |
|---|---|
| Energy autonomy | 2 LiPo 6S batteries (22.2 V, 4500 mAh) |
| Sensing payload | 3D LiDAR (16-channel), 9-DoF IMU, GNSS-RTK, and an optional back camera (e.g., GoPro) |
| Communication technologies | Dual-band Wireless-AC Wi-Fi 5 (802.11ac), 2.4 GHz radio controller system, and an LTE Cat4 cellular mobile router |
| ROS integration | Ubuntu 20.04 with ROS Noetic Ninjemys on an Intel NUC10i7FNKN (10th-generation i7, 6 cores @ 4.70 GHz) |
| Durability and maneuverability | 800 mm X-shaped carbon-fibre hexacopter frame, KV380 motors, 5 kg weight |
Table 2. PSO parameters used in the FCO experiments.

| Parameter | Description | Initial Value |
|---|---|---|
| $n_{par}$ | Number of parameters | 11 |
| $n$ | Number of sample points | 1500 |
| $N_i$ | Population size | 15 |
| $\omega$ | Inertia weight | 1.0 |
| $c_1$ | Cognitive coefficient | 1.5 |
| $c_2$ | Social coefficient | 1.5 |
| $N_{ger}$ | Number of generations | 30 |
| $N_{stg}$ | Stagnation value | 10 |
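For reference, the values in Table 2 map directly onto a canonical PSO loop [56], as sketched below. This is a generic Python illustration, not the authors' MATLAB tuner: the cost function is a placeholder for the flight-tracking error evaluated over the $n = 1500$ sample points, and the parameter bounds are arbitrary.

```python
import numpy as np

# Values from Table 2
N_PAR, N_POP, N_GER, N_STG = 11, 15, 30, 10  # parameters, particles, generations, stagnation
OMEGA, C1, C2 = 1.0, 1.5, 1.5                # inertia, cognitive, social coefficients

def pso(cost, lo, hi, seed=0):
    """Canonical PSO minimiser [56]; `cost` maps an 11-dimensional gain
    vector to a scalar tracking error (a placeholder, not the authors' cost)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(N_POP, N_PAR))  # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest_f.argmin()
    gbest, gbest_f, stall = pbest[g].copy(), pbest_f[g], 0
    for _ in range(N_GER):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = OMEGA * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f                      # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest_f.argmin()
        if pbest_f[g] < gbest_f:                  # update global best
            gbest, gbest_f, stall = pbest[g].copy(), pbest_f[g], 0
        else:
            stall += 1
        if stall >= N_STG:                        # stagnation stop (Table 2)
            break
    return gbest, gbest_f

# Placeholder cost: squared distance to an arbitrary target gain vector.
target = np.linspace(0.1, 5.0, N_PAR)
best, err = pso(lambda p: float(np.sum((p - target) ** 2)),
                lo=np.zeros(N_PAR), hi=np.full(N_PAR, 20.0))
```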
Table 3. Initial reference values and optimised control parameter values for the SE(3) flight controller using PSO.

| SE(3) Parameter | Description | MRS Supersoft Value | Optimised Value |
|---|---|---|---|
| $k_p$ | Position $xy$ gain | 3.0 | 1.05 |
| $k_v$ | Velocity $xy$ gain | 2.0 | 0.73 |
| $k_a$ | Acceleration $xy$ gain | 0.3 | 0.29 |
| $k_{ib}$ | Body $xy$ integral gain | 0.1 | 0.10 |
| $k_{iw}$ | World $xy$ integral gain | 0.1 | 0.09 |
| $k_{pz}$ | Position $z$ gain | 15.0 | 5.11 |
| $k_{vz}$ | Velocity $z$ gain | 8.0 | 3.14 |
| $k_{az}$ | Acceleration $z$ gain | 1.0 | 0.30 |
| $k_{qrp}$ | Pitch/roll attitude gain | 5.0 | 1.57 |
| $k_{qy}$ | Yaw attitude gain | 5.0 | 1.47 |
| $k_m$ | Mass estimator gain | 5.0 | 0.49 |
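To situate these gains, recall that they parameterise a geometric SE(3) tracking controller in the style of [55]. The sketch below shows only the position-loop thrust computation under an ENU convention, with the optimised gains from Table 3 as defaults; the integral terms ($k_{ib}$, $k_{iw}$), the attitude loop, and the mass estimator are omitted, and all variable names are our own illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def se3_thrust(p, v, R, p_des, v_des, a_des, mass,
               kp=np.array([1.05, 1.05, 5.11]),   # position gains (xy, xy, z)
               kv=np.array([0.73, 0.73, 3.14]),   # velocity gains (xy, xy, z)
               ka=np.array([0.29, 0.29, 0.30])):  # acceleration feed-forward
    """Collective thrust of an SE(3) position loop in the style of [55].

    p, v: current position and velocity; R: body-to-world rotation matrix;
    p_des, v_des, a_des: reference position, velocity, and acceleration.
    """
    e_p = p - p_des  # position error
    e_v = v - v_des  # velocity error
    f_des = (-kp * e_p - kv * e_v                  # PD feedback
             + mass * np.array([0.0, 0.0, G])      # gravity compensation
             + ka * mass * a_des)                  # weighted feed-forward
    # Project the desired force onto the body z-axis (third column of R).
    return float(f_des @ R[:, 2])
```

In [55] the acceleration feed-forward enters with unit weight; exposing it as the gain $k_a$ mirrors the parameterisation tuned in Table 3.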
Table 4. Comparison of loop closures.

| Drone | Total Loop Closures | Inter-Loop Total | Inter-Loop (%) | Intra-Loop Total | Intra-Loop (%) |
|---|---|---|---|---|---|
| α | 259 | 119 | 45.9 | 140 | 54.1 |
| β | 215 | 159 | 73.9 | 56 | 26.1 |
| γ | 218 | 84 | 38.5 | 134 | 61.6 |
Table 5. Comparison of drone formations (Figure 14a) over time and location.

| Formation | Formation Similarity Distance (Equation (7)) | Metric A Value (m²) | Metric A Difference (%) | Metric B Value (m²) | Metric B Difference (%) |
|---|---|---|---|---|---|
| A | 0.37 | 17.40 | 8.75 | 21.96 | 7.65 |
| B | 0.20 | 6.82 | −57.38 | 12.05 | −40.93 |
| C | 0.12 | 14.30 | −10.63 | 22.28 | 9.22 |
| D | 0.30 | 16.49 | 3.06 | 28.62 | 40.29 |
| E | 0.48 | 26.02 | 62.63 | 36.01 | 76.52 |
| F | 0.16 | 11.51 | −28.06 | 24.35 | 19.36 |
| Reference value | 0.00 | 16.00 | - | 20.40 | - |