Article

Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence

1 National Key Laboratory of Electromagnetic Energy, Naval University of Engineering, Wuhan 430033, China
2 East Lake Laboratory, Wuhan 430202, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4752; https://doi.org/10.3390/s25154752
Submission received: 22 May 2025 / Revised: 26 July 2025 / Accepted: 30 July 2025 / Published: 1 August 2025

Abstract

Unmanned platforms such as unmanned aerial vehicles, unmanned ground vehicles, and autonomous underwater vehicles often face challenges of data, device, and model heterogeneity when performing collaborative data processing tasks. Existing research does not simultaneously address issues from these three aspects. To address this issue, this study designs an unmanned platform cluster architecture inspired by the cloud-edge-end model. This architecture integrates federated learning for privacy protection, leverages the advantages of distributed model training, and utilizes edge computing’s near-source data processing capabilities. Additionally, this paper proposes a federated edge intelligence method (DSIA-FEI), which comprises two key components. Based on traditional federated learning, a data sharing mechanism is introduced, in which data is extracted from edge-side platforms and placed into a data sharing platform to form a public dataset. At the beginning of model training, random sampling is conducted from the public dataset and distributed to each unmanned platform, so as to mitigate the impact of data distribution heterogeneity and class imbalance during collaborative data processing in unmanned platforms. Moreover, an intelligent model aggregation strategy based on similarity measurement and loss gradient is developed. This strategy maps heterogeneous model parameters to a unified space via hierarchical parameter alignment, and evaluates the similarity between local and global models of edge devices in real-time, along with the loss gradient, to select the optimal model for global aggregation, reducing the influence of device and model heterogeneity on cooperative learning of unmanned platform swarms. 
This study carried out extensive validation on multiple datasets, and the experimental results showed that the accuracy of the DSIA-FEI proposed in this paper reaches 0.91, 0.91, 0.88, and 0.87 on the FEMNIST, FEAIR, EuroSAT, and RSSCN7 datasets, respectively, which is more than 10% higher than the baseline method. In addition, the number of communication rounds is reduced by more than 40%, which is better than the existing mainstream methods, and the effectiveness of the proposed method is verified.

1. Introduction

With the rapid development of artificial intelligence and Internet of Things technology, unmanned platforms such as unmanned aerial vehicles (UAVs) and unmanned ground vehicles have shown great potential in the fields of environmental monitoring, disaster relief, and logistics and distribution [1]. However, as task complexity increases, a single unmanned platform struggles to meet the needs of large-scale, multi-scenario cooperative operations [2]. Yan et al. proposed a collaborative confrontation decision-making method for heterogeneous drone swarms based on self-play reinforcement learning, which enhanced the intensity of drone confrontation and further optimized maneuvering strategies [3]. Shi et al. verified the autonomous collaborative capabilities of drone swarms in a meta-battlefield domain, demonstrating their situational awareness, intelligent response, and autonomous formation capabilities under complex environments and limited resources [4]. Xue et al. introduced a distributed task assignment algorithm based on coalition formation game theory, which achieved effective task allocation while significantly improving real-time performance and maximizing the efficiency of the swarm [5]. Liu et al. incorporated a quantum decision-making model to address the autonomous decision-making problem in heterogeneous swarms, enhancing the applicability and transferability of drone swarms [6]. However, these studies focus on mission-level decision-making; how to carry out cooperative data processing on unmanned platforms remains insufficiently addressed.
In the scenario of cooperative task execution by unmanned platforms, the cooperative data processing capability of unmanned clusters plays a vital role in ensuring the efficient and accurate completion of tasks [7]. Indu et al. proposed a multi-hop data off-loading scheme, which demonstrated the feasibility of decentralized data processing and off-loading in UAV networks [8]. Wang et al. used UAV time series data and recurrent neural networks to accurately predict yield at the field scale, providing a basis for timely and precise agricultural decision-making [9]. Wei Min et al. proposed a federated privacy-preserving UAV data collection framework, which improves the route accuracy of UAV path planning, strengthens data security, and reduces carbon emissions by 20% [10]. However, its strict data isolation mechanism exacerbates the problem of data fragmentation (non-independent and identically distributed, Non-IID, data) in the dynamic networking scenarios of unmanned platforms, significantly reducing the model convergence speed. The above-mentioned schemes for collaborative data processing in unmanned platforms struggle to simultaneously meet the requirements of privacy security, heterogeneity compatibility, and computational efficiency.
The Federated Edge Intelligence (FEI) method provides a new idea for cooperative unmanned platform data processing by integrating the privacy protection characteristics of federated learning and the near-source processing ability of edge computing. This technology aims to achieve efficient model training and knowledge sharing in a distributed environment, while reducing communication overhead. It is a frontier research direction in the field of unmanned platform collaborative computing.
Federated learning [11], a distributed machine learning approach, leverages the computational power of different nodes. It enables distributed data storage and processing to accelerate model training and improve performance. After local data training, participants share and aggregate model parameters, not raw data, achieving knowledge sharing and model optimization while ensuring data privacy and security. In an unmanned cluster environment, federated learning supports unmanned platforms to co-train a shared model while protecting data privacy, which can directly transfer model parameters between unmanned platforms instead of raw data, and it can be used to improve the efficiency of data sharing, which reduces the burden of communication and improves the efficiency of intelligent data processing. Federated learning can also resist malicious attacks and protect the security of the model, which plays a key role in the stable operation of unmanned swarms in complex environments.
Existing research on federated learning has primarily focused on addressing the issues of heterogeneity and privacy protection. For instance, Sun et al. developed a personalized federated multitask learning algorithm with knowledge distillation, improving each client’s personalized accuracy under heterogeneous models [12]. Zhang et al. proposed a federated class-incremental learning method combining hybrid knowledge distillation in a digital twin environment, enhancing the performance of federated class-incremental models under Non-IID data conditions [13]. Liu et al. introduced an adaptive encoder allocation model for better handling of device heterogeneity [14]. Yin et al. proposed a federated local momentum acceleration learning algorithm with attention mechanisms to alleviate data heterogeneity [15]. These algorithms mostly focus on single-dimension heterogeneity optimization, struggling to cope with the complex scenarios where data, device, and model heterogeneities interweave in unmanned platforms. Xiong presented a federated learning algorithm based on device clustering and differential privacy, which effectively enhanced model accuracy and convergence speed [16]. Rodríguez et al. constructed a classification system for adversarial attack and defense methods in federated learning, and proposed a guide for selecting defense methods, thereby providing a comprehensive framework for research on the security of federated learning [17]. Ballhausen et al. proposed the “federated secure computing” architecture, which separates cryptography from business logic through a simple API, supports multiple privacy-preserving computing protocols, and verified its efficiency on Internet of Things (IoT) devices, thus lowering the threshold for the application of privacy-preserving computing [18]. Karras et al. proposed a client-balanced Dirichlet sampling algorithm with probabilistic guarantees to alleviate the problem of oversampling, optimize the data distribution among clients, and thereby achieve more accurate and reliable model training [19].
In the practical application of federated learning for unmanned platforms, it also faces high latency and bandwidth pressure caused by data transmission, making it difficult to meet the strict real-time requirements of complex tasks. Edge computing [20] reduces latency and bandwidth consumption and improves computational efficiency by off-loading computing tasks to an unmanned platform closer to the data source. By collaboratively completing all computing tasks on edge nodes, edge computing not only optimizes task offloading scheduling, but also takes into account the communication cost, real-time computing capacity, and residual energy of each node; thus, the scheduling efficiency and endurance of the unmanned swarm are effectively improved.
The Cloud-Edge-End architecture [21], as a novel distributed computing paradigm, enables the construction of a highly collaborative and intelligent unmanned swarm system. As illustrated in Figure 1, the cloud, based on in-depth data analysis, promptly pushes trained or updated models to the edge and end devices, formulating macro-level strategies and plans for the entire system. The edge, leveraging its proximity to the data source, quickly receives data collected by end devices and utilizes its onboard computational resources to run relatively simple intelligent algorithms and models. The end devices, as the components most directly interfacing with the physical world or users, collect raw data using various sensors and transmit the collected data to the edge or cloud in a timely and accurate manner.
As shown in Figure 2, the main challenges for collaborative data processing on unmanned platforms in a cloud-edge-end architecture are as follows:
(1)
Data heterogeneity [22]: During the process of data collection and processing by large-scale unmanned platforms, due to varying task requirements and environmental conditions, different platforms may gather data with significantly different categories. Additionally, the numbers of samples collected by each platform may be unevenly distributed [23]. These factors result in the data collected by unmanned platforms exhibiting non-independent and identically distributed (Non-IID) characteristics.
(2)
Device heterogeneity: Unmanned platforms vary significantly in terms of computational resources, storage resources, and other aspects. This heterogeneity necessitates that algorithms and models in collaborative operations must adapt to the hardware limitations of different devices to achieve optimal resource allocation and utilization.
(3)
Model heterogeneity [24]: Different unmanned platforms may face diverse task requirements, necessitating the use of varying model architectures or parameter configurations.
(4)
Privacy protection [25]: Traditional data processing methods typically involve transmitting collected data to a centralized data processing center (such as a cloud server) for storage and analysis. During data transmission, due to the openness of the network, data is susceptible to interception by third parties. In this process, data privacy faces significant risks.
The cooperative data processing ability of unmanned platforms directly affects their task execution efficiency in dynamic environments. Existing methods often suffer from slow convergence and insufficient generalization ability due to data heterogeneity and device differences. In view of the above problems, this paper proposes a federated edge intelligence method (DSIA-FEI) that integrates data sharing and an intelligent model aggregation strategy, which significantly improves the accuracy, convergence speed, and stability of unmanned swarms in collaborative data processing. The main contributions and innovations of this paper are as follows:
(1)
This paper proposes a federated edge computing architecture for unmanned platform clusters, integrating technologies such as federated learning and edge computing into the cloud-edge-end paradigm. Compared with traditional federated learning methods, this architecture better adapts to the dynamic and distributed application scenarios of unmanned platforms through three-level collaboration: global scheduling at the cloud layer, distributed processing at the edge layer, and data collection at the terminal layer.
(2)
In the federated edge intelligence framework, to address the issues of data class imbalance and large distribution differences among edge platforms, a privacy-enhanced data sharing mechanism is introduced. While mitigating the problem of data distribution heterogeneity across platforms, this mechanism strengthens data privacy protection by adding random perturbations to data using Gaussian noise.
(3)
This paper proposes an intelligent model screening strategy that combines similarity measurement and loss gradient. The strategy first uses a hierarchical parameter alignment method to map the parameters of heterogeneous models to a unified space; then, according to the similarity coefficient and loss gradient, the local model that is most beneficial to the global model aggregation is selected, which significantly reduces the interference of device and model heterogeneity on the global model aggregation in the federated learning process.

2. Cloud-Edge-End Architecture for Unmanned Platform Swarms

The study, based on the cloud-edge-end collaborative computing paradigm, designs a three-layer federated edge computing architecture for unmanned swarms. This architecture effectively integrates the distributed model training advantages of federated learning with the near-source data processing capabilities of edge computing. While ensuring data privacy and security, it significantly enhances the overall task execution efficiency of the swarm. The specific details of the federated edge intelligence method within this architecture are provided in Section 3.
As shown in Figure 3, the proposed architecture consists of three layers: the central platform, edge platforms, and terminal platforms, forming an efficient and collaborative intelligent computing network. Specifically, on the terminal side, sensor-equipped platform nodes are responsible for environmental perception and data collection, as well as performing necessary data preprocessing operations. On the edge side, deployed edge platforms act as computational relay nodes, not only receiving and processing data transmitted from the terminal layer but also conducting local model training and parameter optimization, enabling near-source data processing. In the cloud layer, the central platform serves as the global coordinator, responsible for aggregating model parameters from various edge nodes and managing the updates and distribution of the global model.
This hierarchical architecture design, through the introduction of a federated edge intelligence framework, achieves multiple advantages:
(1)
At the data level, it ensures the privacy and security of raw data while facilitating cross-platform knowledge sharing.
(2)
At the computational level, localized processing at edge nodes significantly reduces data transmission overhead.
(3)
At the model level, a hierarchical collaborative training mechanism is employed, ensuring both the timeliness and accuracy of model updates.

3. Method of Federated Edge Intelligence

In the distributed model training process of unmanned platform swarms, edge platforms face two primary challenges when training models based on local datasets: First, due to the non-independent and identically distributed (Non-IID) nature of the data [11], there exists a significant discrepancy between local data distribution and the global distribution. Second, the heterogeneity in computational capabilities and model architectures further exacerbates the divergence between local models and the global model. These factors severely impact the convergence speed and final performance of the model.
Existing federated learning methods largely rely on parameter averaging for aggregation, which is based on the implicit assumption of independent and identically distributed (IID) data. This assumption is easily violated in the context of heterogeneous unmanned platform sensor data and diverse task scenarios, leading to model bias. To compensate for this bias, regularization or local fine-tuning is often required, which significantly slows down the convergence speed. Moreover, for edge nodes equipped with heterogeneous models, global aggregation becomes challenging under such circumstances.
To address these challenges, this paper proposes an innovative federated edge intelligence method with data sharing and intelligent model aggregation strategies (DSIA-FEI), which integrates data sharing and model similarity measurement. As shown in Figure 4, this method introduces a dynamic data-sharing mechanism, which effectively mitigates the negative impact of Non-IID data. Meanwhile, it performs hierarchical parameter alignment between local and global models and employs a model selection strategy based on similarity and loss gradient. This reduces the interference of device and model heterogeneity on global model aggregation, thereby significantly improving the efficiency of distributed training and model performance.

3.1. Data Sharing Strategy

Federated learning relies on local model updates uploaded by each participant to aggregate global models. When the data distribution is heterogeneous, the model will perform well on some participants’ data and poorly on other participants’ data. Because the global model is based on the aggregation of local models, the distribution difference will lead to the model not being able to fully adapt to the data characteristics of all participants. In the process of edge cooperative data acquisition and transmission of unmanned platform clusters, there are often cases of unbalanced data categories and uneven distribution, leading to slow convergence speed, model performance degradation, communication efficiency reduction, and other issues in the subsequent model training. In order to alleviate the problem of heterogeneous data distribution, this paper introduces a data sharing mechanism. The edge collaboration process after using the data sharing mechanism is as follows:
(1)
Data Sampling: In the terminal platforms, data is sampled according to a predefined sharing ratio. Research indicates that model accuracy plateaus when the data-sharing ratio is between 20% and 30% [11]. Beyond this range, further increasing the sharing ratio does not lead to unlimited accuracy gains. Thus, in the subsequent experiments, the sharing ratio was set at 20%. Additionally, data normalization and noise reduction were performed to eliminate redundant components from the original data.
(2)
Data Transmission: The sampled data is transmitted to the data-sharing platform. This process necessitates ensuring the accuracy and efficiency of data transfer.
(3)
Data Integration: The data-sharing platform integrates the received data to form a public dataset. This process involves data cleaning and preprocessing to ensure data quality.
(4)
Data Distribution: The common data in the data sharing platform is randomly sampled and distributed to each edge platform. The edge platforms utilize both the public dataset and their local datasets for model training.
By sharing the public dataset, the data discrepancies among different edge platforms are reduced. This contributes to the stability and reliability of the entire unmanned platform swarm, improving the central platform’s ability to generalize over local models from various edge platforms and the local models’ ability to generalize over datasets.
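The four data-sharing steps above can be sketched as follows. This is an illustrative sketch: `build_public_dataset` and `distribute` are hypothetical helper names, and datasets are modeled as plain Python lists of samples.

```python
import random

def build_public_dataset(platform_datasets, share_ratio=0.2, seed=0):
    """Sample a fixed fraction of each platform's local data into a shared pool
    (step 1: data sampling; the paper uses a 20% sharing ratio)."""
    rng = random.Random(seed)
    pool = []
    for data in platform_datasets:
        k = max(1, int(len(data) * share_ratio))
        pool.extend(rng.sample(data, k))  # steps 2-3: transmit and integrate
    return pool

def distribute(pool, n_platforms, samples_per_platform, seed=0):
    """Step 4: randomly sample from the public dataset for each edge platform."""
    rng = random.Random(seed)
    return [rng.sample(pool, min(samples_per_platform, len(pool)))
            for _ in range(n_platforms)]
```

Each edge platform then trains on the union of its local dataset and its shard of the public dataset.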
When applying the data sharing strategy in unmanned platforms, privacy can be protected by deleting sensitive information (such as UAV IDs and geographical locations) and retaining only the features required for model training. Controllable noise is then injected into the data, where Gaussian noise adds random perturbations to continuous data. Its probability density function is as follows:
f(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} \quad (1)
Specifically, the standard deviation $\sigma$ controls the noise amplitude, which enables the blurring of sensitive information while preserving the data distribution characteristics. However, the noise amplitude needs to be quantitatively regulated according to the signal-to-noise ratio (SNR), which is calculated as follows:
\mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10} \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} \quad (2)
where $\mathrm{SNR}_{\mathrm{dB}}$ denotes the signal-to-noise ratio in decibels, $P_{\mathrm{signal}}$ represents the power of the original data, and $P_{\mathrm{noise}}$ stands for the power of the Gaussian noise.
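As an illustration of calibrating the Gaussian perturbation to a target SNR via Equation (2), the following NumPy sketch (our own helper, not the authors’ code) derives the noise standard deviation from the signal power:

```python
import numpy as np

def add_gaussian_noise(x, snr_db, rng=None):
    """Perturb continuous features with zero-mean Gaussian noise at a target SNR.

    From Eq. (2): SNR_dB = 10*log10(P_signal / P_noise)
                  =>  P_noise = P_signal / 10**(SNR_dB / 10), sigma = sqrt(P_noise)
    """
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(np.asarray(x, dtype=float) ** 2)  # empirical signal power
    p_noise = p_signal / (10 ** (snr_db / 10))
    sigma = np.sqrt(p_noise)                              # noise amplitude (std dev)
    return x + rng.normal(0.0, sigma, size=np.shape(x))
```

Smaller `snr_db` values inject stronger perturbations, trading utility for privacy.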

3.2. Intelligent Model Aggregation Mechanism

Under the federated learning framework, the significant divergence between local models on edge platforms and the global model introduces adverse effects: Firstly, the participation of low-quality local models in global aggregation degrades the overall performance of the model, compromising the data processing accuracy of the unmanned platform swarm. Secondly, transmitting these inefficient models consumes valuable communication resources, exacerbating the load pressure on resource-constrained platforms [26]. This issue is particularly pronounced in scenarios involving non-independent and identically distributed (Non-IID) data.
From the perspective of model representation, the heterogeneity in local data distributions is directly reflected in the dynamic variations of the model’s weight matrix. The weight matrix exhibits dual characteristics:
  • Numerical Characteristics: The absolute values of the matrix elements reflect the strength of neuronal connections, indicating the model’s focus on different features.
  • Directional Characteristics: Treating the weight matrix as a vector in high-dimensional space, its directional information implicitly captures the learning trends and convergence direction of the model, analogous to semantic direction indicators in vector space models.
To ensure the effectiveness of model aggregation, accurately measuring the similarity between local models and the global model is crucial. However, traditional similarity metrics face limitations in simultaneously capturing both numerical differences and directional characteristics when dealing with high-dimensional neural network weight matrices. This paper proposes a dynamically evolving model similarity calculation method based on the Frobenius norm and cosine similarity. This approach uses the Frobenius norm [27] to quantify numerical differences in the weight matrix and leverages cosine similarity to capture the directional properties of the model, thereby achieving a comprehensive evaluation of model similarity. A weight parameter α is introduced to balance the influence of these two metrics on model similarity.
The similarity coefficient reflects the consistency and correlation between local and global models in the feature space, while the loss gradient of a model captures the update dynamics during training, indicating the direction and magnitude of parameter changes. Significant changes in a local model’s loss function suggest that new, valuable feature patterns have been learned from the local environment.
Accordingly, this paper advances an intelligent model aggregation strategy that combines similarity coefficients and loss gradients. This approach aims to identify local models that contribute positively to global model updates.

3.2.1. Alignment of Hierarchical Parameters

When calculating the similarity coefficient between the global model and a local model, or performing global model aggregation, it is necessary to ensure that the model parameter structures match. However, in the federated edge intelligence scenario for unmanned platform swarms, each unmanned platform node may deploy heterogeneous local models (such as CNNs or ResNets) due to task requirements and resource constraints. Although the overall model architectures differ, some layers (such as convolutional layers and fully connected layers) assume similar functions in feature extraction or classification tasks. Therefore, this paper proposes a hierarchical parameter alignment method that maps heterogeneous model parameters to a unified space, which can also improve the accuracy of feature extraction and classification.
The global model contains $K$ functional layers $L = \{l_1, l_2, \ldots, l_K\}$, where each layer $l_k$ is defined as a class of functional module, such as a convolutional layer, residual layer, or attention layer. Similarly, for the local model $M_i$ on the $i$-th edge platform, each local layer is aligned with a global layer according to its functional similarity.
For functionally similar layers on the $i$-th edge platform, the parameter dimensions may differ, so the local parameters $\theta_k^{(i)} \in \mathbb{R}^{d_k^{(i)}}$ must be mapped into the unified global parameter space $\theta_k \in \mathbb{R}^{d_k}$ by a parameter mapping function $\phi_k^{(i)}: \mathbb{R}^{d_k^{(i)}} \to \mathbb{R}^{d_k}$.
For the convolutional layer, let the local convolution kernel size be $h_i \times w_i$ and the global kernel size be $h \times w$; the mapping is then performed by bilinear interpolation, as shown in (3).
\tilde{\theta}_k^{(i)} = \mathrm{Resize}(\theta_k^{(i)},\ h \times w) \quad (3)
where, for a target location $(x, y)$ in the global kernel, the four adjacent source points are $(x_1, y_1)$, $(x_1, y_2)$, $(x_2, y_1)$, $(x_2, y_2)$, and the bilinear interpolation is given by Formula (4).
\tilde{\theta}_k(x, y) = \sum_{i=1}^{2} \sum_{j=1}^{2} \theta_k(x_i, y_j)\,(1 - |x - x_i|)\,(1 - |y - y_j|) \quad (4)
For the fully connected layer, let the dimension of the local weight matrix be $m_i \times n_i$ and the dimension of the global weight matrix be $m \times n$; the mapping is performed by zero-padding or truncation, as shown in Formula (5).
\tilde{\theta}_k(p, q) = \begin{cases} \theta_k^{(i)}(p, q), & \text{if } p \le m_i,\ q \le n_i \\ 0, & \text{otherwise} \end{cases} \quad (5)
In the residual layer, the module structure includes the main-path convolutions and a skip connection. Parameter mapping must align the main-path parameters and process the skip connection according to the global model structure. Formula (3) is again used to align the kernel size and number of channels of each main-path convolutional layer $W_{\mathrm{conv}}$, and the skip connection is aligned using a $1 \times 1$ convolution:
\tilde{W}_{\mathrm{conv}} = W_{1\times 1} * W_{\mathrm{conv}}, \qquad \tilde{W}_{\mathrm{shortcut}} = W_{1\times 1} * W_{\mathrm{shortcut}} \quad (6)
It should be noted that the number of channels is a structural parameter of the model rather than a weight parameter, and will not participate in the subsequent similarity measurement and model aggregation. The number of channels is mapped to keep the model structure consistent so that effective global aggregation can occur.
After parameter mapping, in order to eliminate scale differences among the parameters of different local models, the mapped weight parameters are normalized, as shown in Formulas (7)–(9).
\tilde{\theta}_k^{(i)} = \frac{\tilde{\theta}_k^{(i)} - \mu_k}{\sigma_k} \quad (7)
\mu_k = \mathbb{E}\big[\tilde{\theta}_k\big] \quad (8)
\sigma_k = \sqrt{\mathbb{E}\big[(\tilde{\theta}_k - \mu_k)^2\big]} \quad (9)
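The mapping and normalization steps above can be sketched in NumPy. These are illustrative helpers under our own simplified indexing (single-channel 2-D kernels, element-wise layer statistics), not the authors’ implementation:

```python
import numpy as np

def resize_kernel(theta, h, w):
    """Bilinear resize of a local 2-D kernel to the global size h x w (Eqs. (3)-(4))."""
    hi, wi = theta.shape
    out = np.empty((h, w))
    for a in range(h):
        for b in range(w):
            # map target cell (a, b) back into source coordinates
            y = a * (hi - 1) / max(h - 1, 1)
            x = b * (wi - 1) / max(w - 1, 1)
            y1, x1 = int(y), int(x)
            y2, x2 = min(y1 + 1, hi - 1), min(x1 + 1, wi - 1)
            wy, wx = y - y1, x - x1
            out[a, b] = (theta[y1, x1] * (1 - wy) * (1 - wx)
                         + theta[y1, x2] * (1 - wy) * wx
                         + theta[y2, x1] * wy * (1 - wx)
                         + theta[y2, x2] * wy * wx)
    return out

def pad_or_truncate(theta, m, n):
    """Zero-pad or truncate an FC weight matrix to the global m x n shape (Eq. (5))."""
    out = np.zeros((m, n))
    mi, ni = min(m, theta.shape[0]), min(n, theta.shape[1])
    out[:mi, :ni] = theta[:mi, :ni]
    return out

def normalize(theta, eps=1e-12):
    """Standardize mapped parameters (Eqs. (7)-(9)); eps guards constant layers."""
    return (theta - theta.mean()) / (theta.std() + eps)
```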

3.2.2. Coefficient of Similarity and Loss Gradient

The Frobenius norm [28,29] directly quantifies the element-wise numerical difference between two matrices. For the weight matrices, computing the Frobenius norm between the global and local models clearly quantifies the degree of deviation between them: the smaller the Frobenius norm, the higher the similarity. For the global model weights $G$ and the local model weights $L_i$, the layer-averaged Frobenius norm is calculated as shown in Formula (10).
\|G - L_i\|_F = \frac{1}{\sum_{k=1}^{K}\gamma_k} \sum_{k=1}^{K} \gamma_k \sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N} \left( G_{mn} - L_{i,mn} \right)^2} \quad (10)
where $\gamma_k$ takes the value 0 or 1: 0 indicates that the corresponding functional module does not exist in the local model of the edge platform; otherwise, it is 1.
The Frobenius norm only focuses on the numerical difference of the weight matrices and ignores their directional consistency as vector sets, so it cannot reflect whether the models’ learning trends are synergistic. Cosine similarity captures the directional relationship between vectors: after the weight matrix is flattened into a vector by rows or columns, the cosine similarity reveals the similarity of the models in the direction of feature learning. The higher the cosine similarity of two weight matrices, the closer their directions in the high-dimensional space, that is, the more similar the models’ feature extraction patterns. After the global model weights $G$ and the local model weights $L_i$ are flattened into vectors, the cosine similarity is calculated as shown in Formula (11).
\cos\langle G, L_i \rangle = \frac{1}{\sum_{k=1}^{K}\gamma_k} \sum_{k=1}^{K} \gamma_k \frac{\sum_{j=1}^{m \times n} G_j L_{i,j}}{\sqrt{\sum_{j=1}^{m \times n} G_j^2}\,\sqrt{\sum_{j=1}^{m \times n} L_{i,j}^2}} \quad (11)
After calculating the Frobenius norm and cosine similarity between the local and global models, the similarity coefficient is computed as shown in Formula (12).
S_i = \alpha \cos\langle G, L_i \rangle + (1 - \alpha)\frac{1}{\|G - L_i\|_F} \quad (12)
In Formula (12), $S_i$ represents the similarity coefficient between the local model of the $i$-th edge platform and the global model. A higher value of $S_i$ indicates a greater degree of similarity between the models.
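A minimal sketch of the similarity coefficient in Formula (12), assuming each model is represented as a list of layer weight arrays already aligned to the global structure, with `None` marking modules absent locally (so that $\gamma_k = 0$):

```python
import numpy as np

def similarity(global_layers, local_layers, alpha=0.8, eps=1e-12):
    """S_i = alpha * cos<G, L_i> + (1 - alpha) / ||G - L_i||_F,
    with both terms averaged over the layers present locally (gamma_k = 1)."""
    cos_sum, fro_sum, present = 0.0, 0.0, 0
    for g, l in zip(global_layers, local_layers):
        if l is None:  # gamma_k = 0: module absent in this local model
            continue
        gv, lv = g.ravel(), l.ravel()
        cos_sum += gv @ lv / (np.linalg.norm(gv) * np.linalg.norm(lv))
        fro_sum += np.linalg.norm(gv - lv)  # Frobenius norm of the difference
        present += 1
    cos_mean = cos_sum / present
    fro_mean = fro_sum / present
    return alpha * cos_mean + (1 - alpha) / (fro_mean + eps)
```

Note the Frobenius term diverges as the models coincide; in practice a floor such as `eps` (our addition) keeps the coefficient finite.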
In federated learning, the loss gradient reflects the direction and speed of the model’s descent with the current parameters. In the early stage of model training, when the loss gradient is large, there is a large gap between the current parameters of the model and the optimal parameters, and the model needs to quickly adjust the parameters to reduce the loss function value. At this time, the cosine similarity can better measure the directional consistency between the local model and the global model, determine the reasonable update direction, and improve the accuracy of the local model, avoiding the deviation of the update direction from the optimal path caused by excessive attention to the numerical difference of parameters.
As training proceeds and the model enters the later fine-tuning stage, the learning direction of each node has stabilized and the loss function trend is stable, so the measure gradually shifts toward the Frobenius norm. Once the loss is stable, the focus of optimization is the numerical accuracy of weight updates, which effectively controls the numerical influence of local updates on the global model and improves overall model performance.
Therefore, in this paper, α is tied to the rate of change of the loss function and shrinks as training iterates, which lets the model dynamically adjust the similarity coefficient according to the current training state and task requirements, as shown in Formula (13).
$$\alpha_t = \alpha_{t-1} - \beta \, \frac{L_{t-1} - L_t}{L_{t-1}} \qquad (13)$$
Here, t is the current iteration, L_{t-1} and L_t are the loss values of the previous and current iterations, and β controls the adjustment step size applied to the relative loss change (L_{t-1} − L_t)/L_{t-1}. An excessively large β tends to induce oscillations in α, whereas an excessively small β slows α's convergence.
In subsequent experiments, the initial value is set to α_0 = 0.8 to ensure consistent update directions between the local and global models early in training. Since the relative loss change (L_{t-1} − L_t)/L_{t-1} lies within [−1, 1] in practice, β is set to 0.05 so that the adjustment step β(L_{t-1} − L_t)/L_{t-1} stays below 10% of α_{t-1}, preventing weight oscillations.
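With these settings, the α update of Formula (13) reduces to a one-line rule; the helper below is a sketch (the function name is illustrative), using the paper's β = 0.05 and α_0 = 0.8.

```python
def update_alpha(alpha_prev, loss_prev, loss_curr, beta=0.05, eps=1e-12):
    """Formula (13): shrink the cosine weight alpha in proportion to the
    relative loss change; eps guards against a zero previous loss."""
    rel_change = (loss_prev - loss_curr) / (loss_prev + eps)
    return alpha_prev - beta * rel_change

# e.g. with alpha_0 = 0.8 and the loss falling from 1.0 to 0.9,
# alpha_1 = 0.8 - 0.05 * 0.1 = 0.795
```

As the loss plateaus, rel_change approaches zero and α stops shrinking, which is exactly the stabilizing behavior described above.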
After calculating the similarity coefficient, the loss gradient of the i-th local model is computed as shown in Formula (14).
$$\nabla L_i = \frac{\partial L_i(\theta)}{\partial \theta} \qquad (14)$$

3.2.3. Model Aggregation Strategy

After computing each local model's similarity coefficient S_i and loss gradient ∇L_i, we present an intelligent model aggregation strategy. Introducing aggregation thresholds for the similarity coefficient and loss gradient (th_s, th_l) and a model aggregation pool (Agg) enables selective filtering of local models for global aggregation. In each federated learning iteration, the system compares S_i with th_s and ∇L_i with th_l for each edge platform's local model. A local model is added to the aggregation pool only if both S_i and ∇L_i exceed their respective thresholds. This mechanism filters out low-quality local models, enhancing the global model's robustness, convergence stability, and generalization ability.
Studies indicate that the similarity coefficient between local and global models is around 1.0 when datasets have no noisy labels [30]. Consequently, in subsequent experiments, the thresholds are set at th_s = 1.0 and th_l = 0.1.
After filtering the qualifying local models, the global model for each functional layer k is obtained by weighted averaging, as shown in Formula (15).
$$\theta_k = \sum_{i=1}^{k} \frac{n_i}{n}\,\theta_k^{(i)} \qquad (15)$$
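Formula (15) amounts to a per-layer, sample-size-weighted average. A minimal sketch over state-dict-style parameter dictionaries (NumPy arrays here for brevity; the same logic applies to PyTorch tensors, and the function name is illustrative):

```python
import numpy as np

def weighted_average(local_states, sizes):
    """Formula (15): sample-size-weighted average of the selected local
    models' parameters, computed per layer key. `local_states` is a list
    of dicts mapping layer names to arrays; `sizes` holds n_i values."""
    n = float(sum(sizes))
    return {key: sum(sz / n * state[key] for state, sz in zip(local_states, sizes))
            for key in local_states[0]}
```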

3.3. Federated Learning Process

Based on the theoretical analysis above, the workflow of federated learning for heterogeneous unmanned platforms is outlined in Algorithm 1.
Algorithm 1 Federated Learning Algorithm
Input: maximum number of iterations T
Output: global model parameters θ_k
1:  initialize θ_k^0
2:  for node_i = 1 to Node do
3:    for k = 1 to K do
4:      θ_k^(i) ← θ_k^0
5:    end for
6:  end for
7:  for t = 1 to T do
8:    for node_i = 1 to Node do
9:      for k = 1 to K do
10:       θ_{t+1}^{(i,k)} ← θ_t^{(i,k)} − lr · ∇f(θ_t^{(i,k)})    ▷ local update
11:       apply hierarchical parameter mapping
12:     end for
13:     compute the similarity coefficient S_i and loss gradient ∇L_i
14:     if S_i ≥ th_s and ∇L_i ≥ th_l then
15:       Agg.append(θ_{t+1}^{(i)})
16:     end if
17:   end for
18:   if Agg is empty then
19:     aggregate all local models instead
20:   end if
21: end for
22: for k = 1 to K do
23:   θ_k ← Σ_i (n_i / n) · θ_k^(i)
24: end for
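The threshold-based selection of lines 14–20 in Algorithm 1 can be sketched as follows; representing each candidate as a (state, similarity, loss_gradient) triple is an assumed interface for illustration, not the authors' API.

```python
def select_for_aggregation(candidates, th_s=1.0, th_l=0.1):
    """Lines 14-20 of Algorithm 1 (sketch): models clearing both the
    similarity and loss-gradient thresholds join the aggregation pool;
    if none qualify, every local model is aggregated instead."""
    pool = [state for state, s, g in candidates if s >= th_s and g >= th_l]
    return pool if pool else [state for state, _, _ in candidates]
```

The fallback branch mirrors the algorithm's empty-pool case, guaranteeing that a global update happens in every round.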

4. Experimental Results and Analysis

In this paper, edge intelligent devices are used as resource-constrained unmanned platforms, and a variety of datasets are used in simulation to verify the effectiveness of the proposed method.

4.1. Dataset

This paper conducts experiments using the open-source FEMNIST, FLAIR, EuroSAT, and RSSCN7 datasets. FEMNIST is derived from the multi-writer EMNIST handwritten character dataset. There are differences in character categories and features across writers. FLAIR consists of around 430,000 images from 51,000 Flickr users. It has varying numbers of images per user and classes per collection. Images of the same class from different users exhibit distribution shifts. Both FEMNIST and FLAIR have inherent Non–IID properties [31]. EuroSAT contains a large number of high-resolution satellite images from various regions across Europe, covering 10 different land cover types such as farmland, forest, grassland, and rivers, with high spectral and spatial resolution. Meanwhile, RSSCN7 primarily focuses on remote sensing image data from China, encompassing seven categories including residential areas, woodland, grassland, water bodies, roads, bare land, and farmland. Examples of the EuroSAT and RSSCN7 datasets are illustrated in Figure 5.
As illustrated in Figure 6 and Figure 7, in the subsequent experiments, the dataset is divided into multiple shards, which are distributed to individual edge platforms in a Non-IID manner. Figure 6a and Figure 7a show the initial shard allocation for 10 platforms with Non-IID data. There are substantial disparities in data quantity across different classes on various platforms, reflecting a highly imbalanced and Non-IID data distribution. Figure 6b and Figure 7b depict the shard allocation after data sharing. While the Non-IID characteristics are still partially retained, the data distribution is improved, with better and more balanced coverage of classes across platforms. This indicates that the data-sharing mechanism alleviates data distribution bias and class imbalance, providing a more solid foundation for federated learning model training.
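The data-sharing mechanism described above (each platform contributes a fraction of its local data to a public pool, which is then sampled and redistributed) might be sketched as follows. The 20% share ratio matches Section 4.2.3, while the function name and the equal-share redistribution policy are illustrative assumptions.

```python
import random

def build_shared_dataset(client_datasets, share_ratio=0.2, seed=0):
    """Sketch of the data-sharing mechanism: every platform contributes a
    random fraction of its local samples to a public pool, which is then
    shuffled, and an equal share is appended back to each platform."""
    rng = random.Random(seed)
    pool = []
    for data in client_datasets:
        k = int(len(data) * share_ratio)
        pool.extend(rng.sample(data, k))          # contribute a random slice
    rng.shuffle(pool)
    per_client = len(pool) // len(client_datasets)
    return [data + pool[i * per_client:(i + 1) * per_client]
            for i, data in enumerate(client_datasets)]
```

Because the pool mixes samples from all platforms, each client's augmented shard covers classes it may not hold locally, which is the effect visible in Figure 6b and Figure 7b.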

4.2. Experimental Setup

4.2.1. Assessment Indicators

To evaluate the effectiveness of the proposed federated edge intelligence method in enhancing the collaborative operations of unmanned platform swarms, four evaluation metrics were employed: accuracy, macro-averaged F1 score, gradient divergence among clients, and communication rounds. Accuracy and macro-averaged F1 quantify the accuracy of the swarm's collaborative operations, gradient divergence assesses the consistency of model updates across edge platforms, and communication rounds measure communication cost.
(1)
Accuracy: Accuracy reflects the model's overall classification performance on the data of all edge unmanned platforms. However, when the data are Non-IID, accuracy may not reflect the model's performance on minority classes, especially under class imbalance.
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$
(2)
F1_score: In Non-IID datasets, some classes may have few samples, and the F1 score better reflects performance on minority classes. For a dataset with M classes, the F1 score of each class C (F1_score_C) is computed first, and the macro-averaged F1 score (macro_F1) then evaluates the model's classification capability across classes.
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F1\_score = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$$
$$macro\_F1 = \frac{1}{M} \sum_{C=1}^{M} F1\_score\_C$$
(3)
Grad_diff: In Non-IID settings, gradient divergence is typically pronounced because data distributions differ across edge platforms. Its magnitude therefore reflects the efficacy of the data-sharing strategy.
$$Grad_{diff} = \sum_{i=1}^{N-1} \lVert \nabla f_i(\omega_t) - \nabla f_{i+1}(\omega_t) \rVert^2$$
Here, TP, TN, FP, and FN denote the counts of true positives, true negatives, false positives, and false negatives, respectively, and ∇f_i(ω_t) is the gradient of the local model on the i-th edge platform.
(4)
Rounds: Communication rounds are a key metric in federated learning, reflecting how often client devices exchange parameters with the central server during training. In this study, the number of communication rounds needed for the federated model to converge is used as an evaluation metric; a lower value indicates that the model consumes fewer communication resources.
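The macro-averaged F1 defined above can be computed directly from one-vs-rest counts; a minimal NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    """Macro-averaged F1 per the formulas above: one-vs-rest TP/FP/FN counts
    give per-class precision and recall; per-class F1 scores are then
    averaged with equal weight, so minority classes count as much as
    majority ones."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(scores))
```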

4.2.2. Baseline Methods

This paper employs five classical federated learning algorithms—FedAvg [11], FedProx [32], FedCosA [33], FedBN [34], and FedMAE [35]—as baseline methods. FedAvg is a classic federated learning algorithm involving multiple rounds of iterative training: models are trained in parallel on multiple local clients, and local model parameters are uploaded to the server for weighted averaging to update the global model. FedProx introduces a regularization term into the local objective function to constrain the difference between the local and global models, thereby alleviating data heterogeneity. FedCosA addresses slow convergence, overfitting, and high communication costs by integrating the Adam optimizer, a cosine annealing learning rate scheduler, and weight decay; it is especially suitable for Non-IID data scenarios. FedBN applies local batch normalization to alleviate feature shift before averaging the models. FedMAE pre-trains one-block masked autoencoders (MAEs) on large images on lightweight client devices, then cascades multiple pre-trained single-block MAEs on the server to build a multi-block ViT backbone for downstream tasks.
This paper integrates the hierarchical parameter alignment method into all baseline algorithms to handle heterogeneous models. It employs the FedAvg algorithm as the primary baseline method. Subsequently, ablation experiments are conducted to analyze the impact of incorporating data-sharing strategies and similarity-based aggregation strategies into the model. Finally, the proposed method in this paper is compared with the FedAvg, FedProx, FedCosA, FedBN and FedMAE algorithms through comparative experiments to validate the effectiveness of the proposed method.

4.2.3. Training Settings

To verify the effectiveness of the proposed federated edge intelligence approach, as shown in Figure 8, the NVIDIA Jetson Nano (Hunan Chuanglebo Intelligent Technology Co., Ltd., Changsha, China) is adopted as the edge intelligent device for model deployment. A heterogeneous platform is simulated using three edge intelligent devices with different computing resources; their details are presented in Table 1. Programming was conducted in PyCharm 2024 with Python 3.9, PyTorch 1.12.0, and CUDA 12.6.
In this study, two heterogeneous models, a CNN and a ResNet, are randomly deployed to each client for training on the EuroSAT and RSSCN7 datasets; their network architectures are shown in Figure 8a and Figure 8b, respectively. The federated learning rate (lr) is set to 0.01. Both the global communication rounds and each client's local training rounds are set to 500. The number of clients is 10, the data-sharing ratio is 0.2, and the aggregation threshold is set to 1.0. The subsequent ablation and comparative experiments were each run independently 20 times, and the tables report the averages of the experimental metrics over these 20 runs.

4.3. Ablation Experiment

To assess the independent contributions of the Data Sharing Mechanism (DSM) and the Intelligent Model Aggregation Strategy (IMAS) to model performance, this paper designs a systematic ablation study. The experimental results, as presented in Table 2, provide a quantitative analysis of the impact of each module on model performance.
The experimental results demonstrate that introducing the data sharing mechanism improved F1 scores by 0.09 on FEMNIST, 0.17 on FLAIR, 0.24 on EuroSAT, and 0.18 on RSSCN7. These findings validate the effectiveness of the data sharing strategy in mitigating the Non-IID (non-independent and identically distributed) characteristics of the data: by constructing a shared dataset, the strategy effectively alleviates data distribution discrepancies among edge platforms.
Furthermore, incorporating the intelligent model aggregation strategy reduced the gradient difference by 2.1403 on FEMNIST, 2.1146 on FLAIR, 1.0483 on EuroSAT, and 2.3264 on RSSCN7, and significantly decreased the number of communication rounds required for convergence. These results demonstrate that the similarity-based model aggregation strategy effectively mitigates the negative impacts of device heterogeneity: by filtering for high-quality local model updates, it reduces communication resource consumption and enhances the stability and convergence of the global model.

4.4. Comparative Experiment

Table 3 presents the experimental results of FedAvg, FedProx, FedCosA, FedBN, FedMAE, and the proposed DSIA-FEI algorithm on the FEMNIST, FLAIR, EuroSAT, and RSSCN7 datasets. Compared to FedAvg, FedProx, FedCosA, FedBN, and FedMAE, the DSIA-FEI algorithm achieves higher overall prediction accuracy and F1 scores. It also reduces parameter differences among local models on edge platforms and decreases the communication rounds needed for convergence.
In 20 independent experimental runs, the DSIA-FEI algorithm demonstrated strong stability. On all datasets, its accuracy and F1 score values ranged from 0.85 to 0.95, with variances of 0.34 and 0.15, respectively. The gradient discrepancy values were between 0.2 and 0.5, with variances of 0.08 and 0.13. These results confirm the stability of the DSIA-FEI algorithm.
Figure 9 illustrates the convergence processes of DSIA-FEI, FedAvg, FedProx, FedCosA, FedBN, and FedMAE on the training sets of FEMNIST, FLAIR, EuroSAT, and RSSCN7. DSIA-FEI not only outperforms the other baseline algorithms in convergence speed and stability but also demonstrates superior overall performance. This is attributed to the intelligent model aggregation strategy introduced in DSIA-FEI, which optimizes the global model update process by reducing unnecessary parameter updates. As a result, DSIA-FEI accelerates overall convergence and enhances the model's learning capability across the datasets.

5. Conclusions

Aiming at the problems of multi-source heterogeneous data and heterogeneous devices and models in cooperative data processing of unmanned platforms, this paper proposes a cooperative data processing method for unmanned platforms based on federated edge intelligence, in which remote sensing image datasets are partitioned to simulate Non-IID characteristics. Simulation experiments are carried out both on datasets with inherent Non-IID characteristics and on the constructed simulated Non-IID partitions. The effectiveness of the data sharing strategy and of the intelligent model aggregation strategy based on similarity measurement and loss gradient is verified against existing baseline methods: the proposed method achieves accuracy and stability in data processing clearly better than the existing methods. In addition, the method significantly alleviates the impact of heterogeneous data, devices, and models on data processing, and reduces the communication resource consumption of resource-constrained devices such as unmanned platforms.

6. Future Perspectives

Heterogeneity in data, devices, and models represents a pivotal challenge for unmanned clusters during collaborative data processing. While the federated edge intelligence approach proposed in this paper can effectively alleviate this issue, further practical and complex applications of unmanned swarms are bound to expose more technological bottlenecks and key research questions. Based on this study, the following directions can be pursued for future research and expansion:
(1)
Privacy Protection Strategies: The data sharing strategy employed in this paper can effectively alleviate data heterogeneity. However, data leakage and privacy risks remain during data sharing between edge and end-device unmanned platforms. Future work should explore privacy/utility trade-off mechanisms for data sharing in open environments, for example by organically integrating security-enhancing technologies such as differential privacy and homomorphic encryption with the federated learning framework. In addition, constructing verifiable privacy protection paradigms will be crucial.
(2)
Cross-Modal Data Fusion: The federated edge intelligence framework in this paper solves the non-IID property of heterogeneous data, but has not fully considered the semantic correlation of cross-modal data. As unmanned swarms may collect and process multi-modal data in practical scenarios, future work can explore cross-modal federated learning frameworks. By using knowledge distillation and building hierarchical semantic alignment networks, deep feature fusion of multi-modal data like images, sounds, and electric currents can be achieved.
(3)
Hybrid Simulation System Construction: The experimental verification of the federated edge intelligence method proposed in this paper is mainly based on simulation and limited real data, lacking adaptive verification in complex dynamic scenarios. With the rapid development of hybrid reality and digital twin technologies, future research can design hybrid simulation systems with multi-physics coupling. Creating high-fidelity unmanned swarm training environments to simulate complex scenarios like battlefields and urban canyons can validate the algorithm’s robustness in diverse conditions.

Author Contributions

Conceptualization, N.S. and S.L.; methodology, S.L.; validation, S.L.; formal analysis, S.L.; investigation, N.S.; data curation, X.X.; writing—original draft preparation, S.L.; writing—review and editing, S.L.; visualization, S.L.; supervision, X.X.; project administration, N.S. and X.B.; funding acquisition, N.S. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China under Grants 62102436 and 62406337, in part by the Natural Science Foundation of Hubei Province under Grants 2021CFB279 and 202250E060, and in part by the National Defence Science and Technology Key Laboratory Foundation Project under Grant 614221722050603.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The FEMNIST dataset in this paper is available at https://leaf.cmu.edu (accessed on 6 March 2025). The FLAIR dataset in this paper is available at https://github.com/apple/ml-flair (accessed on 6 March 2025). The EuroSAT dataset in this paper is available at https://github.com/phelber/eurosat (accessed on 6 March 2025). The RSSCN7 dataset in this paper is available at https://aistudio.baidu.com/datasetdetail/52117 (accessed on 6 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Non-IID: Non-Independent and Identically Distributed
DSM: Data Sharing Mechanism
IMAS: Intelligent Model Aggregation Strategy
FedAvg: Federated Averaging
FedProx: Federated Proximal
FedCosA: Federated Learning Using Cosine Annealing
FEI: Federated Edge Intelligence
DSIA-FEI: Federated Edge Intelligence Based on Data Sharing and Intelligent Model Aggregation

References

  1. Li, G.S.; Liu, Y.; Zheng, Q.B.; Yang, G.L.; Liu, K.; Wang, Q.; Diao, X.C. Review of multi-sensor data fusion for UAV. J. Softw. 2025, 36, 1881–1905. [Google Scholar]
  2. Wong, T.W. Research on Multi-Function UAV System for Cooperative Operation. Master’s Thesis, Zhejiang University, Hangzhou, China, 2023. [Google Scholar]
  3. Yan, R.C.; Li, S.; Wang, C.; Wu, Q.; Sun, G.N.; Zhang, S.K.; Xie, G.M. A Collaborative Confrontation Decision-Making Method for Heterogeneous Drone Swarms Based on Self-Play Reinforcement Learning. Sci. China Inf. Sci. 2024, 54, 1709–1729. [Google Scholar]
  4. Shi, Z.G.; Xu, K.P.; Gong, X.; Li, S.N.; Wang, F.F.; Yang, A.; Xiong, Z.K. Research on Autonomous Collaborative Capability Verification of Drone Swarms Based on Meta-Battlefield Domain. J. Ordnance Equip. Eng. 2024, 45, 38–43+49. [Google Scholar]
  5. Xue, S.X.; Ma, Y.J.; Jiang, B.; Li, W.B.; Liu, C.R. A Distributed Task Allocation Algorithm for Heterogeneous Drone Swarms Based on Coalition Formation Game. Sci. China Inf. Sci. 2024, 54, 2657–2673. [Google Scholar]
  6. Liu, H.; Zhang, Y.F.; Zhang, W.B.; Hu, Q.Z. A Collaborative Cruise Method for Heterogeneous Swarms Based on Quantum Decision-Makin. Command. Control Simul. 2024, 46, 66–76. [Google Scholar]
  7. Lu, Y.F.; Wu, T.; Liu, C.S.; Yan, K.; Qu, Y.B. Survey on uav-assisted energy-efficient marginal federated learning. Comput. Sci. 2024, 51, 270–279. [Google Scholar]
  8. Indu, C.; Kizheppatt, V. Decentralized Multi-Hop Data Processing in UAV Networks Using MARL. Veh. Commun. 2024, 50, 100858. [Google Scholar] [CrossRef]
  9. Wang, Q.; Shao, K.; Cai, Z.B.; Che, Y.P.; Chen, H.C.; Xiao, S.F.; Wang, R.L.; Liu, Y.L.; Li, B.G.; Ma, Y.T. Prediction of Sugar Beet Yield and Quality Parameters Using Stacked-LSTM Model with Pre-Harvest UAV Time Series Data and Meteorological Factors. Artif. Intell. Agric. 2025, 15, 252–265. [Google Scholar] [CrossRef]
  10. Min, W.; Muthanna, M.S.A.; Ibrahim, M.; Alkanhel, R.; Muthanna, A.; Laouid, A. Privacy-preserving Federated UAV Data Collection Framework for Autonomous Path Optimization in Maritime Operations. Appl. Soft Comput. 2025, 173, 112906. [Google Scholar] [CrossRef]
  11. Li, X.; Huang, K.; Yang, W.; Wang, S.; Zhang, Z. On the Convergence of FedAvg on Non-IID Data. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  12. Sun, Y.; Wang, Z.; Liu, C.; Wang, Z.; Li, M. Personalized Federated Multi-Task Learning Algorithm Based on Knowledge Distillation. J. Beijing Univ. Posts Telecommun. 2025, 48, 91–96. [Google Scholar]
  13. Zhang, M.Q.; Jia, Y.Y.; Zhang, R.H. Heterogeneous Federated Class-Incremental Learning Assisted by Hybrid Knowledge Distillation in Digital Twins. J. Intell. Syst. 2025, 1–11. Available online: https://kns.cnki.net/kcms2/article/abstract?v=9IId9Ku_yBbYkdgHkFjtw6pDFD3fe70ibGC4vL1OpEE09pXDI0FdraN5sE4V_8FzGIcgtKtPMVJIinpRJ3W60pPVVBIrhGstXNzaj-VPLUxy5rqrBY1qbLmgR49r6Dg2blNQmwvKSJGztEs7tXsQ85XjB_0i35iPXe4d0RnEH2saN6SymZhtwQ==&uniplatform=NZKPT&language=CHS (accessed on 29 July 2025).
  14. Liu, L.; Wu, S.H.; Yu, D.; Ma, Y.; Chen, Y. Federated Learning with Adaptive Encoder Allocation for Heterogeneous Devices. Comput. Eng. Des. 2024, 45, 2569–2576. [Google Scholar]
  15. Yin, H.J.; Zheng, K.Q.; Ke, J.N.; Dong, Y.Q. Federated Learning Method for Non-IID Data with Local Momentum Acceleration. Comput. Eng. 2025, 1–9. [Google Scholar] [CrossRef]
  16. Xiong, Z.J. Research on Federated Learning Algorithm Based on Device Clustering and Differential Privacy. Master’s Thesis, Beijing University of Posts and Telecommunications, Beijing, China, 2024. [Google Scholar]
  17. Rodríguez-Barroso, N.; Jiménez-López, D.; Luzón, M.V.; Herrera, F.; Martínez-Cámara, E. Survey on Federated Learning Threats: Concepts, Taxonomy on Attacks and Defences, Experimental Study and Challenges. Inf. Fusion 2023, 90, 148–173. [Google Scholar] [CrossRef]
  18. Ballhausen, H.; Hinske, L.C. Federated Secure Computing. Informatics 2023, 10, 83. [Google Scholar] [CrossRef]
  19. Karras, A.; Karras, C.; Giotopoulos, K.C.; Tsolis, D.; Oikonomou, K.; Sioutas, S. Federated Edge Intelligence and Edge Caching Mechanisms. Information 2023, 14, 414. [Google Scholar] [CrossRef]
  20. Zhang, Y.T.; Di, B.Y.; Wang, P.F.; Lin, J.L.; Song, L.Y. HetMEC: Heterogeneous Multi-Layer Mobile Edge Computing in the 6G Era. IEEE Trans. Veh. Technol. 2020, 69, 4388–4400. [Google Scholar] [CrossRef]
  21. Shi, J.F.; Chen, X.Y.; Li, B.L. Research on task offloading and resource allocation algorithm in Cloud Edge collaborative computing for Internet of Thing. J. Electron. Inf. 2024, 47, 458–469. [Google Scholar]
  22. Zhang, H.Y.; Zhang, Z.Y.; Cao, C.M. A federated learning method to solve the problem of data heterogeneity. Comput. Appl. Res. 2024, 41, 713–720. [Google Scholar]
  23. Alladi, T.; Bansal, G.; Chamola, V.; Guizani, M. SecAuthUAV: A Novel Authentication Scheme for UAV-Ground Station and UAV-UAV Communication. IEEE Trans. Veh. Technol. 2020, 69, 15068–15077. [Google Scholar] [CrossRef]
  24. Yu, H.; Fan, J.; Sun, Y.H. Survey of heterogeneous federated learning in unmanned systems. Comput. Appl. Res. 2024, 42, 641. [Google Scholar]
  25. Liu, C. Research on Optimization Method of Joint Resource Utilization in Internet of Things Based on Federated Edge Intelligence. Master’s Thesis, The Huazhong University of Science and Technology, Wuhan, China, 2022. [Google Scholar]
  26. Ma, C.W.; Xu, X.; Chang, W.W. Research progress on cooperative control of unmanned ground platform swarms. Unmanned Syst. Technol. 2022, 5, 1–11. [Google Scholar]
  27. Liu, J.; Xu, Y.; Xu, H.; Liao, Y.; Wang, Z.; Huang, H. Enhancing Federated Learning with Intelligent Model Migration in Heterogeneous Edge Computing. In Proceedings of the IEEE International Conference on Data Engineering, Kuala Lumpur, Malaysia, 9–12 May 2022. [Google Scholar]
  28. Herzog, R.; Köhne, F.; Kreis, L.; Schiela, A. Frobenius-Type Norms and Inner Products of Matrices and Linear Maps with Applications to Neural Network Training. arXiv 2023, arXiv:2311.15419. [Google Scholar] [CrossRef]
  29. Huang, Y.; Liao, G.; Xiang, Y.; Zhang, L.; Li, J.; Nehorai, A. Low-Rank Approximation Via Generalized Reweighted Iterative Nuclear and Frobenius Norms. IEEE Trans. Image Process. 2019, 29, 2244–2257. [Google Scholar] [CrossRef]
  30. Liu, Y.; Wang, T.; Peng, S.L.; Wang, G.J.; Jia, W. Edge-based federated learning model cleaning and equipment clustering method. Acta Comput. Sin. 2021, 44, 2515–2528. [Google Scholar]
  31. Congzheng, S.; Filip, G.; Kunal, T. FLAIR: Federated Learning Annotated Image Repository. Adv. Neural Inf. Process. Syst. 2022, 35, 37792–37805. [Google Scholar]
  32. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
  33. Binu, J.; Varun, G.; Mainak, A. FedCosA: Optimized Federated Learning Model Using Cosine Annealing. In 2025 Emerging Technologies for Intelligent Systems (ETIS); IEEE: New York, NY, USA, 2025; pp. 1–6. [Google Scholar]
  34. Li, X.; Jiang, M.; Zhang, X.; Kamp, M.; Dou, Q. FedBN: Federated Learning on Non-IID Features Via Local Batch Normalization. In Proceedings of the International Conference on Learning Representations, Virtual Event, Austria, 3–7 May 2021. [Google Scholar]
  35. Yang, N.; Chen, X.; Liu, C.Z.; Yuan, D.; Bao, W.; Cui, L. FedMAE: Federated Self-Supervised Learning with One-Block Masked Auto-Encoder. arXiv 2023, arXiv:2303.11339. [Google Scholar]
Figure 1. Schematic of cloud-edge-end architecture.
Figure 2. Challenges faced by heterogeneous unmanned platform swarms.
Figure 3. Swarm architecture of unmanned platform.
Figure 4. Schematic diagram of federated edge intelligent methods.
Figure 5. Sample graph of EuroSAT and RSSCN7 dataset. (a) EuroSAT; (b) RSSCN7.
Figure 6. EuroSAT data distribution of each platform before and after data sharing strategy. (a) EuroSAT-Non-IID; (b) EuroSAT-Non-IID-shared.
Figure 7. RSSCN7 data distribution of each platform before and after data sharing strategy. (a) RSSCN7-Non-IID; (b) RSSCN7-Non-IID-shared.
Figure 8. CNN and ResNet network architecture diagram. (a) CNN network; (b) ResNet network.
Figure 9. Convergent curves on all training sets. (a) FEMNIST training set; (b) FEAIR training set; (c) EuroSAT training set; (d) RSSCN7 training set.
Table 1. Information on different types of NVIDIA Jetson Nano edge-smart devices.
Device | CPU | GPU | Memory
Jetson Nano Developer Kit | Quad-core Arm Cortex-A57 MPCore | 128-core NVIDIA Maxwell architecture GPU | 4 GB 64-bit LPDDR4
Jetson Orin Nano 4 GB | 6-core Arm Cortex-A78AE v8.2 | 512-core NVIDIA Ampere architecture GPU | 4 GB 64-bit LPDDR5
Jetson Orin Nano 8 GB | 6-core Arm Cortex-A78AE v8.2 | 1024-core NVIDIA Ampere architecture GPU | 8 GB 128-bit LPDDR5
Table 2. Ablation study on the datasets.
Dataset | Method | Accuracy | F1_score | Grad_diff | Rounds
FEMNIST | FedAvg | 0.7489 | 0.73 | 2.6874 | 144
FEMNIST | w/o DSM | 0.8142 | 0.82 | 1.3658 | 112
FEMNIST | w/o IMAS | 0.7774 | 0.79 | 0.5471 | 71
FLAIR | FedAvg | 0.7632 | 0.69 | 2.3987 | 150
FLAIR | w/o DSM | 0.8346 | 0.86 | 1.2471 | 74
FLAIR | w/o IMAS | 0.7936 | 0.81 | 0.2841 | 60
EuroSAT | FedAvg | 0.7702 | 0.61 | 2.0697 | 102
EuroSAT | w/o DSM | 0.8207 | 0.85 | 1.3698 | 96
EuroSAT | w/o IMAS | 0.8390 | 0.83 | 1.0214 | 82
RSSCN7 | FedAvg | 0.7495 | 0.70 | 2.6471 | 220
RSSCN7 | w/o DSM | 0.8109 | 0.88 | 1.2684 | 183
RSSCN7 | w/o IMAS | 0.7943 | 0.84 | 0.3207 | 150
Table 3. Comparative experiment on the datasets.
Dataset | Method | Accuracy | F1_score | Grad_diff | Rounds
FEMNIST | FedAvg | 0.7489 | 0.73 | 2.6874 | 144
FEMNIST | FedProx | 0.8017 | 0.79 | 2.3014 | 120
FEMNIST | FedCosA | 0.8124 | 0.79 | 0.6541 | 95
FEMNIST | FedBN | 0.8236 | 0.78 | 2.0674 | 114
FEMNIST | FedMAE | 0.8147 | 0.80 | 1.9874 | 126
FEMNIST | DSIA-FEI | 0.9087 | 0.90 | 0.1789 | 48
FLAIR | FedAvg | 0.7632 | 0.69 | 2.3987 | 150
FLAIR | FedProx | 0.8336 | 0.82 | 1.3654 | 130
FLAIR | FedCosA | 0.8324 | 0.79 | 2.0347 | 89
FLAIR | FedBN | 0.8321 | 0.80 | 2.9874 | 90
FLAIR | FedMAE | 0.7961 | 0.81 | 2.4789 | 103
FLAIR | DSIA-FEI | 0.9130 | 0.88 | 0.3654 | 51
EuroSAT | FedAvg | 0.7702 | 0.61 | 2.0697 | 152
EuroSAT | FedProx | 0.7804 | 0.78 | 2.0314 | 141
EuroSAT | FedCosA | 0.8265 | 0.82 | 1.3654 | 115
EuroSAT | FedBN | 0.7813 | 0.80 | 2.3654 | 132
EuroSAT | FedMAE | 0.8126 | 0.81 | 2.2745 | 145
EuroSAT | DSIA-FEI | 0.8792 | 0.86 | 0.4127 | 80
RSSCN7 | FedAvg | 0.7495 | 0.70 | 2.6471 | 220
RSSCN7 | FedProx | 0.7980 | 0.88 | 1.6874 | 183
RSSCN7 | FedCosA | 0.8147 | 0.85 | 1.8415 | 151
RSSCN7 | FedBN | 0.7869 | 0.80 | 3.2471 | 177
RSSCN7 | FedMAE | 0.8062 | 0.86 | 2.6815 | 187
RSSCN7 | DSIA-FEI | 0.8656 | 0.90 | 0.2314 | 122

Share and Cite

MDPI and ACS Style

Liu, S.; Shan, N.; Bao, X.; Xu, X. Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence. Sensors 2025, 25, 4752. https://doi.org/10.3390/s25154752


