Article

Research on Data Quality Governance for Federated Cooperation Scenarios

1 Faculty of Management and Economics, Kunming University of Science and Technology, Kunming 650504, China
2 Marxist College, Xiamen Institute of Technology, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(18), 3606; https://doi.org/10.3390/electronics13183606
Submission received: 13 August 2024 / Revised: 9 September 2024 / Accepted: 9 September 2024 / Published: 11 September 2024

Abstract:
Exploring data quality problems in the context of federated cooperation and adopting corresponding governance countermeasures can facilitate the smooth progress of federated cooperation and yield high-performance models. However, previous studies have rarely focused on quality issues in federated cooperation. To this end, this paper analyzes the quality problems that arise in federated cooperation scenarios and proposes a novel “Two-stage” data quality governance framework for them. The first stage performs local data quality assessment and optimization: quality is evaluated through a set of metric scoring formulas, and corresponding optimization measures are applied. The second stage introduces an outlier processing mechanism and proposes the Data Quality Federated Averaging (DQ-FedAvg) aggregation method to address model quality problems, so as to train a high-quality global model together with strong local models. Finally, experiments on real datasets compare model performance before and after quality governance and validate the advantages of the data quality governance framework in federated learning scenarios, so that it can be widely applied across domains. The governance framework detects and governs quality problems throughout the federated learning process, and model accuracy is improved.

1. Introduction

In the era of big data, data privacy and security have emerged as central issues across diverse sectors. Federated learning, a distributed machine learning technique, has garnered extensive interest and use because its local data retention safeguards data privacy. In the federated learning scenario, data is provided by the cooperating participants but remains local to ensure that it is not leaked. Nevertheless, the varying types and qualities of data among participants in federated training can compromise the effectiveness of the joint training process [1]. Low-quality data and unclear dependencies between data and algorithms can easily bias or distort AI algorithm results [2], posing significant challenges to data cooperation and utilization. Because the data sources for federated training differ in structure, quantity, and distribution, effective quality control of such multi-source heterogeneous data is essential in data governance. However, privacy sensitivity, data invisibility, and profit-seeking among participants present serious challenges to data quality governance in federated collaborative training. Low-quality data from participants can mislead federated cooperation and produce biased results and decisions, which makes data quality governance necessary to protect against all types of potential risks to data.
In the federated learning scenario, where data originates from participants who retain their data locally, quality issues primarily arise during the “data processing” and “federated training” stages. The first challenge concerns accountability: who is responsible when the data provided by a participant is anomalous or erroneous before training begins? Without proper data quality governance, it may be difficult to establish a causal relationship between events and failures. The second challenge occurs during federated training: due to variations in training data and algorithms, participants may produce local models of differing quality, which can result in poor global model aggregation, ultimately affecting model performance and hindering effective decision-making.
To address these challenges, this paper proposes a “Two-stage” quality governance process framework that draws on smart contracts from blockchain technology. In Stage 1, on the local side, participants use smart contracts to automatically evaluate and optimize the quality of their local datasets against indicators such as data integrity and accuracy; the smart contracts ensure that the evaluation process is transparent and fair, storing both evaluation and optimization results. In Stage 2, participants use smart contracts to automatically evaluate the quality of local models and detect abnormal parameter values, with evaluation metrics including MSE and R2; the smart contract stores the evaluation results and automatically triggers the governance mechanism to optimize the model parameters.
The goals of data quality governance in this study are twofold. The first is to ensure that the data quality of all participants meets the required standards before federated training, guaranteeing proper and compliant data usage and facilitating value co-creation through data utilization. The second is to optimize model performance and enhance the client experience by eliminating the negative impact of low-quality models on the global model, improving the accuracy of both the global and local models, and ultimately achieving the healthy development of federated learning.
The main contributions of this paper are as follows:
  • First, this paper proposes an innovative “Two-stage” quality governance framework, which can be widely applied to real-world federated learning cooperation scenarios.
  • Second, the DQ-FedAvg method is proposed, incorporating outlier processing and model quality assessment into a new federated aggregation approach.
  • Third, we conducted experiments on the proposed framework to verify its feasibility. Our results show that in the process of federated cooperation, it is necessary to check the data quality of each participant, and monitor the quality of the local models and local parameters provided by each participant in the training process, so as to promote the effective implementation of federated training.
The remainder of the paper is organized as follows: Section 2 reviews work on data quality governance and federated learning. Section 3 provides a detailed analysis of the quality issues in the federated cooperation scenario. Section 4 introduces the main contribution of the paper, the “Two-Stage” quality governance framework, and describes each stage in detail. Experiments and analysis of results are presented in Section 5. Section 6 concludes the paper.

2. Related Research

2.1. Federated Scenario

Federated learning, a pioneering privacy-preserving technology in AI, was first proposed by Google in 2016 to address Android system updates. Google suggested deploying neural network training on users’ devices, uploading only model parameters instead of user data, thereby safeguarding personal privacy. Federated learning technology enables the development and utilization of data on a “usable but not visible” basis, which is conducive to avoiding the risks faced by the original data in the process of flow and protecting private information and data security. The technology is now widely used in several areas. For example, in the field of health care, multiple medical institutions jointly train disease prediction models [3], or in the financial field, different financial institutions jointly train economic prediction models. Zhang et al. [4] proposed a federated learning application framework based on “element-process”, constructed a “4 + 4” technology application model, and promoted learning from focusing on technology and theoretical research to practical application. T.A. et al. [5] analyzed the Indian market and used data from multiple sources to train a global model for EV sales forecasting using a federated averaging algorithm. Jia et al. (2024) [6] proposed a multi-task federated learning framework for industrial applications, enabling multiple requesters to run different tasks simultaneously while ensuring user privacy and high global model accuracy. Liu et al. (2024) [7] introduced a robust federated learning method based on geometric and quality indicators, aiming to solve the problem of poisoning attacks caused by malicious clients in federated learning. Yang et al. (2024) [8] proposed a novel framework that combines federated learning with blockchain, using quality-based reputation evaluation, reputation-based consensus methods, and contribution and reputation-based incentive mechanisms.
Based on the above overview of federated learning research, this paper defines a federated scenario as one in which multiple independent entities—such as organizations, devices, or institutions—collaborate to achieve a common goal by training a global model without sharing raw data. These entities generate processed intermediate results (e.g., model parameters or partial models) and transmit them to a central coordinator, such as a government organization, for integration.
As federated learning continues to evolve across diverse application scenarios, future research will expand in various directions. For example, this paper [9] proposes a framework called FedNor, which effectively improves the resilience and accuracy of federated learning in the face of data quality problems and Byzantine attacks. It demonstrates the cutting-edge research progress in data quality protection and improvement. The article [10] suggests that future studies could explore powerful and explainable anomaly detection and reputation-scoring mechanisms to identify and filter out malicious clients, addressing the challenges of heterogeneous client data quality. The paper [11] highlights system robustness as an underexplored area in federated learning, calling for measures such as outlier detection to achieve robust FL. In light of this, this paper discusses data quality governance under federated learning to achieve robust FL.

2.2. Data Quality Governance

(1) Connotation
Data quality governance aims to ensure that data meets the needs of a specific purpose and achieves its goals through effective quality governance. Data quality refers to the extent to which data meets the needs of a specific purpose [12]. A good data quality governance mechanism not only improves the consistency and integrity of data but also improves model accuracy and promotes the stable operation of the entire data ecosystem.
In federated learning scenarios, data quality governance is crucial. Since federated learning relies on multiple participants to share and co-train the model, the data quality of all parties will directly affect the performance of the global model. Therefore, establishing a robust data quality governance approach and ensuring that all parties provide high-quality data for training is key to the success of federated cooperation.
(2) Data Quality Governance
In the field of machine learning, data quality is key to model success: high-quality data improves the accuracy and reliability of algorithms, while low-quality data leads to poor model performance. Liu et al. [13] systematically proposed a data quality governance framework for materials research, addressing data quality and quantity issues to enhance the application of machine learning in materials development. Article [14] describes a governance architecture for adaptive systems in edge cloud environments that can improve the performance of machine learning models by ensuring data quality through real-time monitoring and feedback mechanisms.
With the development of machine learning technology, federated learning, an emerging distributed learning method, allows multiple participants to train models collaboratively while keeping their respective data local and private. This paradigm shift brings new challenges and opportunities for data quality governance. However, research on data quality governance under federated learning remains limited. On applying data quality governance in federated learning, the paper [15] proposes applying data governance to federated learning, suggesting it could enhance model development and improve model quality, though it lacks specific implementation details. The article [16] proposes that when applying DG to artificial intelligence (AI), not only the raw data but also the quality of the generated ML model needs to be controlled. Navaz et al. [17] proposed using a federated incentive framework to evaluate client data quality in edge computing scenarios and improving machine learning model performance by improving data quality. Chen et al. [18] proposed FeLeDetect, a federated learning-based method for cross-source data error detection, designed to enhance data quality and improve the accuracy and efficiency of data governance through multi-source data collaboration. The paper [19] introduces the Federated Data Quality Assessment (FedDQA) method, which enhances model performance and assessment accuracy by evaluating data quality at the instance, feature, and participant levels. The paper [20] develops a data quality evaluation mechanism based on the Web of Objects (WoO) platform for healthcare applications, which helps to improve the quality of the global model and provide more efficient applications. Bejenar et al. [21] proposed aggregation methods based on quality model evaluation, FedAcc and FedAccSize, which can effectively improve the robustness and overall performance of federated learning models. Table 1 summarizes the contributions of this paper compared to existing studies.
The studies above have the following shortcomings. First, the dimensions of data quality evaluation at the local end are few. Second, previous studies have paid little attention to the model quality of participants during the federated process. Third, few studies have developed specific practices. Combining the data quality problems faced in local training with the model quality problems arising during federated training, this paper proposes a “Two-stage” quality governance framework. The first stage performs quality evaluation on the local client: quality indicators that affect federated training are analyzed and screened according to the application scenario, a quantitative evaluation method is applied to each dimension indicator, and data quality optimization methods are used for governance. This yields a high-quality dataset for each participant and lays the foundation for the subsequent federated cooperative training process. In the second stage, abnormal local model parameter values are identified, and the federated aggregation algorithm DQ-FedAvg is designed and improved based on model quality assessment, so as to control the impact of low-quality local model parameters and obtain a high-quality global model and high-performance local models.

2.3. Blockchain Technology

Blockchain technology is a decentralized, distributed ledger system that ensures data integrity and security through immutable blockchain storage, making it highly trustworthy. This technology leverages smart contracts to manage data-sharing rules, enhancing safety and reliability while preventing data misuse and improving collaboration efficiency among stakeholders. Additionally, smart contracts embedded in the blockchain automatically establish trusted agreements between requesters and participants, driven by predefined code. This provides a secure, efficient foundation for various use cases, enabling process automation and fostering trust between parties. The blockchain-assisted Distributed Signature (IBS) scheme proposed in article [22] enhances the security and legitimacy of NFT (non-fungible token) transactions, reduces the possibility of illegal activities, and ensures that users’ digital asset ownership is not attacked. The paper [23] integrates blockchain and smart contracts to create a decentralized ecosystem for drone authentication, transforming the trusted party’s execution logic into automated smart contracts (SCs) to enhance the security and decentralization of UAV systems. This paper [24] proposes a blockchain-based mobile crowdsourcing framework (BSIF), using smart contracts to automate task allocation and data validation. Smart contracts handle task publishing, assignment, and data quality evaluation, ensuring participant privacy and data security throughout the crowdsourcing process. By exploring the above-mentioned applications of blockchain and smart contracts, this paper explores introducing smart contracts into data quality governance within federated learning. This approach enables decentralization, automated evaluation, and optimization, ensuring transparency in data quality governance while preventing data tampering and unfair assessments.

3. Problem Analysis

3.1. Stage 1 Problem Analysis

In a federated training task, the quality of participants’ local data has a significant impact on global model training. According to the literature, common data quality problems include missing values, outliers, and duplicate values, which correspond to quality dimensions of datasets such as completeness, accuracy, repeatability, consistency, and timeliness. These problems also appear in participants’ local datasets, so the datasets used for participants’ model training must likewise guarantee completeness, consistency, reliability, relevance, and timeliness. In this paper, the data quality problems of participant datasets that affect federated learning are organized along two first-level dimensions: intrinsic quality (the quality characteristics inherent to each participant’s dataset) and contextual quality (the applicability of participant data to a specific task or scenario).
Through literature analysis and by determining quality dimension indicators according to requirements and application scenarios, we have identified the quality dimension indicators that influence participants in federated training, along with the reasons for their selection, as shown in Table 2.
To address these quality issues, participants employ quality assessment methods to evaluate their local datasets and implement optimization measures to improve datasets that do not meet the quality standards for specific indicators.

3.2. Stage 2 Problem Analysis

During federated learning training, participants upload the parameters of their locally trained models to the server, and the server updates the global model through the federated aggregation algorithm; this constitutes one round of federated training. If the quality of the local model parameters uploaded by the parties is poor, it can degrade the quality of the final global model and impair its generalization ability. To address unstable model performance across different datasets, this paper introduces multi-dimensional indicators to evaluate model quality: throughout the training process, the local and global ML models can be tested against mutually agreed ML quality requirements to ensure consistency of the model on different datasets. In addition, because of existing data quality problems and the quality problems arising in the federated training process, the model output can be biased. To mitigate this, this paper proposes a model adjustment strategy that adjusts weights based on the evaluated model quality results to prevent overfitting or underfitting.

4. “Two-Stage” Quality Governance Framework

4.1. General Framework

The overall framework is shown in Figure 1. After the participants agree on the four principles of goal consistency, value co-creation, risk sharing, and unified processing, the framework is divided into three modules, whose main steps are as follows:
Locally trained data quality governance module: In the initial stage of federated learning, each participant needs to process their data locally. This module is responsible for evaluating the integrity, consistency, and accuracy of the local data of the participants, and then automatically performs necessary governance operations such as data cleaning, denoising, and completion based on the problems.
Data quality governance module for federated training: During federated learning training, the quality of the local models trained by different participants directly affects the performance of the global model. This module introduces an outlier-handling mechanism for local model parameters and regularly evaluates and corrects model quality problems to ensure the quality of each participant’s local model and of the global model.
Smart Contract Module: This module is primarily used to automate protocols and tasks related to data quality governance. Through smart contracts, it ensures that each participant follows the federated learning protocol, automatically records the quality assessment of each participant, and triggers corresponding data governance measures.
1. Principles
  • Goal consistency
The premise of federated cooperation is that participants must share common goals, not only to train a machine learning model but also to obtain a model that is suitable for the needs of the different participants. Before initiating the federated machine learning process, federated partners must agree on what they want to achieve.
  • Value co-creation
In the federated scenario, participants join the training to achieve a goal, and if they do not obtain satisfactory results, they will not readily participate in the cooperation. Value co-creation theory emphasizes that value creation results from the joint integration and development of data resources by all participants in the value network, and that cooperation between subjects is the internal action logic of value co-creation. Therefore, the second principle of governance is multi-party collaboration in governance to achieve value co-creation.
  • Risk sharing
The principle of risk sharing requires all parties to jointly take responsibility for data authenticity, security, and model training. This helps to prevent legal, moral, and economic risks while maximizing the benefits of collaboration.
  • Unified processing
As the server, the government organization needs to draft a smart contract stipulating that all parties involved in the training adopt unified processing rules and upload the processing results to the server, so that the data can be kept locally to protect data privacy while the uploaded results can be traced back to the source when a problem arises in the cooperation. Therefore, the fourth principle of quality governance in the federated scenario is unified processing, which ensures the consistency and credibility of data quality and improves the efficiency of data cooperation.

4.2. Smart Contract

As an important application of blockchain technology, a smart contract is a computer program that automatically executes a contract. This automated execution makes contract performance more efficient and reduces human error and disputes. Smart contracts move the federated learning process toward intelligence, automation, and security; introducing smart contract technology into data quality governance for federated learning allows the calculation results of the governance process to be saved and the optimization process to be executed automatically, safeguarding federated learning. The steps to achieve automatic execution through smart contracts are as follows: (1) Contract design. According to the quality governance objectives of the federated learning process, a smart contract agreement is signed between the parties, covering the participants in federated learning, the training process, and so on. (2) Writing the smart contract. Following the business process of quality governance, quality inspection and quality optimization functions are defined; the quality evaluation results of the local and training phases are automatically computed and stored, and then optimized and archived according to the quality optimization functions. (3) Deploying the smart contract. After the smart contract is written, it is deployed on the blockchain. (4) Automatic execution. The smart contract automatically executes its clauses based on pre-set conditions.
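The contract logic in steps (1)-(4) can be emulated off-chain for illustration. The following Python sketch (class and method names are hypothetical, not from the paper, and this is not an actual on-chain contract) shows the two behaviors that matter for governance: evaluation results are recorded append-only, and optimization is triggered automatically when a pre-set condition is met.

```python
class QualityContract:
    """Off-chain emulation of the quality-governance contract logic:
    results are recorded append-only, and optimization is triggered
    automatically when a pre-set condition is met."""

    def __init__(self, threshold: float = 0.8):
        # Pre-set condition agreed at contract-design time (step 1)
        self.threshold = threshold
        # Append-only record of evaluation results (step 2: compute and store)
        self.ledger = []

    def submit_score(self, participant: str, metric: str, score: float) -> bool:
        """Record a quality score; return True if governance is triggered."""
        self.ledger.append({"participant": participant,
                            "metric": metric, "score": score})
        # Step 4: automatic execution when the condition is met
        return score < self.threshold


# Example: a completeness score below the agreed threshold triggers optimization
contract = QualityContract(threshold=0.8)
needs_optimization = contract.submit_score("participant_A", "completeness", 0.75)
```

The 0.8 threshold is an illustrative value; in practice it would be fixed by the agreement the participants sign before training.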

4.3. Stage 1: Local Data Quality Governance

Each client’s data can contribute to data intelligence, but its effectiveness depends on whether the local data meets specific quality standards. Prior to collaboration, participants perform data quality governance according to the unified rules established by smart contracts. This process involves two main steps: first a quality assessment, then the optimization of datasets that do not meet the required standards. While participants in federated learning cannot share their raw data, they can still exchange feature data and other quality metrics. Therefore, in the first stage, each participant assesses and optimizes its local data quality based on predefined quality governance rules. The definitions, sources, and measurements of the data quality indicators are shown in Table 3:
1. Indicator Dimension
Table 3. Evaluation system of data quality indicators.

| One-Dimensional | Two-Dimensional | Description of the Metric | Measure | Indicator Sources |
|---|---|---|---|---|
| Intrinsic quality | The amount of data | The total number of entries in the dataset provided by the participant | Sum of the data entries in the dataset | [25,26] |
| Intrinsic quality | Data integrity | Whether there are null values in the dataset; measures whether the data is complete | Quantitative assessment | [25,26,27,28,29,30,31,32] |
| Intrinsic quality | Data duplication | Whether there are duplicate data entries in the dataset provided by the participant | Quantitative assessment | [29,30,31] |
| Intrinsic quality | Data accuracy | A measure of how accurately the data reflects the actual situation | Quantitative assessment | [25,26,27,28,30,31,32] |
| Intrinsic quality | Data consistency | Consistency in data fields and formats, as well as logical consistency between data | Quantitative assessment | [25,26,27,28,29,30,31,32] |
| Intrinsic quality | Degree of structure | How well the data is organized and formatted when stored locally | Quantitative assessment | [28] |
| Contextual quality | Timeliness | How frequently the data is updated and with what time delay | Quantitative assessment | [25,26,28,29,30,31,32] |
2. Determination of assessment methods
  • Completeness score S_i. Each participant calculates the completeness of its training dataset by counting the number of missing values in the sample data and computing the ratio of null entries to the total number of entries. The formula is as follows:
    S_i = 1 - D_n / D_t
    where D_n is the number of null values in sample dataset D, and D_t is the total number of entries in sample dataset D. For example, if Participant A has D_t = 2000 samples containing D_n = 300 null values, its completeness score is 1 - 300/2000 = 0.85; if Participant B has D_t = 2000 samples containing D_n = 500 null values, its completeness score is 1 - 500/2000 = 0.75. The higher the completeness score, the more complete the dataset and the easier it is to process.
  • Repeatability score S_r. Each participant counts the duplicate samples across the feature columns of its dataset, summed and denoted D_r, and computes the ratio of duplicate samples to the total number of samples:
    S_r = 1 - D_r / D_t
    where D_r is the total number of duplicate samples in sample data D, and D_t is the total number of samples in dataset D. The higher the repeatability score, the fewer duplicates appear in the sample.
  • Accuracy score S_a. Accuracy measures the degree to which each feature dimension of the data conforms to the actual situation. In this paper, accuracy degradation caused by values that do not conform to the normal distribution and patterns of the dataset is treated as an outlier problem, so the outliers of each feature dimension are counted. For continuous data, we use the interquartile range (IQR) method (a non-parametric method) to identify outliers. For encoded categorical data, outliers are defined as values falling outside the range of valid encoded values (exceeding the upper or lower limits, or with undefined encoding). The proportion of outlier samples in each feature dimension relative to the total sample size is then calculated, and data accuracy is evaluated from it:
    S_a = 1 - D_o / D_t
    where D_o is the number of abnormal samples in sample data D, and D_t is the total number of samples in dataset D. The higher the accuracy score, the higher the quality of the data provided by the participant.
  • Consistency score S_c. The data consistency indicator is evaluated by checking the consistency of data values and data formats: value consistency checks whether the value range of the same data field is consistent across datasets or sources, and format consistency checks whether the format and unit of a data field are consistent. The number of entries in feature columns that violate value or format consistency is counted, together with its proportion of the total number of entries:
    S_c = 1 - F_c / D_t
    where F_c is the number of entries in the feature columns that do not satisfy value and format consistency, and D_t is the total number of samples in dataset D. The higher the consistency score, the easier the data is to use for model training.
  • Degree-of-structuring score S_s. Participants evaluate the degree of structuring of their respective federated training datasets. Structured data refers to data stored in databases that can be logically expressed in a two-dimensional table structure. In this paper, the degree of structuring is assessed by counting the total amount of data in formats other than “int64” and “float64”:
    S_s = 1 - D_s / D_t
    where D_s is the number of data samples in unstructured formats, and D_t is the total number of samples in sample dataset D. The higher the structuring score, the easier the data is to process, indicating higher quality of the training dataset.
  • Timeliness score S_t. Timeliness is judged from the recency of the dataset or the time span it covers; the newer the data, the more it can enhance the generalization ability of the model.
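As an illustration, the intrinsic-quality scores above can be computed on a pandas DataFrame. This is a minimal sketch under our own interpretation of the counting units where the text leaves them open (e.g., nulls and outliers are counted per row, and the structuring score is taken over cells); `quality_scores` is a hypothetical helper, not code from the paper.

```python
import numpy as np
import pandas as pd

def quality_scores(df: pd.DataFrame) -> dict:
    """Stage-1 quality scores, each of the form S = 1 - D_x / D_t."""
    d_t = len(df)  # total number of samples D_t
    # Completeness S_i: rows containing at least one null value
    s_i = 1 - df.isnull().any(axis=1).sum() / d_t
    # Repeatability S_r: fully duplicated rows
    s_r = 1 - df.duplicated().sum() / d_t
    # Accuracy S_a: rows flagged as outliers by the IQR rule (numeric columns)
    num = df.select_dtypes(include=[np.number])
    q1, q3 = num.quantile(0.25), num.quantile(0.75)
    iqr = q3 - q1
    outliers = ((num < q1 - 1.5 * iqr) | (num > q3 + 1.5 * iqr)).any(axis=1)
    s_a = 1 - outliers.sum() / d_t
    # Degree of structuring S_s: share of cells in non-int64/float64 columns
    unstructured_cells = sum(d_t for c in df.columns
                             if str(df[c].dtype) not in ("int64", "float64"))
    s_s = 1 - unstructured_cells / (d_t * len(df.columns))
    return {"completeness": s_i, "repeatability": s_r,
            "accuracy": s_a, "structuring": s_s}
```

On a toy dataset of four rows with one null entry, the completeness score is 1 - 1/4 = 0.75, matching the pattern of the worked example above.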
3. Optimization Method
Quality optimization is one of the key aspects of governance. To remedy the problems found in the above data quality evaluation, the local optimization method targets the intrinsic quality indicators of the dataset and applies a one-to-one optimization measure for each indicator: completeness via missing-value imputation [33], repeatability via duplicate-sample deduplication [34], accuracy via outlier removal [35], and consistency via data standardization and normalization.
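A minimal sketch of these one-to-one optimization measures is shown below. The concrete rules (mean imputation, exact-duplicate removal, a 3-sigma outlier cut, min-max normalization) are illustrative choices; the paper does not fix the exact variants.

```python
import pandas as pd

def optimize_local_data(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the Stage-1 one-to-one optimization measures."""
    out = df.copy()
    num = out.select_dtypes(include="number").columns
    # Completeness: impute missing numeric values with the column mean [33]
    out[num] = out[num].fillna(out[num].mean())
    # Repeatability: drop exact duplicate samples [34]
    out = out.drop_duplicates()
    # Accuracy: remove 3-sigma outliers in numeric columns [35]
    for col in num:
        mu, sigma = out[col].mean(), out[col].std()
        if sigma > 0:
            out = out[(out[col] - mu).abs() <= 3 * sigma]
    out = out.reset_index(drop=True)
    # Consistency: min-max normalization onto a common scale
    for col in num:
        rng = out[col].max() - out[col].min()
        if rng > 0:
            out[col] = (out[col] - out[col].min()) / rng
    return out
```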

4.4. Stage 2: Federated Training Phase

After receiving the tasks issued by the federated organizer, each participant processes and trains on its local data, obtaining the local models $M = \{m_1, m_2, \ldots, m_n\}$ and uploading the local model parameters $\Omega = \{\omega_1, \omega_2, \ldots, \omega_n\}$. There are $j$ indicators $A = \{a_1, a_2, \ldots, a_j\}$ for evaluating model quality, and $G = (g_{ij})$ denotes the evaluation value of the client-trained model $m_i$ on metric $a_j$. Since model quality is closely tied to the evaluation results on the test set, the indicators chosen in this paper are the Mean Square Error (MSE, $a_1$) and the coefficient of determination $R^2$ ($a_2$) [36]. After round $r$ of training by participant $i$, the MSE obtained on the test set is $g_{i1}^r$ and the $R^2$ is $g_{i2}^r$. We denote the quality evaluation value of the round-$r$ model as $M_q^r$, computed as follows:
$$M_q^r = \frac{g_{i1}^r}{\sum_{i=1}^{N} g_{i1}^r} + \frac{g_{i2}^r}{\sum_{i=1}^{N} g_{i2}^r}$$
This quality value is recorded every round and compared with the previous round's result to detect anomalies in model performance promptly. Monitoring these metrics regularly identifies poorly performing models in the federated collaborative environment and supports quality governance, which in "Stage 2" is carried out mainly by the DQ-FedAvg aggregation algorithm. The core idea of DQ-FedAvg is that the server first applies quality management to the uploaded model parameters and then aggregates the global model parameters by weighted averaging; the improved global parameters are sent back to all participants, and the process iterates until convergence. The key difference from the traditional FedAvg method is that DQ-FedAvg incorporates an outlier processing mechanism and a model quality evaluation function that adjust and optimize the local model parameters before the weighted-average aggregation.
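Once every client's test-set MSE and $R^2$ have been gathered, the per-round quality value $M_q^r$ defined above is straightforward to compute. A sketch (the function name and list-based interface are assumptions for illustration):

```python
def model_quality(mse_all, r2_all, i):
    """M_q^r for client i: each metric value is normalized by its sum over
    all N clients and the two shares are added, as in the equation above.
    mse_all[i] corresponds to g_i1^r and r2_all[i] to g_i2^r."""
    return mse_all[i] / sum(mse_all) + r2_all[i] / sum(r2_all)
```

Since the value itself mixes an error metric with a fit metric, it is its round-to-round *change* that the algorithm below acts on.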
First, before uploading its local model parameters, each participant performs outlier detection and repair, preventing outliers and low-quality parameters from degrading the training and performance of the global model. The procedure is as follows: for the round-$r$ local parameters $\omega_i^r$ obtained by local training, compute the Z-score [37] of each parameter $k$:
$$z_k = \frac{\omega_{i,k}^r - \mu_k}{\sigma_k}$$
Parameters whose Z-score magnitude exceeds the threshold $Z_o$ are labeled as outliers and repaired using the mean value:
$$\omega_{i,k}^r = \mu_k \quad \text{if } |z_k| > Z_o$$
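The detection-and-repair rule can be sketched as follows, assuming the per-parameter reference statistics $\mu_k$ and $\sigma_k$ are available; the default threshold $Z_o = 3$ is an illustrative choice.

```python
import numpy as np

def repair_outliers(params, mu, sigma, z_threshold=3.0):
    """Label parameters whose |z_k| exceeds Z_o as outliers and repair
    them with the mean value mu_k (the Z-score rule above)."""
    params, mu, sigma = (np.asarray(a, dtype=float) for a in (params, mu, sigma))
    z = np.zeros_like(params)
    ok = sigma > 0                      # avoid division by a zero deviation
    z[ok] = (params[ok] - mu[ok]) / sigma[ok]
    repaired = params.copy()
    outliers = np.abs(z) > z_threshold
    repaired[outliers] = mu[outliers]   # mean-value repair
    return repaired
```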
Then, after detection and repair, the participant uploads its local parameters to the server, which trains the global model through the improved federated aggregation algorithm DQ-FedAvg (Algorithm 1). The DQ-FedAvg process is as follows:
Step 1: Initialize the global model parameters $\Omega^0$.
Step 2: In round $r$, each client $i$ trains locally to obtain the local model parameters $\omega_i^r$ and the model quality evaluation value $g_i^r$:
$$\omega_i^r, g_i^r = \mathrm{client}_i.\mathrm{train}(\Omega^{r-1})$$
Step 3: Each client compares the current round's quality value $M_q^r$ with the previous round's $M_q^{r-1}$ and calculates the change $\Delta g_i^r = M_q^r - M_q^{r-1}$.
Step 4: If $\Delta g_i^r < 0$, the quality of the model is considered to have declined in this round, and the weight is adjusted or the model eliminated. The adjusted weight $\bar{\omega}_i^r$ is calculated as follows:
$$\bar{\omega}_i^r = \begin{cases} \omega_i^r, & \text{if } \Delta g_i^r \ge 0 \\ \omega_i^r \cdot \alpha, & \text{if } \Delta g_i^r < 0 \end{cases}$$
Here $\alpha$ is the adjustment factor (usually $0 < \alpha < 1$).
Step 5: Update the global model. Using the adjusted weights $\bar{\omega}_i^r$, perform a weighted average and update the global model parameters $\Omega^r$:
$$\Omega^r = \frac{1}{N} \sum_{i=1}^{N} \bar{\omega}_i^r$$
Algorithm 1. Aggregation method: DQ-FedAvg
1: Server steps:
2: initialize the global model with parameters $\omega^0$
3: for $r = 1, \ldots, R$ do:
4:   for $i \in [1, N]$ do:
5:     client $i$ trains the model locally and obtains the model parameters
6:     client $i$ updates the local model parameters using the outlier handling mechanism
7:     compute the quality of client $i$'s local model, $M_q^r = g_{i1}^r(\mathrm{MSE}) + g_{i2}^r(R^2)$
8:     compare with the previous round's local model quality, $\Delta g_i^r = M_q^r - M_q^{r-1}$
9:     determine whether $\Delta g_i^r$ is greater than 0, and adjust the model parameters according to Equation (10)
10:  aggregate the quality-governed model parameters into the global model
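Steps 4-5 of the aggregation, i.e. down-weighting clients whose quality declined and then averaging, can be sketched as follows. The list-based interface and the plain $1/N$ average are assumptions consistent with the update formula for $\Omega^r$; $\alpha = 0.5$ is an illustrative default.

```python
import numpy as np

def dq_fedavg_round(local_params, delta_quality, alpha=0.5):
    """One server-side aggregation round of DQ-FedAvg: scale the parameters
    of clients whose quality change Δg_i^r is negative by the adjustment
    factor alpha (0 < alpha < 1), then average to obtain Ω^r."""
    adjusted = [w if dg >= 0 else w * alpha
                for w, dg in zip(local_params, delta_quality)]
    return np.mean(adjusted, axis=0)
```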

5. Experiments

5.1. Experimental Data

To verify the impact of the "Two-stage" data quality governance framework on the federated cooperation scenario, the scenario discussed in this paper is the collaborative construction of pricing models between platforms: the platforms, as the participants of federated learning, construct price prediction models based on their historical transaction datasets. We therefore collected two datasets with historical transaction information: (1) 5819 records from the data service module of the Guoxin Youyi big data trading platform (youedata.com, accessed on 5 September 2024); (2) 797 records from the Jingdong Wanxiang big data trading platform (https://wx.jdcloud.com, accessed on 5 September 2024). The experiments in this section assume that all participants are honest [38].

5.2. Details of the Experiment

1. Data Preprocessing
To make the dataset more suitable for the federated learning model, we preprocessed it as follows: (1) deleted feature dimensions in the participant datasets that were not used as model features; (2) applied one-hot encoding to the categorical feature columns; (3) adjusted feature columns in different formats to a unified standard; (4) applied a Box-Cox transformation to the target variable columns of the two datasets so that they approximately follow a normal distribution; (5) standardized the datasets with StandardScaler() to unify the calculation caliber. In addition, to make federated training well posed, we adopt a feature filtering strategy that ensures all participants share the same input dimensions during training. This allows model training and parameter aggregation to proceed correctly, avoids training errors and performance degradation caused by mismatched feature dimensions, and enables consistent, comparable analysis among participants. Meanwhile, to simulate the different sample sizes of the platforms in a real scenario, the Guoxin Youyi data were split into 3 local clients at proportions of 50%, 30%, and 20%, and the Jingdong Wanxiang data into 2 local clients at 60% and 40%, named P1, P2, P3, P4, and P5, respectively. Finally, for ease of implementation, this part of the experiment does not use network communication to simulate real server-client exchanges; instead, federated learning is simulated in a local loop.
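The preprocessing and client-partitioning steps might look roughly like this. This is a sketch in pure pandas/NumPy: the Box-Cox step (4) is omitted for brevity, and the column roles (a `price` target, categorical object columns) are hypothetical.

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, target: str) -> pd.DataFrame:
    """Sketch of steps (2), (3), (5): one-hot encode categorical columns,
    then standardize all feature columns (StandardScaler-style z-score)."""
    out = pd.get_dummies(
        df, columns=[c for c in df.columns if c != target and df[c].dtype == object]
    )
    feats = [c for c in out.columns if c != target]
    out[feats] = (out[feats] - out[feats].mean()) / out[feats].std(ddof=0)
    return out

def split_clients(df: pd.DataFrame, fractions):
    """Partition one platform's data into local clients,
    e.g. fractions=(0.5, 0.3, 0.2) for P1-P3."""
    df = df.sample(frac=1, random_state=0).reset_index(drop=True)
    bounds = np.floor(np.cumsum(fractions) * len(df)).astype(int)
    bounds[-1] = len(df)  # the last client takes the remainder
    parts, start = [], 0
    for b in bounds:
        parts.append(df.iloc[start:b].reset_index(drop=True))
        start = b
    return parts
```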
2. Model and parameter settings
In this paper, deep learning algorithms are selected to train the federated learning model because they have strong representation capabilities, can capture complex nonlinear relationships, adapt easily to different types of training tasks and data, and can share their updated weights in federated learning settings. A deep learning network (DL), a recurrent neural network (RNN), a long short-term memory network (LSTM), and a CNN-LSTM model are selected. The following conditions are set for each model's training: the deep learning models add a noise layer and gradient clipping; the number of clients participating in federated training is 5; each participant runs 5 local training iterations; the total number of federated communication rounds is 100; and the batch size is 64.
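For reference, the training conditions listed above can be gathered into a single configuration object (the key names are our own shorthand, not from the paper's code; the noise-layer and clipping details are not specified further in the paper):

```python
# Experimental conditions from Section 5.2 collected as one config dict
FED_CONFIG = {
    "models": ["DL", "RNN", "LSTM", "CNN-LSTM"],
    "privacy": ["noise_layer", "gradient_clipping"],
    "num_clients": 5,           # federated participants P1-P5
    "local_epochs": 5,          # local training iterations per round
    "communication_rounds": 100,
    "batch_size": 64,
}
```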

5.3. Experimental Evaluation Indicators

To compare model performance before and after quality governance, evaluation indicators are needed to measure the models' generalization ability. This paper uses the mean square error (MSE) and the coefficient of determination (R-squared, R2). First, since the federated scenario here is a regression case, the R2 value serves as the key index for evaluating model performance. Second, as the model's loss function, the MSE effectively guides the model to minimize the gap between predicted and actual values during training. The smaller the MSE, the better the generalization performance; the larger the R2, the better the training effect.
(1) Mean square error: the expected value of the squared deviation between the model's output and the actual observed value; that is, the squared differences between the predicted and actual values of each sample are averaged.
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_{test}^{i} - y_{pred}^{i} \right)^2$$
(2) Coefficient of determination: an index for evaluating the goodness of fit of a regression model, generally used to assess the agreement between predicted and actual values.
$$R^2 = 1 - \frac{\sum_{i} \left( y_{test}^{i} - y_{pred}^{i} \right)^2}{\sum_{i} \left( \bar{y} - y_{test}^{i} \right)^2}$$
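Both indicators are one-liners in NumPy; the following sketch matches the two formulas above term by term.

```python
import numpy as np

def mse(y_test, y_pred):
    """Mean square error over the test set."""
    y_test, y_pred = np.asarray(y_test, float), np.asarray(y_pred, float)
    return np.mean((y_test - y_pred) ** 2)

def r2(y_test, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_test, y_pred = np.asarray(y_test, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_test - y_pred) ** 2)
    ss_tot = np.sum((y_test - y_test.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```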

5.4. Experimental Results

(1) The results of the first stage of local quality governance
Through computational analysis, we find that the participants in federated learning face issues such as inconsistent data standards, outliers, and missing values, resulting in uneven data quality across participants. Therefore, the first-stage governance method is applied to assess and optimize the quality of the participants' raw data, enhancing the quality of the data entering federated modeling. Table 4 shows the changes in quality indicators obtained through the scoring formulas and optimization methods of the first-stage local governance process; arrows indicate an improvement in the indicator score. As the table shows, the data quality scores are significantly improved by the assessment formulas of Section 4.3 and the optimized governance process.
(2) Evaluation of quality governance effect
In this paper, we compare three settings: (1) the model trained on the original dataset without basic data quality governance, used as the experimental baseline; (2) the model trained on the dataset after the first stage of local data quality governance; and (3) the model trained after the second stage, the DQ-FedAvg algorithm. We analyzed the five clients before and after applying local DQ governance and federated DQ governance, using several prediction models: deep learning (DL), CNN-LSTM, long short-term memory networks (LSTM), and recurrent neural networks (RNN). First, we compare model performance before and after local quality governance, shown in Figure 2 and Figure 3. When the dataset is not governed, each participant's data cannot directly participate in training: noise, inconsistency, missing values, and other problems have a strongly negative impact on model performance; red in Figure 2 indicates a negative R2 value, i.e., an extremely poor training result. Second, we compare model performance before and after federated quality governance, shown in Figure 3 and Figure 4. With the second-stage quality governance measures during federated training, model performance generally improves, by up to 19%. Among the models, Fed_DL and Fed_LSTM achieve good accuracy in all experiments, mainly because deep networks and LSTMs can learn richer, more abstract feature representations and better adapt to complex data structures.
In addition, to verify the significance of the performance difference before and after governance, we use a paired-sample t-test: since the models before and after governance are evaluated on the same test set, their performance values form paired samples. The statistical results (t = 16.43, p < 0.05) show that the R2 values after quality governance are significantly higher than before.
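The paired-sample t statistic used here can be obtained from scipy.stats.ttest_rel; a dependency-free sketch of the same statistic (the list-of-scores interface is illustrative):

```python
import numpy as np

def paired_t(before, after):
    """Paired-sample t statistic on per-model performance values measured
    on the same test set before and after governance. scipy.stats.ttest_rel
    computes the same statistic together with a p-value."""
    d = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    n = len(d)
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```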
(3) Ablation analysis
In addition, to explore the impact of the different quality treatment processes on the model, we designed a series of ablation experiments to systematically evaluate the contribution of each stage to model performance. We define four configurations: FL, the federated learning base model after local quality governance; FL+OH, federated training with the outlier handling mechanism; FL+MQ, federated training with model-quality-based parameter adjustment; and FL+OH+MQ, federated training with both mechanisms. Using the mean square error (MSE) as the loss function, we progressively remove components of the model, record the MSE under each configuration, and plot the MSE over 100 communication rounds in Figure 5. Model performance fluctuates sharply under the different experimental conditions, but overall the MSE of every configuration with added modules is lower than that of the base FL model, indicating that each added module helps improve the prediction accuracy of the model.

6. Conclusions

Quality is fundamental to ensuring model validity, yet existing work offers no proposals for quality governance across multiple organizations. To address this challenge, this paper proposes a "Two-stage" quality governance framework for federated collaboration scenarios. The framework analyzes and governs the quality problems in federated collaboration, adopting a different governance process at each stage. Finally, case analysis verifies that the two-stage approach significantly improves the overall performance and fairness of participants in a federated collaboration scenario. This two-stage governance process not only improves data quality but also optimizes the overall performance of the federated learning system, further validating the value of data quality governance in federated collaboration.
The limitations of this article are as follows. Its assumptions rest on participants being honest; in reality, malicious participants may intentionally provide low-quality or misleading data, which would affect the performance of the final global model. In addition, this article realizes communication through multiple rounds of iteration without considering the parties' actual communication and computational efficiency. Future research can proceed in the following directions: first, introducing client selection for federated learning tasks into the quality governance process, using data quality and game theory to select high-quality clients and thereby improve the governance effect and model performance; second, exploring how the long-tail effect among participants affects federated learning tasks and how to handle it so that tasks execute efficiently; third, applying the quality governance framework to a wider range of domains.

Author Contributions

Conceptualization, J.S. and S.Z.; methodology, J.S.; software, S.Z.; validation, J.S., S.Z. and F.X.; formal analysis, J.S.; investigation, J.S.; resources, S.Z.; data curation, S.Z.; writing—original draft preparation, J.S. and S.Z.; writing—review and editing, J.S., S.Z. and F.X.; visualization, S.Z.; supervision, J.S.; project administration, F.X.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the National Natural Science Foundation of China (Nos. 72464018); Yunnan Province Applied Basic Research Key Program (Nos. 202401AS070112); Kunming University of Science and Technology Humanities and Social Sciences Cultivation Key Program (Nos. SKPY2D01).

Data Availability Statement

The data that support the findings of this study were collected from public sources via web scraping. These data are not publicly available because no platform is provided to store the data. However, a description of the data collection sources and the relevant data sources is provided in the manuscript. Researchers interested in accessing these data can contact the corresponding author for more information.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Peregrina, J.A.; Ortiz, G.; Zirpins, C. Data Governance for Federated Machine Learning in Secure Web-Based Systems. In Actas de las Jornadas de Investigación Predoctoral en Ingeniería Informática: Proceedings of the Doctoral Consortium in Computer Science (JIPII 2021), Online, 3–29 September 2021; Universidad de Cádiz: Puerto Real, Spain, 2021; pp. 36–39. ISBN 978-84-89867-47-5.
  2. Janssen, M.; Brous, P.; Estevez, E.; Barbosa, L.S.; Janowski, T. Data Governance: Organizing Data for Trustworthy Artificial Intelligence. Gov. Inf. Q. 2020, 37, 101493.
  3. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The Future of Digital Health with Federated Learning. Npj Digit. Med. 2020, 3, 1–7.
  4. Zhang, Z.; Zheng, D.; Zhang, C.; Huang, L.; Chen, G.; Huang, W. Literature Review on Federated Learning Application: Based on "Element-Process" Framework. J. Ind. Eng. Eng. Manag. 2023, 38, 14–30.
  5. Thiruneelakandan, A.; Umamageswari, A. Federated Learning Approach for Analyzing Electric Vehicle Sales in the Indian Automobile Market. In Proceedings of the 2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), Chennai, India, 1 November 2023; pp. 1–6.
  6. Jia, Y.; Xiong, L.; Fan, Y.; Liang, W.; Xiong, N.; Xiao, F. Blockchain-Based Privacy-Preserving Multi-Tasks Federated Learning Framework. Connect. Sci. 2024, 36, 2299103.
  7. Liu, S.; Xu, X.; Wang, M.; Wu, F.; Ji, Y.; Zhu, C.; Zhang, Q. FLGQM: Robust Federated Learning Based on Geometric and Qualitative Metrics. Appl. Sci. 2024, 14, 351.
  8. Yang, X.; Li, T. A Blockchain-Based Federated Learning Framework for Secure Aggregation and Fair Incentives. Connect. Sci. 2024, 36, 2316018.
  9. Xu, S.; Xia, H.; Zhang, R.; Liu, P.; Fu, Y. FedNor: A Robust Training Framework for Federated Learning Based on Normal Aggregation. Inf. Sci. 2024, 684, 121274.
  10. Kim, D.S.; Ahmad, S.; Whangbo, T.K. Federated Regressive Learning: Adaptive Weight Updates through Statistical Information of Clients. Appl. Soft Comput. 2024, 166, 112043.
  11. Prigent, C.; Costan, A.; Antoniu, G.; Cudennec, L. Enabling Federated Learning across the Computing Continuum: Systems, Challenges and Future Directions. Future Gener. Comput. Syst. 2024, 160, 767–783.
  12. Morbey, G. Data Quality for Decision Makers: A Dialog between a Board Member and a DQ Expert, 2nd ed.; Springer Gabler: Wiesbaden, Germany, 2013; ISBN 978-3-658-01822-1.
  13. Liu, Y.; Ma, S.; Yang, Z.; Zou, X.; Shi, S. A Data Quality and Quantity Governance for Machine Learning in Materials Science. J. Chin. Ceram. Soc. 2023, 51, 427–437.
  14. Pahl, C.; Azimi, S.; Barzegar, H.R.; Ioini, N.E. A Quality-Driven Machine Learning Governance Architecture for Self-Adaptive Edge Clouds. In Proceedings of the CLOSER 2022—12th International Conference on Cloud Computing and Services Science, Virtual, 26–28 April 2022; pp. 305–312.
  15. Peregrina, J.A.; Ortiz, G.; Zirpins, C. Towards Data Governance for Federated Machine Learning. In Proceedings of the Advances in Service-Oriented and Cloud Computing, Virtual, 22–24 March 2022; Zirpins, C., Ortiz, G., Nochta, Z., Waldhorst, O., Soldani, J., Villari, M., Tamburri, D., Eds.; Springer Nature: Cham, Switzerland, 2022; pp. 59–71.
  16. Peregrina, J.A.; Ortiz, G.; Zirpins, C. Towards a Metadata Management System for Provenance, Reproducibility and Accountability in Federated Machine Learning. In Proceedings of the Advances in Service-Oriented and Cloud Computing, Virtual, 22–24 March 2022; Zirpins, C., Ortiz, G., Nochta, Z., Waldhorst, O., Soldani, J., Villari, M., Tamburri, D., Eds.; Springer Nature: Cham, Switzerland, 2022; pp. 5–18.
  17. Navaz, A.N.; Serhani, M.A.; El Kassabi, H.T. Federated Quality Profiling: A Quality Evaluation of Patient Monitoring at the Edge. In Proceedings of the 2022 International Wireless Communications and Mobile Computing (IWCMC), Dubrovnik, Croatia, 30 May–3 June 2022; pp. 1015–1021.
  18. Chen, L.; Guo, Y.; Ge, C.; Zheng, B.; Gao, Y. Cross-Source Data Error Detection Approach Based on Federated Learning. J. Softw. 2023, 13, 1126–1147.
  19. Zhang, Z.; Chen, G.; Xu, Y.; Huang, L.; Zhang, C.; Xiao, S. FedDQA: A Novel Regularization-Based Deep Learning Method for Data Quality Assessment in Federated Learning. Decis. Support Syst. 2024, 180, 114183.
  20. Jeon, K.-C.; Han, G.-S.; Han, C.-Y.; Chong, I. Federated Learning Model for Contextual Sensitive Data Quality Applications: Healthcare Use Case. In Proceedings of the 2023 31st Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey, 5–8 July 2023; pp. 1–4.
  21. Bejenar, I.; Ferariu, L.; Pascal, C.; Caruntu, C.-F. Aggregation Methods Based on Quality Model Assessment for Federated Learning Applications: Overview and Comparative Analysis. Mathematics 2023, 11, 4610.
  22. Li, R.; Wang, Z.; Fang, L.; Peng, C.; Wang, W.; Xiong, H. Efficient Blockchain-Assisted Distributed Identity-Based Signature Scheme for Integrating Consumer Electronics in Metaverse. IEEE Trans. Consum. Electron. 2024, 70, 3770–3780.
  23. Wang, W.; Han, Z.; Gadekallu, T.R.; Raza, S.; Tanveer, J.; Su, C. Lightweight Blockchain-Enhanced Mutual Authentication Protocol for UAVs. IEEE Internet Things J. 2024, 11, 9547–9557.
  24. Wang, W.; Yang, Y.; Yin, Z.; Dev, K.; Zhou, X.; Li, X.; Qureshi, N.M.F.; Su, C. BSIF: Blockchain-Based Secure, Interactive, and Fair Mobile Crowdsensing. IEEE J. Sel. Areas Commun. 2022, 40, 3452–3469.
  25. Wang, R.Y.; Strong, D.M. Beyond Accuracy: What Data Quality Means to Data Consumers. J. Manag. Inf. Syst. 1996, 12, 5–33.
  26. Stahl, F.; Vossen, G. Data Quality Scores for Pricing on Data Marketplaces. In Proceedings of the Intelligent Information and Database Systems, Da Nang, Vietnam, 14–16 March 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 215–224.
  27. Yang, Y.; Yuan, Y.; Li, B. Data Quality Evaluation: Methodology and Key Factors. In Proceedings of the Smart Computing and Communication, Erode, India, 14–15 December 2018; Qiu, M., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 222–230.
  28. Cai, L.; Zhu, Y. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 2015, 14, 2.
  29. Xiaojuan, B.; Shurong, N.; Zhaolin, X.; Peng, C. Novel Method for the Evaluation of Data Quality Based on Fuzzy Control. J. Syst. Eng. Electron. 2008, 19, 606–610.
  30. An, X.; Huang, J.; Xu, J.; Wang, L.; Hong, X.; Wang, Z.; Han, X. Construction of Panoramic Big Data Quality Evaluation Indicator Framework. J. Manag. Sci. China 2023, 26, 138–153.
  31. Huang, Q.; Zhao, Z.; Liu, Z. Comprehensive Management System and Technical Framework of Data Quality in the Data Circulation Transaction Scenario. Data Anal. Knowl. Discov. 2022, 6, 22–34.
  32. Batini, C.; Cappiello, C.; Francalanci, C.; Maurino, A. Methodologies for Data Quality Assessment and Improvement. ACM Comput. Surv. 2009, 41, 16:1–16:52.
  33. Lin, W.-C.; Tsai, C.-F. Missing Value Imputation: A Review and Analysis of the Literature (2006–2017). Artif. Intell. Rev. 2020, 53, 1487–1509.
  34. Xia, W.; Jiang, H.; Feng, D.; Douglis, F.; Shilane, P.; Hua, Y.; Fu, M.; Zhang, Y.; Zhou, Y. A Comprehensive Study of the Past, Present, and Future of Data Deduplication. Proc. IEEE 2016, 104, 1681–1710.
  35. Hodge, V.; Austin, J. A Survey of Outlier Detection Methodologies. Artif. Intell. Rev. 2004, 22, 85–126.
  36. Sun, Y.; Zhao, G.; Liao, Y. Evolutionary Game Model for Federated Learning Incentive Optimization. J. Chin. Comput. Syst. 2024, 45, 718–725.
  37. Shalabi, L.A.; Shaaban, Z.; Kasasbeh, B. Data Mining: A Preprocessing Engine. J. Comput. Sci. 2006, 2, 735–739.
  38. Gao, L.; Li, L.; Chen, Y.; Zheng, W.; Xu, C.; Xu, M. FIFL: A Fair Incentive Mechanism for Federated Learning. In ICPP '21: Proceedings of the 50th International Conference on Parallel Processing, Lemont, IL, USA, 9–12 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–10.
Figure 1. Diagram of the two-phase data quality governance framework.
Figure 2. Multi-model performance graph before data quality governance.
Figure 3. Comparison of multi-model performance after local quality governance.
Figure 4. Comparison of multiple models after quality governance in the federated training stage.
Figure 5. Ablation analysis (MSE) results.
Table 1. A comprehensive comparison between this paper and existing studies.

| Study | Topic | Contributions | Limitations |
|---|---|---|---|
| [15,16] | How to improve the quality, traceability, and stability of a federated learning system through a data governance framework. | A preliminary data governance architecture covering model quality assessment, access control, etc. [15]; a comprehensive framework combining data governance and metadata management for federated machine learning [16]. | Limited to theoretical models; not yet put into practice or verified in practical scenarios. |
| [17] | Methods for quality assessment of patient monitoring data in an edge computing environment. | A federated data quality analysis model; experiments demonstrate the effect of FDQ analysis in improving data quality. | Focuses mainly on dataset quality; the data quality dimensions considered are incomplete. |
| [18] | How to use federated learning to improve the accuracy of cross-source data error detection while protecting data privacy. | The FeLeDetect method improves error detection accuracy under privacy protection using cross-source data, directly promoting data quality. | The method's limitations in error detection for specific data types are not explicitly discussed. |
| [19] | How to assess the quality of cross-source data while protecting data privacy. | Evaluates and controls data quality according to the training task, prevents dataset bias from affecting decision-making, and enhances the robustness and effectiveness of federated learning services. | The adaptability of FedDQA to other federated learning constructs and low-quality data warrants further investigation. |
| [20] | How to evaluate and improve data quality in federated learning for specific business applications, such as healthcare. | A new federated learning framework that considers data quality in the local context. | The generalizability of the model and its applicability to other fields have not been verified. |
| Ours | Data quality issues across federated collaboration scenarios and the respective governance strategies. | A "Two-stage" data quality governance framework addressing local data quality and model quality problems in federated training; feasibility and effectiveness verified through experiments. | Does not discuss what data quality governance should be adopted when there are malicious actors. |
Table 2. Selection of data quality indicators in the “First stage”.
Table 2. Selection of data quality indicators in the “First stage”.
| One-Dimensional | Two-Dimensional | The Reason Why These Metrics Were Chosen in the Federated Cooperation Scenario |
|---|---|---|
| Intrinsic quality | Amount of data | The amount of training data differs across participants, and so will the results; both the quantity and the quality of the training data affect the accuracy of the model |
| Intrinsic quality | Data integrity | Some fields may be missing in the data each client contributes to training, so missing values must be checked and the integrity of each participant's dataset considered as a whole |
| Intrinsic quality | Data duplication | Repeatability evaluation helps filter duplicate records out of the participating datasets, improving data quality management and the generalization ability of the model |
| Intrinsic quality | Data accuracy | Accuracy evaluation helps determine whether the data used to train the model is authentic and reliable, avoiding performance degradation caused by erroneous data |
| Intrinsic quality | Data consistency | Because the data is distributed among different participants, consistency evaluation ensures that the model can accurately extract information from the data |
| Intrinsic quality | Degree of structure | Structured data is easier to train on; unstructured data requires more preprocessing, which reduces efficiency and disrupts the cooperation process |
| Contextual quality | Timeliness | Timeliness evaluation helps determine whether the data is fresh enough to reflect the current situation |
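The first-stage intrinsic indicators in Table 2 can be scored as simple ratios. A minimal sketch of two of them — the paper's exact scoring formulas are not reproduced in this excerpt, so the ratio definitions below are illustrative assumptions:

```python
def integrity_score(records):
    """Share of non-missing field values across all records (assumed S_i definition)."""
    total = sum(len(r) for r in records)
    present = sum(1 for r in records for v in r.values() if v not in (None, ""))
    return present / total if total else 0.0

def repeatability_score(records):
    """Share of unique records: 1.0 means no duplicate rows (assumed S_r definition)."""
    if not records:
        return 0.0
    unique = {tuple(sorted(r.items())) for r in records}
    return len(unique) / len(records)

rows = [
    {"id": 1, "age": 34, "city": "Kunming"},
    {"id": 2, "age": None, "city": "Xiamen"},  # missing value lowers S_i
    {"id": 1, "age": 34, "city": "Kunming"},   # duplicate row lowers S_r
]
print(round(integrity_score(rows), 4))      # 8 of 9 field values present -> 0.8889
print(round(repeatability_score(rows), 4))  # 2 of 3 rows unique -> 0.6667
```

Scores like these are what each participant computes locally in the first stage, before any optimization measures are applied.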
Table 4. Table of changes in quality indicators.
| Index | Contrast | P1 | P2 | P3 | P4 | P5 |
|---|---|---|---|---|---|---|
| (Integrity) S_i | previous | 1.0000 | 1.0000 | 1.0000 | 0.9673 | 0.9628 |
| | after | 1.0000 | 1.0000 | 1.0000 | 1.0000 ↑ | 1.0000 ↑ |
| (Repeatability) S_r | previous | 1.0000 | 1.0000 | 1.0000 | 0.9540 | 0.9686 |
| | after | 1.0000 | 1.0000 | 1.0000 | 1.0000 ↑ | 1.0000 ↑ |
| (Consistency) S_c | previous | 0.3295 | 0.3289 | 0.3262 | 0.2281 | 0.2093 |
| | after | 1.0000 ↑ | 1.0000 ↑ | 0.9991 ↑ | 1.0000 ↑ | 1.0000 ↑ |
| (Structured) S_s | previous | 0.8235 | 0.8235 | 0.8235 | 0.5000 | 0.5000 |
| | after | 0.9231 ↑ | 0.9231 ↑ | 0.9231 ↑ | 0.8846 ↑ | 0.8846 ↑ |
| (Accuracy) S_a | previous | 0.9729 | 0.9742 | 0.9825 | 0.9775 | 0.9743 |
| | after | 0.9786 ↑ | 0.9778 ↑ | 0.9886 ↑ | 0.9811 ↑ | 0.9875 ↑ |
| (Timeliness) S_t | | 0.0351 | 0.0353 | 0.0338 | 0.5646 | 0.5578 |
| (Number) S_n | | 0.4408 | 0.2644 | 0.1765 | 0.0708 | 0.0474 |
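The jump in S_c from roughly 0.33 to 1.0 after governance is consistent with format standardization across participants. A minimal sketch of how a format-based consistency score might be computed — the date-pattern rule below is an illustrative assumption, not the paper's formula:

```python
import re

def consistency_score(values, pattern=r"^\d{4}-\d{2}-\d{2}$"):
    """Share of values matching the agreed format (assumed S_c definition);
    standardizing formats before training raises this score toward 1.0."""
    rx = re.compile(pattern)
    return sum(bool(rx.match(str(v))) for v in values) / len(values)

raw = ["2024-08-13", "13/08/2024", "Aug 13, 2024"]          # mixed formats
standardized = ["2024-08-13", "2024-08-13", "2024-08-13"]   # after cleaning
print(round(consistency_score(raw), 4))           # 0.3333
print(round(consistency_score(standardized), 4))  # 1.0
```

Running such a check before and after the first-stage optimization yields exactly the kind of previous/after comparison reported in Table 4.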
Share and Cite

Shen, J.; Zhou, S.; Xiao, F. Research on Data Quality Governance for Federated Cooperation Scenarios. Electronics 2024, 13, 3606. https://doi.org/10.3390/electronics13183606
