HT-Fed-GAN: Federated Generative Model for Decentralized Tabular Data Synthesis
Abstract
1. Introduction
1. We propose HT-Fed-GAN, a novel, practical federated conditional generative approach for synthesizing decentralized tabular data. To the best of our knowledge, this is the first work on privacy-preserving data synthesis over decentralized tabular datasets.
2. We propose a federated variational Bayesian Gaussian mixture model for extracting the multimodal distributions of decentralized tables without disclosing the data, which effectively eliminates mode collapse.
3. We propose a federated conditional one-hot encoding and a conditional sampling method for highly imbalanced categorical columns, which preserve the real distributions of the categorical attributes.
4. We conduct experiments on five real-world datasets; the results demonstrate that HT-Fed-GAN produces high-utility synthetic data while maintaining a high privacy level.
2. Related Works
3. Preliminaries
3.1. Variational Bayesian Gaussian Mixture Model
3.2. Differential Privacy
3.3. Membership Inference Attack
4. HT-Fed-GAN
1. Data Encoding. To eliminate the multiple-mode synthesis problem, the federated variational Bayesian Gaussian mixture model (Fed-VB-GMM, Algorithm 1) is designed to learn the multimodal distributions of the continuous columns from each client. The continuous columns are then encoded into input representations using Equation (24), which takes the extracted multimodal distributions as prior knowledge. In addition, a federated conditional one-hot encoding method is proposed to encode the discrete columns using Equation (25). Finally, each sample in the decentralized tables is represented as shown in Equation (26).
2. Federated GAN Training. To alleviate the problem of categorical imbalance across the clients, a conditional sampling method is proposed to rebalance the categorical distributions during federated training. In addition, to prevent the privacy leakage caused by the federated conditional GAN, a privacy-consumption-based federated conditional GAN training algorithm (Algorithm 2) is presented to flexibly control the privacy level of the GAN.
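As an illustration of the mode-specific encoding step, the sketch below encodes a continuous value against the modes learned by a Gaussian mixture, in the spirit of Equation (24). The function name and the 4-standard-deviation scaling are assumptions borrowed from CTGAN-style mode-specific normalization, not the paper's exact formula.

```python
import numpy as np

def encode_continuous(x, means, stds, weights):
    """Mode-specific normalization (illustrative sketch): pick the most
    responsible Gaussian mode for x, then normalize x within that mode."""
    means, stds, weights = map(np.asarray, (means, stds, weights))
    # responsibility of each mode for x (unnormalized Gaussian density)
    dens = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / stds
    k = int(np.argmax(dens))                 # chosen mode
    alpha = (x - means[k]) / (4 * stds[k])   # scalar value, roughly in [-1, 1]
    beta = np.eye(len(means))[k]             # one-hot mode indicator
    return alpha, beta

# example: two modes, as might be learned by (Fed-)VB-GMM
alpha, beta = encode_continuous(5.2, means=[0.0, 5.0], stds=[1.0, 0.5], weights=[0.5, 0.5])
print(alpha, beta)  # value normalized w.r.t. the second mode
```

The pair (alpha, beta) is what a row's continuous cell contributes to the final representation; the one-hot discrete encodings are concatenated alongside it, as in Equation (26).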
4.1. Federated Variational Bayesian Gaussian Mixture Model
Algorithm 1: Fed-VB-GMM

- Input: the decentralized dataset; Output: the learned mixture parameters
- Step 1 (Initialization): Server
- Step 2 (Variational E step): Client
- Step 3 (Variational M step): Server
- Step 4 (Variational M step): Client
- Step 5 (Convergence Check): Server
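Algorithm 1 alternates client-side E steps with server-side aggregation. The sketch below illustrates that communication pattern with a plain (non-Bayesian) federated EM round for a one-dimensional mixture; the real Fed-VB-GMM additionally maintains Dirichlet and Wishart posteriors over the mixture parameters, which are omitted here. All names are illustrative.

```python
import numpy as np

def client_e_step(X, means, stds, weights):
    """Client-side E step (sketch): compute responsibilities and local
    sufficient statistics; only these aggregates leave the client."""
    d = weights * np.exp(-0.5 * ((X[:, None] - means) / stds) ** 2) / stds
    r = d / d.sum(axis=1, keepdims=True)       # responsibilities
    Nk = r.sum(axis=0)                         # soft counts per mode
    Sx = (r * X[:, None]).sum(axis=0)          # weighted sums
    Sxx = (r * X[:, None] ** 2).sum(axis=0)    # weighted squared sums
    return Nk, Sx, Sxx

def server_m_step(stats):
    """Server-side M step: aggregate client statistics, update parameters."""
    Nk = sum(s[0] for s in stats)
    Sx = sum(s[1] for s in stats)
    Sxx = sum(s[2] for s in stats)
    means = Sx / Nk
    stds = np.sqrt(np.maximum(Sxx / Nk - means ** 2, 1e-6))
    weights = Nk / Nk.sum()
    return means, stds, weights

# two clients holding disjoint shards of a bimodal continuous column
rng = np.random.default_rng(0)
shards = [rng.normal(0, 1, 500), rng.normal(8, 1, 500)]
means, stds, weights = np.array([-1.0, 9.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(20):
    stats = [client_e_step(X, means, stds, weights) for X in shards]
    means, stds, weights = server_m_step(stats)
print(np.round(np.sort(means), 1))  # modes recovered near 0 and 8
```

The point of the exchange is that raw records never leave a client: each round transmits only per-mode aggregates, which is what makes a differentially private variant (as in Section 4.3) possible.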
Algorithm 2: Training Algorithm of HT-Fed-GAN
4.2. Federated Conditional One-Hot Encoding and Conditional Sampling
Algorithm 3: Federated feature-aligning algorithm
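A hypothetical sketch of the feature-aligning idea behind Algorithm 3: each client reports only its local category set, and the server derives a shared, ordered vocabulary so that one-hot encodings agree across clients even when no single client observes every category. Function names are illustrative, not the paper's API.

```python
def federated_one_hot_vocab(client_categories):
    """Server-side alignment (sketch): union the per-client category sets
    into one global vocabulary with a fixed index per category."""
    vocab = sorted(set().union(*map(set, client_categories)))
    index = {c: i for i, c in enumerate(vocab)}
    return vocab, index

def one_hot(value, index):
    """Encode a categorical value against the aligned global vocabulary."""
    v = [0] * len(index)
    v[index[value]] = 1
    return v

# clients see disjoint education levels (cf. the Adult split in Section 5)
clients = [["Doctorate", "Masters"], ["HS-grad", "12th"], ["Preschool"]]
vocab, index = federated_one_hot_vocab(clients)
print(vocab)
print(one_hot("HS-grad", index))
```

Without this alignment step, a locally fitted one-hot encoder on Client 3 would produce vectors of a different length and ordering than Client 1's, and the federated discriminator could not consume them consistently.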
4.3. Privacy Consumption-Based Federated Conditional GAN
4.4. Privacy Analysis
4.5. Membership Inference Attack for HT-Fed-GAN
1. Let the target model be a trained HT-Fed-GAN that an attacker wants to attack. Note that direct access to the target model is not allowed, because it is not shared with the public after training.
2. Obtain the released synthetic table generated by the target model; this serves as the positive training data for the attack model, as shown in Figure 4. Each record of the synthetic table is labeled 1, indicating that the record is treated as belonging to the training table of the target model.
3. We choose CTGAN [12] as the attack model, with its own generator and discriminator. The discriminator of the attack model is used to determine whether a record is a training sample of the target model. The synthetic table generated by the attack model's generator is labeled 0, indicating that these data are not training samples of the target model.
4. Train the attack model using the released synthetic table.
5. Construct a test dataset to evaluate the performance of the attack model, with a 1:1 ratio of real to fake records.
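The final two steps can be sketched as follows. `build_mia_test_set` and `attack_accuracy` are illustrative helpers, and the trained attack discriminator is abstracted as a `score` function returning a membership probability; none of these names come from the paper.

```python
import random

def build_mia_test_set(real_records, fake_records, seed=0):
    """Step 5 sketch: balanced 1:1 test set of member (label 1) and
    non-member (label 0) records for the attack discriminator."""
    n = min(len(real_records), len(fake_records))
    rng = random.Random(seed)
    test = [(r, 1) for r in rng.sample(real_records, n)] + \
           [(f, 0) for f in rng.sample(fake_records, n)]
    rng.shuffle(test)
    return test

def attack_accuracy(test_set, score, threshold=0.5):
    """Fraction of records the attack discriminator labels correctly;
    on a 1:1 test set, 0.5 means the attack is no better than guessing."""
    correct = sum((score(x) >= threshold) == bool(y) for x, y in test_set)
    return correct / len(test_set)

# toy example with a hypothetical scoring function
real = list(range(100))
fake = list(range(100, 200))
test = build_mia_test_set(real, fake)
print(attack_accuracy(test, score=lambda x: 1.0 if x < 100 else 0.0))  # perfect attacker scores 1.0
```

Because the test set is balanced, attack accuracy near 0.5 (as reported in Section 5.4) indicates that the synthetic table leaks little membership information.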
5. Experiments
5.1. Experiment Setup
5.1.1. Environment Setup
5.1.2. Dataset
- The Adult [1] dataset comes from the 1994 Census database and contains personal information (such as income, work hours per week, and education). The income attribute indicates whether an individual's salary is greater than 50 K or at most 50 K, so we performed a binary classification test on this attribute. For the regression tasks, we chose the hours_per_week attribute, which represents the number of work hours per week, as the target label.
- The Intrusion [15] dataset comes from a network intrusion detection competition and consists of a wide variety of intrusions simulated in a military network environment. The label attribute represents the type of intrusion, which contains seven classes. Thus, we performed multi-class classification tests using the label attribute. For the regression tasks, we used the count attribute.
- The Credit [37] dataset comes from a Kaggle competition named Credit Fraud Detection and contains transactions made using credit cards in September 2013 by European cardholders. The label attribute indicates whether the record is fraudulent. The amount attribute has the information on the transaction amount. Thus, we performed binary classification and regression tests using the two attributes, respectively.
- The Cover-Type [38] dataset is derived from the US Geological Survey (USGS) and US Forest Service (USFS). The label attribute represents the forest cover type and contains seven classes. The elevation attribute is the elevation of the forest. Thus, we performed multi-class classification and regression tests using this dataset.
- The Health [39] dataset concerns cardiovascular diseases (CVDs) and contains 12 features used to predict mortality due to heart failure. The DEATH_EVENT attribute represents whether the patient died during the follow-up period. Thus, we performed a binary classification test using this attribute and used the age attribute for the regression tests.
5.1.3. Evaluation Method
- The data-utility-related evaluation metrics are as follows:
  - Cumulative distributions: We compared the cumulative distributions of the same attribute between the original data and the synthesized data [4], which mainly measures their statistical similarity.
  - Machine Learning Score: SDGym [12] is a framework for benchmarking the performance of synthetic data generators. We used SDGym to train machine learning models on the synthetic data and test the trained models on the original dataset. During training, the machine learning models and their parameters were fixed for each dataset. We evaluated the classification tasks using the F1 score (binary classification) or the macro- and micro-F1 scores (multi-class classification), and the regression tasks using the mean absolute error (MAE). For each dataset, we ran four classifiers or regressors to evaluate machine learning performance.
- The privacy-related evaluation metrics are as follows:
  - Membership Inference Attack: We customized the black-box membership inference attack presented in [16] to evaluate the privacy of HT-Fed-GAN without any auxiliary information other than the synthetic data. The detailed procedure is described in Section 4.5.
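For reference, the macro- and micro-F1 metrics used above can be computed as in this minimal, SDGym-independent sketch:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Macro- and micro-F1 for multi-class predictions (sketch)."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_class) / len(labels)          # unweighted mean over classes
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = TP / (TP + 0.5 * (FP + FN))           # pools counts over all classes
    return macro, micro

macro, micro = f1_scores([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2])
print(round(macro, 4), round(micro, 4))  # → 0.8222 0.8333
```

Macro-F1 weights every class equally and so is sensitive to the rare classes that the conditional sampling method targets, whereas micro-F1 is dominated by the majority classes; this is why both are reported for the multi-class datasets.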
5.1.4. Baseline Model
5.1.5. Parameter Setup
5.2. Cumulative Distributions
5.3. Machine Learning Scores
5.3.1. Binary Classification
5.3.2. Multi-Classification
5.3.3. Regression
5.4. Results for Privacy
5.5. Multimodal Distribution Study
5.6. Synthesis Example
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
GAN | Generative Adversarial Network |
GMM | Gaussian Mixture Model |
VB-GMM | Variational Bayesian Gaussian Mixture Model |
MAE | Mean Absolute Error |
DP | Differential Privacy |
References
- Kohavi, R. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. In KDD 1996 Proceedings; AAAI Press: Portland, OR, USA, 1996; Volume 96, pp. 202–207. [Google Scholar]
- McFee, B.; Bertin-Mahieux, T.; Ellis, D.P.; Lanckriet, G.R. The million song dataset challenge. In Proceedings of the 21st International Conference on World Wide Web, Lyon, France, 16–20 April 2012; pp. 909–916. [Google Scholar]
- Shi, B.; Yao, C.; Liao, M.; Yang, M.; Xu, P.; Cui, L.; Belongie, S.; Lu, S.; Bai, X. ICDAR2017 competition on reading chinese text in the wild (RCTW-17). In Proceedings of the 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017; Volume 1, pp. 1429–1434. [Google Scholar]
- Park, N.; Mohammadi, M.; Gorde, K.; Jajodia, S.; Park, H.; Kim, Y. Data synthesis based on generative adversarial networks. In Proceedings of the VLDB Endowment 2018, Rio de Janeiro, Brazil, 27–31 August 2018; Volume 11, pp. 1071–1083. [Google Scholar]
- Jordon, J.; Yoon, J.; Van Der Schaar, M. PATE-GAN: Generating synthetic data with differential privacy guarantees. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Frigerio, L.; de Oliveira, A.S.; Gomez, L.; Duverger, P. Differentially private generative adversarial networks for time series, continuous, and discrete open data. In Proceedings of the IFIP International Conference on ICT Systems Security and Privacy Protection, Lisbon, Portugal, 25–27 June 2019; pp. 151–164. [Google Scholar]
- Zhang, J.; Cormode, G.; Procopiuc, C.M.; Srivastava, D.; Xiao, X. Privbayes: Private data release via bayesian networks. ACM Trans. Database Syst. (TODS) 2017, 42, 1–41. [Google Scholar] [CrossRef] [Green Version]
- Augenstein, S.; McMahan, H.B.; Ramage, D.; Ramaswamy, S.; Kairouz, P.; Chen, M.; Mathews, R.; y Arcas, B.A. Generative Models for Effective ML on Private, Decentralized Datasets. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Chang, Q.; Qu, H.; Zhang, Y.; Sabuncu, M.; Chen, C.; Zhang, T.; Metaxas, D.N. Synthetic learning: Learn from distributed asynchronized discriminator gan without sharing medical image data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 13856–13866. [Google Scholar]
- Qu, H.; Zhang, Y.; Chang, Q.; Yan, Z.; Chen, C.; Metaxas, D. Learn distributed GAN with Temporary Discriminators. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 175–192. [Google Scholar]
- Triastcyn, A.; Faltings, B. Federated Generative Privacy. In Proceedings of the IJCAI Workshop on Federated Machine Learning for User Privacy and Data Confidentiality (FML 2019), Macau, China, 12 August 2019. [Google Scholar]
- Xu, L.; Skoularidou, M.; Cuesta-Infante, A.; Veeramachaneni, K. Modeling Tabular data using Conditional GAN. Adv. Neural Inf. Process. Syst. 2019, 32, 7335–7345. [Google Scholar]
- Fan, J.; Chen, J.; Liu, T.; Shen, Y.; Li, G.; Du, X. Relational data synthesis using generative adversarial networks: A design space exploration. Proc. VLDB Endow. 2020, 13, 1962–1975. [Google Scholar] [CrossRef]
- Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.C.; Yang, Q.; Niyato, D.; Miao, C. Federated learning in mobile edge networks: A comprehensive survey. IEEE Commun. Surv. Tutorials 2020, 22, 2031–2063. [Google Scholar] [CrossRef] [Green Version]
- Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A detailed analysis of the KDD CUP 99 data set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; pp. 1–6. [Google Scholar]
- Hayes, J.; Melis, L.; Danezis, G.; De Cristofaro, E. Logan: Membership inference attacks against generative models. Proc. Priv. Enhancing Technol. 2019, 2019, 133–152. [Google Scholar] [CrossRef] [Green Version]
- Armanious, K.; Jiang, C.; Fischer, M.; Küstner, T.; Hepp, T.; Nikolaou, K.; Gatidis, S.; Yang, B. MedGAN: Medical image translation using GANs. Comput. Med. Imaging Graph. 2020, 79, 101684. [Google Scholar] [CrossRef] [PubMed]
- Hardy, C.; Le Merrer, E.; Sericola, B. Md-gan: Multi-discriminator generative adversarial networks for distributed datasets. In Proceedings of the 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rio de Janeiro, Brazil, 20–24 May 2019; pp. 866–877. [Google Scholar]
- Guerraoui, R.; Guirguis, A.; Kermarrec, A.M.; Merrer, E.L. FeGAN: Scaling Distributed GANs. In Proceedings of the 21st International Middleware Conference, Delft, The Netherlands, 7–11 December 2020; pp. 193–206. [Google Scholar]
- Fan, C.; Liu, P. Federated generative adversarial learning. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Nanjing, China, 16–18 October 2020; pp. 3–15. [Google Scholar]
- Xin, B.; Yang, W.; Geng, Y.; Chen, S.; Wang, S.; Huang, L. Private fl-gan: Differential privacy synthetic data generation based on federated learning. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2927–2931. [Google Scholar]
- Dwork, C.; Kenthapadi, K.; McSherry, F.; Mironov, I.; Naor, M. Our data, ourselves: Privacy via distributed noise generation. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, 28 May–1 June 2006; pp. 486–503. [Google Scholar]
- Nishimoto, H.; Nakada, T.; Nakashima, Y. GPGPU Implementation of Variational Bayesian Gaussian Mixture Models. In Proceedings of the 2019 Seventh International Symposium on Computing and Networking (CANDAR), Nagasaki, Japan, 26–29 November 2019; pp. 185–190. [Google Scholar]
- Corduneanu, A.; Bishop, C.M. Variational Bayesian model selection for mixture distributions. In Artificial Intelligence and Statistics; Morgan Kaufmann: Waltham, MA, USA, 2001; Volume 2001, pp. 27–34. [Google Scholar]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 2014, 9, 211–407. [Google Scholar] [CrossRef]
- Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; pp. 3–18. [Google Scholar]
- Mortici, C. New approximations of the gamma function in terms of the digamma function. Appl. Math. Lett. 2010, 23, 97–100. [Google Scholar] [CrossRef] [Green Version]
- Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur. 2017, 13, 1333–1345. [Google Scholar]
- Lanczos, C. A precision approximation of the gamma function. J. Soc. Ind. Appl. Math. Ser. B Numer. Anal. 1964, 1, 86–96. [Google Scholar] [CrossRef]
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved training of wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5769–5779. [Google Scholar]
- Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- Li, J.; Lyu, L.; Liu, X.; Zhang, X.; Lyu, X. FLEAM: A federated learning empowered architecture to mitigate DDoS in industrial IoT. IEEE Trans. Ind. Inform. 2021, 18, 4059–4068. [Google Scholar] [CrossRef]
- Tolpegin, V.; Truex, S.; Gursoy, M.E.; Liu, L. Data poisoning attacks against federated learning systems. In Proceedings of the European Symposium on Research in Computer Security, Guildford, UK, 14–18 September 2020; pp. 480–501. [Google Scholar]
- Duan, S.; Liu, C.; Cao, Z.; Jin, X.; Han, P. Fed-DR-Filter: Using global data representation to reduce the impact of noisy labels on the performance of federated learning. Future Gener. Comput. Syst. 2022, 137, 336–348. [Google Scholar] [CrossRef]
- Ketkar, N. Introduction to pytorch. In Deep Learning with Python; Springer: Berlin/Heidelberg, Germany, 2017; pp. 195–208. [Google Scholar]
- Dal Pozzolo, A.; Caelen, O.; Johnson, R.A.; Bontempi, G. Calibrating probability with undersampling for unbalanced classification. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 159–166. [Google Scholar]
- Blackard, J.A.; Dean, D.J. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Comput. Electron. Agric. 1999, 24, 131–151. [Google Scholar] [CrossRef] [Green Version]
- Chicco, D.; Jurman, G. Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone. BMC Med. Inform. Decis. Mak. 2020, 20, 16. [Google Scholar] [CrossRef] [PubMed]
Notation | Description |
---|---|
x | A variable of the data. |
π_k | The mixture weight of the k-th Gaussian distribution. |
N(μ_k, Σ_k) | The k-th Gaussian distribution with mean μ_k and covariance Σ_k. |
Σ_k π_k N(μ_k, Σ_k) | A Gaussian mixture model. |
Dir(α) | A Dirichlet distribution with parameter α. |
Λ = Σ^{-1} | The inverse (precision) matrix of Σ. |
W(W, ν) | A Wishart distribution with parameters W and ν. |
D′ | The neighboring dataset of D. |
(ε, δ) | The parameters of a differential privacy algorithm. |
A | A differential privacy algorithm. |
ψ(·) | The digamma function. |
Γ(·) | The gamma function. |
r_j | The representation of the j-th record after normalization. |
⊕ | The concatenation operation. |
Dataset | Number of Records | Number of Attributes | Classification Type | Attribute for Classification | Attribute for Regression | Split Attribute | Client | Number of Training Records | Values of Split Attributes | Number of Testing Records |
---|---|---|---|---|---|---|---|---|---|---|
Adult | 32,561 | 15 | binary classification | income | hours_per_week | education | Client 1 | 12,053 | [‘Doctorate’, ‘Masters’, ‘Bachelors’, ‘Some-college’, ‘Assoc-acdm’, ‘Assoc-voc’] | 9768 |
Client 2 | 9551 | [‘Prof-school’, ‘HS-grad’, ‘12th’, ‘11th’, ‘10th’] | ||||||||
Client 3 | 1189 | [‘Preschool’, ‘1st–4th’, ‘5th–6th’, ‘7th–8th’, ‘9th’] | ||||||||
Intrusion | 494,021 | 41 | multi-class classification | label | count | flag | Client 1 | 264,731 | [’SF’] | 148,206 |
Client 2 | 61,235 | [’S0’, ’SH’, ’S1’, ’S2’, ’S3’] | ||||||||
Client 3 | 19,849 | [’REJ’,’RSTR’, ’RSTO’, ’RSTOS0’, ’OTH’] | ||||||||
Credit | 284,807 | 30 | binary classification | label | Amount | Amount | Client 1 | 159,207 | | 85,442 |
Client 2 | 38,001 | |||||||||
Client 3 | 2157 | |||||||||
Cover-Type | 581,012 | 55 | multi-class classification | Cover_Type | Elevation | Elevation | Client 1 | 919 | | 174,303 |
Client 2 | 204,871 | |||||||||
Client 3 | 200,919 | |||||||||
Health | 299 | 13 | binary classification | DEATH_EVENT | age | age | Client 1 | 31 | | 89 |
Client 2 | 126 | |||||||||
Client 3 | 53 |
Dataset | Original Table | HT-Fed-GAN | DP-FedAvg-GAN | |
---|---|---|---|---|---|
Low-Privacy | High-Privacy | Low-Privacy | High-Privacy | ||
Adult | 0.6711 | 0.6067 | 0.5886 | 0.3535 | 0.3595 |
Credit | 0.5328 | 0.6860 | 0.7179 | - | - |
Health | 0.6359 | 0.6603 | 0.5734 | 0.5723 | 0.4494 |
Dataset | Original Table | HT-Fed-GAN (Low-Privacy) | HT-Fed-GAN (High-Privacy) | DP-FedAvg-GAN (Low-Privacy) | DP-FedAvg-GAN (High-Privacy) | | | | |
---|---|---|---|---|---|---|---|---|---|---|
Macro F1 | Micro F1 | Macro F1 | Micro F1 | Macro F1 | Micro F1 | Macro F1 | Micro F1 | Macro F1 | Micro F1 | |
Intrusion | 0.8369 | 0.9990 | 0.6151 | 0.9858 | 0.5705 | 0.9765 | 0.3859 | 0.8559 | 0.1906 | 0.7928 |
Cover-Type | 0.6158 | 0.7419 | 0.4291 | 0.6513 | 0.2826 | 0.5445 | 0.1444 | 0.4492 | 0.1100 | 0.4910 |
Dataset | Original Table | HT-Fed-GAN | DP-FedAvg-GAN | |
---|---|---|---|---|---|
Low-Privacy | High-Privacy | Low-Privacy | High-Privacy | ||
Adult | 0.2292 | 0.2411 | 0.2491 | 0.3931 | 0.3422 |
Credit | 1.2383 | 1.4844 | 1.4852 | - | - |
Health | 0.1869 | 0.3136 | 0.3400 | 0.1990 | 0.2467 |
Intrusion | 0.2793 | 0.7522 | 0.3946 | 0.7697 | 1.1211 |
Cover-Type | 0.0441 | 0.0659 | 0.0663 | 0.0754 | 0.1094 |
Dataset | HT-Fed-GAN | DP-FedAvg-GAN | ||
---|---|---|---|---|
Low-Privacy | High-Privacy | Low-Privacy | High-Privacy | |
Adult | 0.511 | 0.4664 | 0.5615 | 0.5409 |
Credit | 0.5666 | 0.4652 | - | - |
Health | 0.5665 | 0.4832 | 0.578 | 0.5128 |
Intrusion | 0.5328 | 0.4308 | 0.5645 | 0.456 |
Cover-Type | 0.5667 | 0.4652 | 0.5625 | 0.6256 |
Age | fnlwgt | Education | Occupation | Hours_per_week | Income | |
---|---|---|---|---|---|---|
1 | 33 | 173,520 | 7th–8th | Other-service | 39 | ≤50 K |
2 | 51 | 196,863 | Doctorate | Prof-specialty | 59 | >50 K |
3 | 47 | 254,381 | HS-grad | Craft-repair | 39 | ≤50 K |
4 | 49 | 267,701 | 11th | Craft-repair | 49 | >50 K |
5 | 28 | 198,232 | HS-grad | Handlers-cleaners | 39 | ≤50 K |
6 | 29 | 164,833 | HS-grad | Sales | 43 | >50 K |
Age | fnlwgt | Education | Occupation | Hours_per_week | Income | |
---|---|---|---|---|---|---|
1 | 22 | 48,988 | Some-college | Transport-moving | 40 | ≤50 K |
2 | 55 | 296,085 | Assoc-acdm | Prof-specialty | 40 | >50 K |
3 | 23 | 170,070 | 12th | Other-service | 38 | ≤50 K |
4 | 40 | 178,983 | HS-grad | Adm-clerical | 30 | >50 K |
5 | 52 | 370,552 | Preschool | Machine-op-inspect | 40 | ≤50 K |
6 | 36 | 328,466 | 5th–6th | Transport-moving | 50 | >50 K |
Share and Cite
Duan, S.; Liu, C.; Han, P.; Jin, X.; Zhang, X.; He, T.; Pan, H.; Xiang, X. HT-Fed-GAN: Federated Generative Model for Decentralized Tabular Data Synthesis. Entropy 2023, 25, 88. https://doi.org/10.3390/e25010088