SplitML: A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments
Abstract
1. Introduction
- We formalize FL and SL and present SplitML, a fusion of FL (for training) and SL (for inference) partitioned across ML model layers to reduce information leakage. The novelty stems from clients collaborating on partial models (instead of full models, as in FL) to enhance collaboration while reducing privacy risks; federation improves feature extraction, while horizontal splitting allows entities to personalize their models for their specific environments, thus improving results.
- SplitML implements multi-key FHE with DP during training to protect against clients colluding with each other, or a server colluding with clients, under an honest-but-curious assumption. SplitML reduces training time compared to training in silos while upholding privacy and security.
- We propose a novel privacy-preserving counseling process for inference: an entity can request a consensus from its peers by submitting an encrypted classification query protected with single-key FHE and DP.
- We empirically show that SplitML is robust against various threats, such as poisoning and inference attacks.
2. Our Framework
2.1. Threat Model
2.2. Proposed Architecture
- SplitML generalizes FL with a parameter over model layers that controls the proportion of layers the clients collaborate on. FL is realized when clients collaborate to train all layers of the global ML model M (the parameter takes its maximum value), whereas a value corresponding to zero shared layers indicates no collaboration, i.e., each client trains its model locally in isolation.
- Transfer Learning (TL) [25,26,27,28,29] is realized for an architecture (e.g., a Convolutional Neural Network–Artificial Neural Network (CNN-ANN) model) where K distinct clients collaborate to train the first n (e.g., convolutional) layers for feature extraction. For inference after training, clients retrain the remaining (e.g., Fully Connected (FC)) layers of their ML model until convergence on their private data without updating the first n layers (effectively freezing the feature-extracting CNN layers), as sketched below.
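To make the layer-wise split concrete, the following is a minimal PyTorch sketch (not taken from the paper; the layer sizes, class names, and the helper freeze_shared are illustrative assumptions) of a CNN-ANN model whose first convolutional layers form the shared, federated feature extractor, while the FC head remains personal and is fine-tuned locally with the shared layers frozen.

```python
import torch
import torch.nn as nn

class SplitCNN(nn.Module):
    """CNN-ANN model split at the cut layer: shared conv layers + personal FC head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # First n layers: shared feature extractor, trained collaboratively (FL part).
        self.shared = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),
        )
        # Remaining layers: personalized head, trained only on private data (SL/TL part).
        self.personal = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.personal(self.shared(x))

def freeze_shared(model: SplitCNN) -> None:
    """Freeze the federated feature extractor before local fine-tuning (TL step)."""
    for p in model.shared.parameters():
        p.requires_grad_(False)

# Usage sketch: after federated rounds on model.shared, each client fine-tunes locally.
model = SplitCNN()
freeze_shared(model)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)
```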
2.3. Key Generation
2.3.1. Public and Secret Keys
Algorithm 1 Public and Secret Keys Generation
Input: Each client performs iteratively
Output: Public and private keypairs for each client
1:
2: for each client k in 1, …, K do
3:
4: end for
5:
2.3.2. Evaluation Key for Addition
Algorithm 2 Evaluation Keys Generation for Addition
Input: Keypairs for each client
Output: Evaluation key for Addition
1:
2: for each client k in 1, …, K do
3:
4: end for
5:
6: for each client k in 1, …, K do
7:
8: end for
9:
2.3.3. Evaluation Key for Multiplication
2.4. Training Phase
Algorithm 3 Evaluation Keys Generation for Multiplication
Input: Keypairs for each client
Output: Evaluation key for Multiplication
1:
2: for each client k in 1, …, K do
3:
4: end for
5:
6: for each client k in 1, …, K do
7:
8: end for
9: for each client k in 1, …, K do
10:
11: end for
12:
13: for each client k in 1, …, K do
14:
15: end for
16:
2.4.1. Single-Key FHE
2.4.2. Multi-Key FHE
Algorithm 4 Training
Input:
Output: Trained models for each client
1: for each round r in 1, …, R do
2:   for each client k in 1, …, K do
3:     Client trains model
4:   end for
5:   Server S encodes a vector with values
6:   Server S encrypts with
7:   for each shared layer in 1, …, n do
8:     for each client k in 1, …, K do
9:       Client encrypts layer weights with
10:     end for
11:     Server S adds encrypted vectors to with
12:     Server S multiplies to with
13:     for each client k in 1, …, K do
14:       Client partially decrypts with
15:     end for
16:     Server S generates fused decryption using
17:     for each client k in 1, …, K do
18:       Client sets layer weights from fused decryption
19:     end for
20:   end for
21: end for
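For intuition only, the sketch below mimics the data flow of one training round over a single shared layer, but swaps the multi-key FHE pipeline of Algorithm 4 for pairwise additive masking in the spirit of secure aggregation [23]; the function names, NumPy usage, and sizes are assumptions, not the paper's implementation. Each client uploads a masked copy of its shared-layer weights, the masks cancel in the server-side sum, and only the 1/K-scaled average is revealed.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                     # number of clients
d = 8                     # flattened size of one shared layer

# Plaintext shared-layer weights held by each client after local training.
weights = [rng.normal(size=d) for _ in range(K)]

# Pairwise masks: clients i < j agree on r_ij; client i adds it, client j subtracts it.
pair_masks = {(i, j): rng.normal(size=d) for i in range(K) for j in range(i + 1, K)}

def masked_update(k: int) -> np.ndarray:
    """Client k's upload: its weights plus masks that cancel across all clients."""
    m = weights[k].copy()
    for (i, j), r in pair_masks.items():
        if i == k:
            m += r
        elif j == k:
            m -= r
    return m

# Server aggregates the masked uploads; the pairwise masks cancel in the sum.
aggregate = sum(masked_update(k) for k in range(K))
average = aggregate / K   # corresponds to the 1/K scaling step in Algorithm 4

assert np.allclose(average, np.mean(weights, axis=0))
```

In SplitML itself, the averaging is carried out under multi-key FHE with DP noise, and clients recover the result through partial and fused decryption rather than learning the raw sum.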
2.5. Inference Phase
- The (voting) clients send a classification label (TL), and the consensus is reached by a label majority (see the sketch after Algorithm 5).
- The (voting) clients send the result of the final activation function (TP); these results are summed, and the label is chosen if the sum exceeds a required threshold.
Algorithm 5 Inference
Input: Data subset from client
Output: Class labels or prediction scores from clients
1: Client creates a subset of local data
2: Client generates
3: Client shares with other consensus clients
4: Client generates activations for from cut layer q
5: Client encrypts activations with
6: for each consensus client do
7:   Client receives encrypted from
8:   Client performs calculation on with
9:   Client sends encrypted result back to Client
10:  Client decrypts results received from with
11: end for
12: Client considers a majority based on decrypted labels or prediction values received from consensus clients
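The two consensus rules can be prototyped as below (a plain-Python sketch operating on already-decrypted outputs; the function names, example values, and the threshold of 2.5 are illustrative assumptions): TL takes a majority over the labels returned by the consensus clients, while TP sums the final-activation scores and accepts the label only if the sum clears a required threshold.

```python
from collections import Counter

def consensus_tl(labels: list[int]) -> int:
    """TL: majority vote over class labels returned by consensus clients."""
    return Counter(labels).most_common(1)[0][0]

def consensus_tp(scores: list[float], threshold: float) -> int:
    """TP: sum the final-activation outputs; predict 1 if the sum clears the threshold."""
    return int(sum(scores) >= threshold)

# Example with 5 consensus clients (binary classification).
labels = [1, 0, 1, 1, 0]
scores = [0.91, 0.42, 0.78, 0.66, 0.30]

print(consensus_tl(labels))                 # -> 1 (three of five voters)
print(consensus_tp(scores, threshold=2.5))  # -> 1 (sum = 3.07 >= 2.5)
```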
2.6. Differential Privacy
3. Security Analysis
3.1. Model Poisoning Attacks
3.2. Inference Attacks
3.2.1. Membership Inference
- First, we build an attack that uses the (input) datasets and their gradients from the split layer to determine their membership.
- We then develop another attack model to infer membership from labels or predictions, given the gradients from the cut layer (a simplified, confidence-based sketch follows).
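For intuition, the sketch below builds a much simpler attack than the gradient-based ones above: it trains a logistic-regression attack model to distinguish members from non-members using only the target model's prediction confidences, a standard black-box baseline. The synthetic data, model choices, and scikit-learn usage are assumptions for illustration, not the paper's attack setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Half the data trains the target model ("members"); the other half is held out.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_in, y_in)

# Attack features: the target's confidence vector; attack labels: member (1) / non-member (0).
feat = np.vstack([target.predict_proba(X_in), target.predict_proba(X_out)])
member = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])

attack = LogisticRegression().fit(feat, member)
# In-sample estimate only; a proper evaluation would hold out attack data.
print("attack accuracy:", attack.score(feat, member))
```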
3.2.2. Model Inversion
3.3. Model Extraction Attacks
4. Experimental Analysis
4.1. Dataset
4.2. Results
5. Discussion
5.1. Differential Privacy and Noise Flooding
5.2. Large Language Model (LLM) Security
6. Conclusions
7. Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Background
| Symbol | Name |
|---|---|
| | Encryption |
| | Decryption |
| | Addition |
| | Multiplication |
| | Public Key |
| | Secret Key |
| | Evaluation Key for Addition |
| | Evaluation Key for Multiplication |
| | Evaluation Key for Fused Decryption |
| | Scaling Factor for FHE |
| | Classification Threshold |
| | Activation Function f on Input z |
| S | Central (Federation) Server |
| K | (Total) Number of Clients |
| k | Client Index (1, …, K) |
| T | (Threshold) Number of Clients required for Fused Decryption |
| | Number of Colluding Clients |
| | k-th Client |
| | Dataset of k-th Client |
| o | Observation (Record) in a Dataset |
| A | Shared Attributes (Features) |
| L | Shared Labels |
| n | Number of Shared Layers |
| | Number of Personalized Layers of k-th Client |
| | ML model of k-th Client |
| | Number of Total Layers of k-th Client |
| q | Cut (Split) Layer |
| | Size of the Cut Layer |
| | Activations from Cut Layer q |
| | Number of Decryption Queries |
| m | Number of Participants in Training |
| p | Training Participant Index (1, …, m) |
| | p-th Participant |
| | Number of Participants in Consensus |
| h | Consensus Participant Index |
| R | Number of Training Rounds |
| r | Round Index (1, …, R) |
| | Fraction of ML parameters with a Client |
| | Fraction of ML parameters with the Server |
| | Batch Size |
| | Learning Rate |
Appendix A.1. Fully Homomorphic Encryption (FHE)
- Key generation: generates a key pair.
- Encryption: encrypts a plaintext.
- Decryption: decrypts a ciphertext.
- Evaluation: evaluates an arithmetic operation on ciphertexts (encrypted data); see the toy example below.
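To illustrate the (key generation, encryption, decryption, evaluation) interface, here is a toy additively homomorphic Paillier implementation. This is a stand-in only: the paper relies on CKKS-style FHE [20], which additionally supports encrypted multiplication, and the tiny hard-coded primes below are insecure and purely illustrative.

```python
import math
import random

def keygen(p: int = 293, q: int = 433):
    """Key generation: toy Paillier key pair (insecure demo primes)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n,), (lam, mu, n)     # (public key, secret key)

def enc(pk, m: int) -> int:
    """Encryption: c = (n+1)^m * r^n mod n^2."""
    (n,) = pk
    r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c: int) -> int:
    """Decryption: m = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) / n."""
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def eval_add(pk, c1: int, c2: int) -> int:
    """Evaluation: homomorphic addition is ciphertext multiplication mod n^2."""
    (n,) = pk
    return c1 * c2 % (n * n)

pk, sk = keygen()
c = eval_add(pk, enc(pk, 17), enc(pk, 25))
assert dec(sk, c) == 42   # Dec(Enc(17) + Enc(25)) = 42
```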
Appendix A.2. Differential Privacy (DP)
Appendix B. Related Work
Appendix B.1. Federated Learning
- Horizontal Federated Learning (HFL): This configuration is utilized when datasets share a significant overlap in their feature space but possess distinct sample IDs. HFL is often referred to as sample-partitioned federated learning, as it involves organizations that collect similar types of data from different user bases.
- Vertical Federated Learning (VFL): VFL applies to scenarios where participants possess different features for a largely overlapping set of sample IDs. Also known as feature-partitioned federated learning, this type allows disparate organizations, such as a financial institution and a retail entity, to collaboratively train a model on a shared set of individuals without exchanging raw attributes (a short sketch contrasting HFL and VFL data partitioning follows this list).
- Federated Transfer Learning (FTL): FTL addresses the most challenging scenario where participating entities share neither a significant portion of the sample IDs nor a common feature space. This approach leverages transfer learning techniques to bridge the gap between heterogeneous domains, allowing knowledge to be extracted from a source domain to improve a model in a distinct target domain.
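The difference between sample-partitioned (HFL) and feature-partitioned (VFL) data can be summarized in a few lines of NumPy; the array shapes are arbitrary assumptions, included only to fix intuition.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # 100 samples (rows) x 6 features (columns)

# HFL (sample-partitioned): same feature space, disjoint sample sets per client.
hfl_client_a, hfl_client_b = X[:50, :], X[50:, :]

# VFL (feature-partitioned): shared sample IDs, disjoint feature subsets per client.
vfl_client_a, vfl_client_b = X[:, :3], X[:, 3:]

print(hfl_client_a.shape, hfl_client_b.shape)  # (50, 6) (50, 6)
print(vfl_client_a.shape, vfl_client_b.shape)  # (100, 3) (100, 3)
```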
Appendix B.2. Split Learning
Appendix B.3. Integrating FL with SL
Appendix C. Defenses
Appendix C.1. Model Poisoning
Appendix C.2. Membership Inference
Appendix C.3. Model Inversion
Appendix C.4. Model Extraction
References
- de Montjoye, Y.A.; Radaelli, L.; Singh, V.K.; Pentland, A.S. Unique in the shopping mall: On the reidentifiability of credit card metadata. Science 2015, 347, 536–539. [Google Scholar] [CrossRef]
- Zhang, J.; Li, C.; Qi, J.; He, J. A Survey on Class Imbalance in Federated Learning. arXiv 2023, arXiv:2303.11673. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics; PMLR: London, UK, 2017; pp. 1273–1282. [Google Scholar]
- Gupta, O.; Raskar, R. Distributed learning of deep neural network over multiple agents. J. Netw. Comput. Appl. 2018, 116, 1–8. [Google Scholar] [CrossRef]
- Kumar, D.; Pawar, P.P.; Meesala, M.K.; Pareek, P.K.; Addula, S.R.; K.S., S. Trustworthy IoT Infrastructures: Privacy-Preserving Federated Learning with Efficient Secure Aggregation for Cybersecurity. In Proceedings of the 2024 International Conference on Integrated Intelligence and Communication Systems (ICIICS), Kalaburagi, India, 22–23 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–8. [Google Scholar]
- Yin, X.; Zhu, Y.; Hu, J. A comprehensive survey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Comput. Surv. (CSUR) 2021, 54, 1–36. [Google Scholar] [CrossRef]
- Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the 2019 IEEE symposium on security and privacy (SP), San Francisco, CA, USA, 20–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 739–753. [Google Scholar]
- Xia, Q.; Tao, Z.; Hao, Z.; Li, Q. FABA: An algorithm for fast aggregation against byzantine attacks in distributed neural networks. In Proceedings of the IJCAI, Macao, China, 10–16 August 2019. [Google Scholar]
- Gentry, C. Fully homomorphic encryption using ideal lattices. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, Bethesda, MD, USA, 31 May–2 June 2009; pp. 169–178. [Google Scholar]
- Trivedi, D. Privacy-Preserving Security Analytics, 2023. Available online: https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2023/privacy-preserving-security-analytics.
- Trivedi, D. The Future of Cryptography: Performing Computations on Encrypted Data. ISACA J. 2023, 1, 2. [Google Scholar]
- Angel, S.; Chen, H.; Laine, K.; Setty, S. PIR with compressed queries and amortized query processing. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–24 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 962–979. [Google Scholar]
- Bos, J.W.; Castryck, W.; Iliashenko, I.; Vercauteren, F. Privacy-friendly forecasting for the smart grid using homomorphic encryption and the group method of data handling. In Proceedings of the Progress in Cryptology-AFRICACRYPT 2017: 9th International Conference on Cryptology in Africa, Dakar, Senegal, 24–26 May 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 184–201. [Google Scholar]
- Boudguiga, A.; Stan, O.; Sedjelmaci, H.; Carpov, S. Homomorphic Encryption at Work for Private Analysis of Security Logs. In Proceedings of the ICISSP, Valletta, Malta, 25–27 February 2020; pp. 515–523. [Google Scholar]
- Bourse, F.; Minelli, M.; Minihold, M.; Paillier, P. Fast homomorphic evaluation of deep discretized neural networks. In Proceedings of the Advances in Cryptology–CRYPTO 2018: 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2018; Proceedings, Part III 38. Springer: Berlin/Heidelberg, Germany, 2018; pp. 483–512. [Google Scholar]
- Kim, M.; Lauter, K. Private genome analysis through homomorphic encryption. BMC Med. Inform. Decis. Mak. 2015, 15, S3. [Google Scholar] [CrossRef]
- Trama, D.; Clet, P.E.; Boudguiga, A.; Sirdey, R. Building Blocks for LSTM Homomorphic Evaluation with TFHE. In Proceedings of the International Symposium on Cyber Security, Cryptology, and Machine Learning, Be’er Sheva, Israel, 29–30 June 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 117–134. [Google Scholar]
- Trivedi, D.; Boudguiga, A.; Triandopoulos, N. SigML: Supervised Log Anomaly with Fully Homomorphic Encryption. In Proceedings of the International Symposium on Cyber Security, Cryptology, and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2023; pp. 372–388. [Google Scholar]
- Bendoukha, A.A.; Demirag, D.; Kaaniche, N.; Boudguiga, A.; Sirdey, R.; Gambs, S. Towards Privacy-preserving and Fairness-aware Federated Learning Framework. Proc. Priv. Enhancing Technol. 2025, 2025, 845–865. [Google Scholar] [CrossRef]
- Cheon, J.H.; Kim, A.; Kim, M.; Song, Y. Homomorphic encryption for arithmetic of approximate numbers. In Proceedings of the Advances in Cryptology–ASIACRYPT 2017: 23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, 3–7 December 2017; Proceedings, Part I 23. Springer: Berlin/Heidelberg, Germany, 2017; pp. 409–437. [Google Scholar]
- Badawi, A.A.; Alexandru, A.; Bates, J.; Bergamaschi, F.; Cousins, D.B.; Erabelli, S.; Genise, N.; Halevi, S.; Hunt, H.; Kim, A.; et al. OpenFHE: Open-Source Fully Homomorphic Encryption Library. Cryptology ePrint Archive, Paper 2022/915. 2022. Available online: https://eprint.iacr.org/2022/915.
- Al Badawi, A.; Bates, J.; Bergamaschi, F.; Cousins, D.B.; Erabelli, S.; Genise, N.; Halevi, S.; Hunt, H.; Kim, A.; Lee, Y.; et al. OpenFHE: Open-Source Fully Homomorphic Encryption Library. In WAHC’22: Proceedings of the 10th Workshop on Encrypted Computing & Applied Homomorphic Cryptography; Association for Computing Machinery: New York, NY, USA, 2022; pp. 53–63. [Google Scholar] [CrossRef]
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In CCS ’17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1175–1191. [Google Scholar] [CrossRef]
- Yan, D.; Hu, M.; Xie, X.; Yang, Y.; Chen, M. S2FL: Toward Efficient and Accurate Heterogeneous Split Federated Learning. IEEE Trans. Comput. 2026, 75, 320–334. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
- Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A survey on deep transfer learning. In Proceedings of the Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 4–7 October 2018; Proceedings, Part III 27. Springer: Berlin/Heidelberg, Germany, 2018; pp. 270–279. [Google Scholar]
- Ring, M.B. CHILD: A first step towards continual learning. Mach. Learn. 1997, 28, 77–104. [Google Scholar] [CrossRef]
- Yang, Q.; Ling, C.; Chai, X.; Pan, R. Test-cost sensitive classification on data with missing values. IEEE Trans. Knowl. Data Eng. 2006, 18, 626–638. [Google Scholar] [CrossRef]
- Zhu, X.; Wu, X. Class noise handling for effective cost-sensitive learning by cost-guided iterative classification filtering. IEEE Trans. Knowl. Data Eng. 2006, 18, 1435–1440. [Google Scholar]
- Thapa, C.; Arachchige, P.C.M.; Camtepe, S.; Sun, L. Splitfed: When federated learning meets split learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 8485–8493. [Google Scholar]
- Security Notes for Homomorphic Encryption—OpenFHE Documentation. 2022. Available online: https://openfhe-development.readthedocs.io/en/latest/sphinx_rsts/intro/security.html.
- Tramèr, F.; Zhang, F.; Juels, A.; Reiter, M.K.; Ristenpart, T. Stealing Machine Learning Models via Prediction APIs. In Proceedings of the USENIX Security Symposium, Austin, TX, USA, 10–12 August 2016; Volume 16, pp. 601–618. [Google Scholar]
- Juuti, M.; Szyller, S.; Marchal, S.; Asokan, N. PRADA: Protecting against DNN model stealing attacks. In Proceedings of the 2019 IEEE European Symposium on Security and Privacy (EuroS&P), Stockholm, Sweden, 17–19 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 512–527. [Google Scholar]
- Li, B.; Micciancio, D. On the security of homomorphic encryption on approximate numbers. In Proceedings of the Advances in Cryptology–EUROCRYPT 2021: 40th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, 17–21 October 2021; Proceedings, Part I 40. Springer: Berlin/Heidelberg, Germany, 2021; pp. 648–677. [Google Scholar]
- openfhe-development/src/pke/examples/CKKS_NOISE_FLOODING.md at main · openfheorg/openfhe-development. 2022. Available online: https://github.com/openfheorg/openfhe-development/blob/main/src/pke/examples/CKKS_NOISE_FLOODING.md.
- Ogilvie, T. Differential Privacy for Free? Harnessing the Noise in Approximate Homomorphic Encryption. Cryptol. ePrint Arch. 2023. [Google Scholar] [CrossRef]
- He, S.; Zhu, J.; He, P.; Lyu, M.R. Loghub: A Large Collection of System Log Datasets towards Automated Log Analytics. arXiv 2020, arXiv:2008.06448. [Google Scholar] [CrossRef]
- Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3–18. [Google Scholar]
- Nicolae, M.I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.2.0. arXiv 2018, arXiv:1807.01069. [Google Scholar] [CrossRef]
- LeCun, Y.; Cortes, C.; Burges, C.J.B. THE MNIST DATABASE of Handwritten Digits. Available online: https://yann.lecun.org/exdb/mnist/index.html.
- Fredrikson, M.; Jha, S.; Ristenpart, T. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1322–1333. [Google Scholar]
- Li, J.; Rakin, A.S.; Chen, X.; Yang, L.; He, Z.; Fan, D.; Chakrabarti, C. Model Extraction Attacks on Split Federated Learning. arXiv 2023, arXiv:2303.08581. [Google Scholar] [CrossRef]
- Jagielski, M.; Carlini, N.; Berthelot, D.; Kurakin, A.; Papernot, N. High accuracy and high fidelity extraction of neural networks. In Proceedings of the 29th USENIX Conference on Security Symposium, Boston, MA, USA, 12–14 August 2020; pp. 1345–1362. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
- Rakin, A.S.; He, Z.; Fan, D. Bit-flip attack: Crushing neural network with progressive bit search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1211–1220. [Google Scholar]
- Jagielski, M.; Carlini, N.; Berthelot, D.; Kurakin, A.; Papernot, N. High-fidelity extraction of neural network models. arXiv 2019, arXiv:1909.01838. [Google Scholar]
- He, P.; Zhu, J.; Zheng, Z.; Lyu, M.R. Drain: An online log parsing approach with fixed depth tree. In Proceedings of the 2017 IEEE International Conference on Web Services (ICWS), Honolulu, HI, USA, 25–30 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 33–40. [Google Scholar]
- Foundation, P.S. Python 3.11, 2023. Available online: https://www.python.org/downloads/release/python-3110/.
- Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Mueller, A.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. API design for machine learning software: Experiences from the scikit-learn project. In Proceedings of the ECML PKDD Workshop: Languages for Data Mining and Machine Learning, Prague, Czech Republic, 23–27 September 2013; pp. 108–122. [Google Scholar]
- Zhou, Y.; Ni, T.; Lee, W.B.; Zhao, Q. A survey on backdoor threats in large language models (llms): Attacks, defenses, and evaluations. arXiv 2025, arXiv:2502.05224. [Google Scholar] [CrossRef]
- Kurian, K.; Holland, E.; Oesch, S. Attacks and defenses against llm fingerprinting. arXiv 2025, arXiv:2508.09021. [Google Scholar] [CrossRef]
- Trivedi, D. GitHub-devharsh/chiku: Polynomial function approximation library in Python. 2023. Available online: https://github.com/devharsh/chiku.
- Trivedi, D. Brief announcement: Efficient probabilistic approximations for sign and compare. In Proceedings of the International Symposium on Stabilizing, Safety, and Security of Distributed Systems, Jersey City, NJ, USA, 2–4 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 289–296. [Google Scholar]
- Trivedi, D. Towards Efficient Security Analytics. Ph.D. Thesis, Stevens Institute of Technology, Hoboken, NJ, USA, 2024. [Google Scholar]
- Trivedi, D.; Boudguiga, A.; Kaaniche, N.; Triandopoulos, N. SigML++: Supervised log anomaly with probabilistic polynomial approximation. Cryptography 2023, 7, 52. [Google Scholar] [CrossRef]
- Trivedi, D.; Malcolm, C.; Harrell, J.; Omisakin, H.; Addison, P. PETA: A Privacy-Enhanced Framework for Secure and Auditable Tax Analysis. J. Cybersecur. Digit. Forensics Jurisprud. 2025, 1, 81–94. [Google Scholar]
- Trivedi, D.; Boudguiga, A.; Kaaniche, N.; Triandopoulos, N. SplitML: A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments. Preprints 2025. [Google Scholar] [CrossRef]
- Brakerski, Z.; Gentry, C.; Vaikuntanathan, V. (Leveled) fully homomorphic encryption without bootstrapping. ACM Trans. Comput. Theory (TOCT) 2014, 6, 1–36. [Google Scholar] [CrossRef]
- Gentry, C.; Sahai, A.; Waters, B. Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based. In Proceedings of the Advances in Cryptology—CRYPTO 2013: 33rd Annual Cryptology Conference, Santa Barbara, CA, USA, 18–22 August 2013; Proceedings, Part I. Springer: Berlin/Heidelberg, Germany, 2013; pp. 75–92. [Google Scholar]
- Fan, J.; Vercauteren, F. Somewhat Practical Fully Homomorphic Encryption. Cryptology ePrint Archive, Report 2012/144. 2012. Available online: https://eprint.iacr.org/2012/144.
- Chillotti, I.; Gama, N.; Georgieva, M.; Izabachene, M. Faster fully homomorphic encryption: Bootstrapping in less than 0.1 seconds. In Proceedings of the Advances in Cryptology—ASIACRYPT 2016: 22nd International Conference on the Theory and Application of Cryptology and Information Security, Hanoi, Vietnam, 4–8 December 2016; Proceedings, Part I 22. Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–33. [Google Scholar]
- Ducas, L.; Micciancio, D. FHEW: Bootstrapping homomorphic encryption in less than a second. In Proceedings of the Advances in Cryptology–EUROCRYPT 2015: 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, 26–30 April 2015; Proceedings, Part I 34. Springer: Berlin/Heidelberg, Germany, 2015; pp. 617–640. [Google Scholar]
- Brakerski, Z. Fully Homomorphic Encryption Without Modulus Switching from Classical GapSVP. In Proceedings of the 32nd Annual Cryptology Conference on Advances in Cryptology—CRYPTO 2012, Santa Barbara, CA, USA, 19–23 August 2012; Springer: New York, NY, USA, 2012; Volume 7417, pp. 868–886. [Google Scholar] [CrossRef]
- Brakerski, Z.; Gentry, C.; Vaikuntanathan, V. Fully Homomorphic Encryption without Bootstrapping. Cryptology ePrint Archive, Paper 2011/277. 2011. Available online: https://eprint.iacr.org/2011/277.
- Dwork, C. Differential privacy. In Proceedings of the International Colloquium on Automata, Languages, and Programming; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–12. [Google Scholar]
- Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends® Theor. Comput. Sci. 2014, 9, 211–407. [Google Scholar] [CrossRef]
- Li, N.; Qardaji, W.; Su, D.; Wu, Y.; Yang, W. Membership privacy: A unifying framework for privacy definitions. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, Berlin, Germany, 4–8 November 2013; pp. 889–900. [Google Scholar]
- Vinterbo, S.A. Differentially private projected histograms: Construction and use for prediction. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer: Berlin/Heidelberg, Germany, 2012; pp. 19–34. [Google Scholar]
- Chaudhuri, K.; Monteleoni, C. Privacy-preserving logistic regression. Adv. Neural Inf. Process. Syst. 2008, 21, 1–10. [Google Scholar]
- Zhang, J.; Zhang, Z.; Xiao, X.; Yang, Y.; Winslett, M. Functional mechanism: Regression analysis under differential privacy. arXiv 2012, arXiv:1208.0219. [Google Scholar] [CrossRef]
- Rubinstein, B.I.; Bartlett, P.L.; Huang, L.; Taft, N. Learning in a large function space: Privacy-preserving mechanisms for SVM learning. arXiv 2009, arXiv:0911.5708. [Google Scholar] [CrossRef]
- Jagannathan, G.; Pillaipakkamnatt, K.; Wright, R.N. A practical differentially private random decision tree classifier. In Proceedings of the 2009 IEEE International Conference on Data Mining Workshops, Miami, FL, USA, 6 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 114–121. [Google Scholar]
- Shokri, R.; Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1310–1321. [Google Scholar]
- Pustozerova, A.; Mayer, R. Information leaks in federated learning. In Proceedings of the Network and Distributed System Security Symposium, San Diego, CA, USA, 23–26 February 2020; Volume 10, p. 122. [Google Scholar]
- Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. Adv. Neural Inf. Process. Syst. 2019, 32, 1–11. [Google Scholar]
- McMahan, H.B.; Ramage, D.; Talwar, K.; Zhang, L. Learning differentially private language models without losing accuracy. arXiv 2017, arXiv:1710.06963. [Google Scholar]
- Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
- Papernot, N.; Song, S.; Mironov, I.; Raghunathan, A.; Talwar, K.; Erlingsson, Ú. Scalable private learning with pate. arXiv 2018, arXiv:1802.08908. [Google Scholar] [CrossRef]
- Sabater, C.; Bellet, A.; Ramon, J. Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties. In Proceedings of the NeurIPS 2020 Workshop on Privacy Preserving Machine Learning-PriML and PPML Joint Edition, Virtual, 11 December 2020. [Google Scholar]
- Grivet Sébert, A.; Pinot, R.; Zuber, M.; Gouy-Pailler, C.; Sirdey, R. SPEED: Secure, PrivatE, and efficient deep learning. Mach. Learn. 2021, 110, 675–694. [Google Scholar] [CrossRef]
- Dong, X.; Yin, H.; Alvarez, J.M.; Kautz, J.; Molchanov, P.; Kung, H. Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks. arXiv 2021, arXiv:2107.06304. [Google Scholar]
- Titcombe, T.; Hall, A.J.; Papadopoulos, P.; Romanini, D. Practical defences against model inversion attacks for split neural networks. arXiv 2021, arXiv:2104.05743. [Google Scholar] [CrossRef]
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Truex, S.; Liu, L.; Gursoy, M.E.; Yu, L.; Wei, W. Demystifying membership inference attacks in machine learning as a service. IEEE Trans. Serv. Comput. 2019, 14, 2073–2089. [Google Scholar] [CrossRef]
- Wang, Z.; Song, M.; Zhang, Z.; Song, Y.; Wang, Q.; Qi, H. Beyond inferring class representatives: User-level privacy leakage from federated learning. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2512–2520. [Google Scholar]
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; PMLR: London, UK, 2020; pp. 2938–2948. [Google Scholar]
- Hou, B.; Gao, J.; Guo, X.; Baker, T.; Zhang, Y.; Wen, Y.; Liu, Z. Mitigating the backdoor attack by federated filters for industrial IoT applications. IEEE Trans. Ind. Inform. 2021, 18, 3562–3571. [Google Scholar] [CrossRef]
- Fang, M.; Cao, X.; Jia, J.; Gong, N.Z. Local model poisoning attacks to byzantine-robust federated learning. In Proceedings of the 29th USENIX Conference on Security Symposium, Boston, MA, USA, 12–14 August 2020; pp. 1623–1640. [Google Scholar]
- Diao, E.; Ding, J.; Tarokh, V. Heterofl: Computation and communication efficient federated learning for heterogeneous clients. arXiv 2020, arXiv:2010.01264. [Google Scholar]
- Xu, Z.; Yu, F.; Xiong, J.; Chen, X. Helios: Heterogeneity-aware federated learning with dynamically balanced collaboration. In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 997–1002. [Google Scholar]
- Vepakomma, P.; Gupta, O.; Swedish, T.; Raskar, R. Split learning for health: Distributed deep learning without sharing raw patient data. arXiv 2018, arXiv:1812.00564. [Google Scholar] [CrossRef]
- Chen, S.; Jia, R.; Qi, G.J. Improved Techniques for Model Inversion Attacks. 2020. Available online: https://openreview.net/forum?id=unRf7cz1o1.
- Zhang, Y.; Jia, R.; Pei, H.; Wang, W.; Li, B.; Song, D. The secret revealer: Generative model-inversion attacks against deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 253–261. [Google Scholar]
- Wu, M.; Zhang, X.; Ding, J.; Nguyen, H.; Yu, R.; Pan, M.; Wong, S.T. Evaluation of inference attack models for deep learning on medical data. arXiv 2020, arXiv:2011.00177. [Google Scholar] [CrossRef]
- He, Z.; Zhang, T.; Lee, R.B. Model inversion attacks against collaborative inference. In Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; pp. 148–162. [Google Scholar]
- Douceur, J.R. The sybil attack. In Proceedings of the Peer-to-Peer Systems: First International Workshop, IPTPS 2002, Cambridge, MA, USA, 7–8 March 2002; Revised Papers 1. Springer: Berlin/Heidelberg, Germany, 2002; pp. 251–260. [Google Scholar]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Pasquini, D.; Ateniese, G.; Bernaschi, M. Unleashing the tiger: Inference attacks on split learning. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual, 15–19 November 2021; pp. 2113–2129. [Google Scholar]
- Hitaj, B.; Ateniese, G.; Perez-Cruz, F. Deep models under the GAN: Information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 603–618. [Google Scholar]
- Erdoğan, E.; Küpçü, A.; Çiçek, A.E. Unsplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning. In Proceedings of the 21st Workshop on Privacy in the Electronic Society, Los Angeles, CA, USA, 7 November 2022; pp. 115–124. [Google Scholar]
- Abedi, A.; Khan, S.S. Fedsl: Federated split learning on distributed sequential data in recurrent neural networks. arXiv 2020, arXiv:2011.03180. [Google Scholar] [CrossRef]
- Yin, D.; Chen, Y.; Kannan, R.; Bartlett, P. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; PMLR: Cambridge MA, USA, 2018; pp. 5650–5659. [Google Scholar]
- Blanchard, P.; El Mhamdi, E.M.; Guerraoui, R.; Stainer, J. Machine learning with adversaries: Byzantine tolerant gradient descent. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
- El Mhamdi, E.M.; Guerraoui, R.; Rouault, S. The hidden vulnerability of distributed learning in byzantium. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; PMLR: Cambridge MA, USA, 2018; pp. 3521–3530. [Google Scholar]
- Shen, S.; Tople, S.; Saxena, P. Auror: Defending against poisoning attacks in collaborative deep learning systems. In Proceedings of the 32nd Annual Conference on Computer Security Applications, Los Angeles, CA, USA, 5–9 December 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 508–519. [Google Scholar]
- Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-iid data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
- Wang, H.; Kaplan, Z.; Niu, D.; Li, B. Optimizing federated learning on non-iid data with reinforcement learning. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 1698–1707. [Google Scholar]
- Fung, C.; Yoon, C.J.; Beschastnikh, I. The Limitations of Federated Learning in Sybil Settings. In Proceedings of the RAID, Virtual, 14–16 October 2020; pp. 301–316. [Google Scholar]
- Cao, X.; Fang, M.; Liu, J.; Gong, N.Z. Fltrust: Byzantine-robust federated learning via trust bootstrapping. arXiv 2020, arXiv:2012.13995. [Google Scholar]
- Melis, L.; Song, C.; De Cristofaro, E.; Shmatikov, V. Exploiting unintended feature leakage in collaborative learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 691–706. [Google Scholar]
- Goddard, M. The EU General Data Protection Regulation (GDPR): European regulation that has a global impact. Int. J. Mark. Res. 2017, 59, 703–705. [Google Scholar] [CrossRef]
- Liu, X.; Li, H.; Xu, G.; Chen, Z.; Huang, X.; Lu, R. Privacy-enhanced federated learning against poisoning adversaries. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4574–4588. [Google Scholar] [CrossRef]
- Ma, Z.; Ma, J.; Miao, Y.; Li, Y.; Deng, R.H. ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning. IEEE Trans. Inf. Forensics Secur. 2022, 17, 1639–1654. [Google Scholar] [CrossRef]
- Wei, K.; Li, J.; Ding, M.; Ma, C.; Yang, H.H.; Farokhi, F.; Jin, S.; Quek, T.Q.; Poor, H.V. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3454–3469. [Google Scholar] [CrossRef]
- Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
- Hao, M.; Li, H.; Luo, X.; Xu, G.; Yang, H.; Liu, S. Efficient and privacy-enhanced federated learning for industrial artificial intelligence. IEEE Trans. Ind. Inform. 2019, 16, 6532–6542. [Google Scholar] [CrossRef]
- Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1333–1345. [Google Scholar] [CrossRef]
- Abuadbba, S.; Kim, K.; Kim, M.; Thapa, C.; Camtepe, S.A.; Gao, Y.; Kim, H.; Nepal, S. Can we use split learning on 1d cnn models for privacy preserving training? In Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, Melbourne, Australia, 10–14 July 2020; pp. 305–318. [Google Scholar]
- Dwork, C. Differential privacy: A survey of results. In Proceedings of the Theory and Applications of Models of Computation: 5th International Conference, TAMC 2008, Xi’an, China, 25–29 April 2008; Proceedings 5. Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–19. [Google Scholar]
- Mireshghallah, F.; Taram, M.; Ramrakhyani, P.; Jalali, A.; Tullsen, D.; Esmaeilzadeh, H. Shredder: Learning noise distributions to protect inference privacy. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, 16–20 March 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 3–18. [Google Scholar]
- Vepakomma, P.; Gupta, O.; Dubey, A.; Raskar, R. Reducing Leakage in Distributed Deep Learning for Sensitive Health Data. 2019. Available online: https://aiforsocialgood.github.io/iclr2019/accepted/track1/pdfs/29_aisg_iclr2019.pdf.
- Li, J.; Rakin, A.S.; Chen, X.; He, Z.; Fan, D.; Chakrabarti, C. Ressfl: A resistance transfer framework for defending model inversion attack in split federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10194–10202. [Google Scholar]
- Vepakomma, P.; Singh, A.; Gupta, O.; Raskar, R. NoPeek: Information leakage reduction to share activations in distributed deep learning. In Proceedings of the 2020 International Conference on Data Mining Workshops (ICDMW), Sorrento, Italy, 17–20 November 2020; pp. 933–942. [Google Scholar]
- Papernot, N.; Abadi, M.; Erlingsson, U.; Goodfellow, I.; Talwar, K. Semi-supervised knowledge transfer for deep learning from private training data. arXiv 2016, arXiv:1610.05755. [Google Scholar]
- Zhang, J.; Gu, Z.; Jang, J.; Wu, H.; Stoecklin, M.P.; Huang, H.; Molloy, I. Protecting intellectual property of deep neural networks with watermarking. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, Incheon, Republic of Korea, 4–8 June 2018; pp. 159–172. [Google Scholar]
- Nagai, Y.; Uchida, Y.; Sakazawa, S.; Satoh, S. Digital watermarking for deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 3–16. [Google Scholar] [CrossRef]
- Jagielski, M.; Oprea, A.; Biggio, B.; Liu, C.; Nita-Rotaru, C.; Li, B. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–24 May 2018; pp. 19–35. [Google Scholar]
- Jia, H.; Choquette-Choo, C.A.; Chandrasekaran, V.; Papernot, N. Entangled Watermarks as a Defense against Model Extraction. In Proceedings of the USENIX Security Symposium, Online, 11–13 August 2021; pp. 1937–1954. [Google Scholar]
| Method | Computation | Communication |
|---|---|---|
| FL [3] | | |
| SL [4] | | |
| SFL [30] | | |
| Type | S1 (M1) | S1 (M2) | S1 (M3) | S2 (M1) | S2 (M2) | S2 (M3) | S3 (M1) | S3 (M2) | S3 (M3) | FL (M1) | FL (M2) | FL (M3) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TA | 0.9655 | 0.9583 | 1.0000 | 0.9655 | 0.8404 | 1.0000 | 0.9655 | 0.5345 | 1.0000 | 0.9655 | 0.5345 | 0.9994 |
| TL | 0.1465 | 0.1441 | 0.0000 | 0.1501 | 0.5115 | 0.0000 | 0.1501 | 0.6908 | 0.0000 | 0.1553 | 0.6910 | 0.0151 |
| VA | 0.9664 | 0.9590 | 1.0000 | 0.9664 | 0.8980 | 1.0000 | 0.9664 | 0.5338 | 1.0000 | 0.9664 | 0.5338 | 1.0000 |
| VL | 0.1433 | 0.1438 | 0.0000 | 0.1473 | 0.4348 | 0.0000 | 0.1472 | 0.6910 | 0.0000 | 0.1472 | 0.6909 | 0.0008 |
| Arch | Training Loss (M1) | Training Loss (M2) | Training Loss (M3) | Training Loss (M4) | Training Loss (M5) | Validation Accuracy (M1) | Validation Accuracy (M2) | Validation Accuracy (M3) | Validation Accuracy (M4) | Validation Accuracy (M5) |
|---|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.1352 | 0.1564 | 0.2390 | 0.0000 | 0.0000 | 0.9691 | 0.9623 | 0.9163 | 1.0000 | 1.0000 |
| S2 | 0.1378 | 0.1568 | 5241.3701 | 0.0000 | 0.0000 | 0.9691 | 0.9623 | 0.5658 | 1.0000 | 1.0000 |
| S3 | 0.1378 | 0.1568 | 0.6839 | 0.0000 | 0.0000 | 0.9691 | 0.9623 | 0.5658 | 1.0000 | 1.0000 |
| FL | 8.48 × 10^15 | 1.03 × 10^16 | 1.19 × 10^17 | 3.20 × 10^17 | 3.43 × 10^17 | 0.9691 | 0.9623 | 0.5658 | 0.0000 | 0.0000 |
| Shadows | Architecture | Median | Minimum | Maximum |
|---|---|---|---|---|
| 1 | Full model | 0.4374 | 0.4302 | 0.6816 |
| | Top layers | 0.5072 | 0.5006 | 0.5100 |
| | Bottom layers | 0.3970 | 0.3046 | 0.7036 |
| 2 | Full model | 0.5464 | 0.1764 | 0.7552 |
| | Top layers | 0.5064 | 0.4186 | 0.6028 |
| | Bottom layers | 0.5072 | 0.3994 | 0.7042 |
| 3 | Full model | 0.6306 | 0.3482 | 0.8566 |
| | Top layers | 0.5170 | 0.4154 | 0.6088 |
| | Bottom layers | 0.4958 | 0.3030 | 0.6030 |
| 4 | Full model | 0.6750 | 0.4644 | 0.8170 |
| | Top layers | 0.4994 | 0.4080 | 0.5838 |
| | Bottom layers | 0.4778 | 0.3004 | 0.6784 |
| 5 | Full model | 0.7130 | 0.4420 | 0.8260 |
| | Top layers | 0.5015 | 0.4542 | 0.6166 |
| | Bottom layers | 0.4881 | 0.2892 | 0.6196 |
| 6 | Full model | 0.7610 | 0.4730 | 0.8984 |
| | Top layers | 0.5058 | 0.4352 | 0.5674 |
| | Bottom layers | 0.3934 | 0.3792 | 0.5972 |
| 7 | Full model | 0.7572 | 0.7324 | 0.8428 |
| | Top layers | 0.4766 | 0.4566 | 0.5402 |
| | Bottom layers | 0.5054 | 0.5016 | 0.5856 |
| 8 | Full model | 0.7546 | 0.5564 | 0.7862 |
| | Top layers | 0.5242 | 0.3714 | 0.5594 |
| | Bottom layers | 0.4052 | 0.3768 | 0.8056 |
| 9 | Full model | 0.8356 | 0.8176 | 0.8530 |
| | Top layers | 0.4482 | 0.4374 | 0.5168 |
| | Bottom layers | 0.5008 | 0.3904 | 0.6004 |
| 10 | Full model | 0.7886 | 0.5324 | 0.8270 |
| | Top layers | 0.5206 | 0.4376 | 0.5776 |
| | Bottom layers | 0.4020 | 0.1910 | 0.8900 |
| Type | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Full (100%) | 0.4999 | 0.4999 | 1.0000 | 0.6666 |
| Test (20%) | 0.5016 | 0.5016 | 1.0000 | 0.6681 |
| Type | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Model-1 | 0.8770 | 0.8749 | 1.0000 | 0.9333 |
| Model-2 | 0.7366 | 0.9319 | 0.7485 | 0.8302 |
| Model-3 | 0.8604 | 0.8604 | 1.0000 | 0.9249 |
| TL | 0.7366 | 0.9319 | 0.7485 | 0.8302 |
| TP | 0.8877 | 0.8845 | 1.0000 | 0.9387 |

