Adversarial Robustness and Privacy Preservation in Federated Learning for Healthcare Applications
Author(s): Kiran Reddy Nallapati, Suresh Kumar Venkataraman, Deepa Chandrasekhar, Ahmed Farouk Siddiqui, Meenakshi Sundaram
Affiliation: Department of Data Science, Al-Falah University, Faridabad, Haryana, India
Page No: 31-34
Volume, Issue & Publication Date: Volume 3, Issue 3, 2026-03-07
Journal: International Journal of Advanced Engineering Application (IJAEA)
ISSN: 3048-6807
DOI: https://doi.org/10.5281/zenodo.19344958
Abstract:
Federated Learning (FL) enables collaborative machine learning across distributed data silos, such as hospital networks, without centralising sensitive patient data, addressing fundamental privacy and regulatory-compliance barriers to training large-scale healthcare AI models. However, the distributed and asynchronous nature of FL introduces novel security vulnerabilities absent from centralised training: Byzantine clients can corrupt the global model through gradient manipulation (model poisoning), label flipping, or backdoor injection, while the shared gradient updates themselves may leak private patient information through gradient inversion attacks. At the same time, the differentially private FL mechanisms that protect against gradient inversion degrade model accuracy, creating a privacy-utility tradeoff that must be characterised before clinical deployment decisions can be made.
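To make the poisoning vulnerability concrete, the following minimal NumPy sketch (an illustration under simplifying assumptions, not the paper's implementation: flat parameter vectors and equal client weights) shows how a single Byzantine client can drag plain FedAvg away from the honest average by submitting a scaled malicious update:

```python
import numpy as np

def fedavg(updates):
    """Plain FedAvg with equal client weights: the coordinate-wise mean."""
    return np.mean(np.stack(updates), axis=0)

rng = np.random.default_rng(42)
# Four honest clients send small, roughly zero-centred updates.
honest = [rng.normal(0.0, 0.1, size=8) for _ in range(4)]

# One Byzantine client submits a scaled malicious direction, a toy
# stand-in for the model-poisoning attacks the paper evaluates.
malicious = 10.0 * np.ones(8)

print(fedavg(honest))                # near the true average (close to 0)
print(fedavg(honest + [malicious]))  # pulled strongly toward the attacker
```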
This paper presents a comprehensive empirical evaluation of four aggregation strategies (a FedAvg baseline plus three Byzantine-resilient defences: Krum, Trimmed Mean, and Flame) against four attack types (label flipping, model poisoning, backdoor injection, and free-riding), using a simulated federated network of five hospital clients whose combined electronic health record datasets comprise 57,918 patient records for disease prediction, under both IID and non-IID data distribution scenarios. We additionally evaluate differentially private SGD (DP-SGD) across six privacy budget levels (ε ∈ {0.5, 1, 2, 3, 5, 8}) and characterise the resulting privacy-utility tradeoff. Flame achieves the strongest overall defence, with attack success rates below 8% for label flipping and model poisoning, while the recommended privacy budget range of ε ∈ [2, 3] preserves F1 scores of 0.791–0.812 (IID) and 0.714–0.741 (non-IID) under rigorous privacy guarantees.
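As a hedged sketch of two of the mechanisms named above (function names, the trim parameter, and the noise scale are illustrative assumptions, not the paper's exact configuration): coordinate-wise Trimmed Mean (Yin et al., 2018) discards the k largest and k smallest client values per coordinate before averaging, and the core DP-SGD step (Abadi et al., 2016) clips each per-example gradient to an L2 bound and adds calibrated Gaussian noise:

```python
import numpy as np

def trimmed_mean(updates, trim_k=1):
    """Coordinate-wise Trimmed Mean: per coordinate, discard the trim_k
    largest and trim_k smallest client values, then average the rest."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[trim_k:len(updates) - trim_k].mean(axis=0)

def dp_sgd_step(per_example_grads, clip_norm=1.0, sigma=1.1, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm clip_norm, sum, add Gaussian noise with std sigma * clip_norm,
    and average. In practice sigma is calibrated to a target (epsilon,
    delta) via a privacy accountant; 1.1 here is only a placeholder."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
```

With the paper's five clients, trim_k=1 keeps the three middle values per coordinate, tolerating one Byzantine client at the cost of some statistical efficiency.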
Keywords: federated learning, Byzantine resilience, differential privacy, model poisoning, backdoor attack, healthcare AI, gradient privacy, Krum, Trimmed Mean, Flame, DP-SGD, non-IID, privacy-utility tradeoff, electronic health records
