Preprint (Review) · Version 1 · This version is not peer-reviewed

Addressing Bias and Fairness using Fair Federated Learning: A Systematic Literature Review

Version 1: Received: 13 October 2024 / Approved: 14 October 2024 / Online: 15 October 2024 (03:55:27 CEST)

How to cite: Kim, D.; Woo, H.; Lee, Y. Addressing Bias and Fairness using Fair Federated Learning: A Systematic Literature Review. Preprints 2024, 2024101060. https://doi.org/10.20944/preprints202410.1060.v1

Abstract

In machine learning, the rapid growth in data volume and variety demands ethical data use and strict privacy-protection standards. Fair Federated Learning (FFL) has emerged as a key approach for ensuring both fairness and privacy in distributed learning environments. FFL strengthens privacy protection and addresses inherent limitations of conventional federated learning (FL) by promoting fair model training across diverse participant groups, preventing the exclusion of individual users or minorities, and improving overall model fairness. This study discusses the causes of bias and unfairness in existing FL and categorizes FFL solutions by data partitioning strategy, privacy mechanism, applicable machine learning model, communication architecture, and techniques for overcoming heterogeneity. To address the sources of bias and to improve fairness and privacy protection in FL, fairness evaluation indicators as well as applications and open challenges of FFL are discussed. Because this review covers bias, fairness, and privacy issues across all FL mechanisms, it can serve as an important resource for practitioners seeking to implement effective FL solutions.
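As a concrete illustration of one kind of fairness evaluation indicator discussed in the fair FL literature, the minimal Python sketch below computes the variance of per-client test accuracies: a lower variance indicates a more uniform, and in this sense fairer, distribution of model performance across participants. The accuracy values here are hypothetical, and this is only one of several indicators the review surveys, not the paper's own method.

```python
import statistics

def client_accuracy_variance(client_accuracies):
    """Population variance of per-client accuracies.

    A lower value means model performance is more uniformly
    distributed across clients, a common uniformity-based
    fairness indicator in fair federated learning.
    """
    return statistics.pvariance(client_accuracies)

# Hypothetical per-client accuracies of a trained federated model.
accuracies = [0.91, 0.88, 0.72, 0.95, 0.69]

print(f"Mean accuracy: {statistics.mean(accuracies):.3f}")
print(f"Fairness indicator (variance): {client_accuracy_variance(accuracies):.4f}")
```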

Keywords

Distributed computing methodologies; Fair federated learning; Bias; Fairness; Privacy preservation

Subject

Computer Science and Mathematics, Computer Networks and Communications
