Preprint Article · Version 2 · Preserved in Portico · This version is not peer-reviewed

Quantifying the Cross-Sectoral Intersecting Discrepancies Within Multiple Groups Using Latent Class Analysis Towards Fairness

Version 1 : Received: 8 June 2024 / Approved: 11 June 2024 / Online: 11 June 2024 (12:38:23 CEST)
Version 2 : Received: 11 June 2024 / Approved: 12 June 2024 / Online: 12 June 2024 (11:18:25 CEST)

How to cite: Yuan, Y.; Chen, K.; Rizvi, M.; Baillie, L.; Pang, W. Quantifying the Cross-Sectoral Intersecting Discrepancies Within Multiple Groups Using Latent Class Analysis Towards Fairness. Preprints 2024, 2024060700. https://doi.org/10.20944/preprints202406.0700.v2

Abstract

Interest in fair AI development is growing rapidly. The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in access to services, resources, and opportunities, underscoring the importance of fairness in AI. This is particularly relevant as AI tools are increasingly applied to decision-making processes, such as resource allocation and service scheme development, across sectors including health, energy, and housing. Exploring joint inequalities across these sectors is therefore essential for a thorough understanding of overall inequality and unfairness. This research introduces an innovative approach to quantifying cross-sectoral intersecting discrepancies among user-defined groups using latent class analysis. These discrepancies can be used to approximate inequality and provide valuable insights into fairness issues. We validate our approach on both proprietary and public datasets, including the EVENS and Census 2021 (England & Wales) datasets, to examine cross-sectoral intersecting discrepancies among different ethnic groups. We also verify the reliability of the quantified discrepancy through a correlation analysis with a public government metric. Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications. Additionally, we demonstrate how the proposed approach can provide insights into the fairness of machine learning.
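To make the general idea concrete, the following is a minimal, self-contained sketch of the kind of pipeline the abstract describes: fit a latent class model (a mixture of independent Bernoulli distributions, estimated by EM) over binary cross-sectoral indicators, then compare two user-defined groups by the total-variation distance between their average latent-class profiles. All function names, the choice of Bernoulli indicators, and the total-variation summary are illustrative assumptions, not the authors' actual implementation or datasets.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model (mixture of independent Bernoullis) via EM.

    X: (n_samples, n_items) binary array of cross-sectoral indicators
       (e.g. health, energy, and housing deprivation flags -- hypothetical).
    Returns class weights, per-class item probabilities, and posterior
    class-membership probabilities for each sample.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class weights
    theta = rng.uniform(0.25, 0.75, (n_classes, d))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: log P(x, class), stabilised, then normalised posteriors.
        log_lik = (X @ np.log(theta).T
                   + (1 - X) @ np.log(1 - theta).T
                   + np.log(pi))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and item probabilities.
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, post

def group_discrepancy(post, groups, g1, g2):
    """Total-variation distance between two groups' mean class profiles."""
    p1 = post[groups == g1].mean(axis=0)
    p2 = post[groups == g2].mean(axis=0)
    return 0.5 * np.abs(p1 - p2).sum()
```

A usage sketch on synthetic data: assign each sample to one of two groups, draw indicators from group-specific rates, fit the model, and read off `group_discrepancy(post, groups, 0, 1)` as a value in [0, 1], where larger values indicate more divergent latent-class compositions between the two groups.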

Keywords

Machine Learning; AI for Social Science; AI bias

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
