Another advantage of XAI is that it can help organizations comply with laws and regulations that require transparency and explainability in AI systems. Within the General Data Protection Regulation (GDPR) of the European Union, transparency is a fundamental principle for data processing [
15]. In practice, however, the complexity of AI algorithms makes it difficult to adhere fully to this principle. Felzmann et al. [
16] argue that transparency as required by the GDPR may in itself be insufficient to achieve the positive goals associated with it, such as increased trust. Instead, they propose understanding transparency relationally: information provision is conceptualized as communication between technology providers and users, and assessments of trustworthiness based on contextual factors mediate the value of transparency communications. The EU is currently working on the Artificial Intelligence Act [
17], which distinguishes between non-high-risk and high-risk AI systems. Non-high-risk systems face only limited transparency obligations, whereas high-risk systems must satisfy requirements concerning quality, documentation, traceability, transparency, human oversight, accuracy, and robustness. Bell et al. [
18] observe that achieving transparency is largely left to technologists and propose a stakeholder-first approach that assists them in designing transparent, regulation-compliant systems, a useful initiative. Besides the GDPR, there are other privacy laws for which XAI may be a relevant development. In the USA, the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule [
19], which is related to the Openness and Transparency Principle in the Privacy and Security Framework. This Openness and Transparency Principle stresses that it is “important for people to understand what individually identifiable health information exists about them, how that information is collected, used, and disclosed, and how reasonable choices can be exercised with respect to that information” [
20]. This emphasis on transparency in the use of health information may point to a need for explainable algorithms here as well. In China, Article 7 of the Personal Information Protection Law (PIPL) prescribes that “the principles of openness and transparency shall be observed in the handling of personal information, disclosing the rules for handling personal information and clearly indicating the purpose, method, and scope of handling” [
21], which also points to a need for transparency in data handling and AI algorithms.