
Exploring the Potential of Artificial Intelligence in Primary Care: Insights From Stakeholders’ Perspectives

Abstract
The use of artificial intelligence (AI) in healthcare has grown in recent years. The UK government recognises AI's potential to enhance NHS services, yet research on AI in primary care (PC) has received limited attention. AI acceptance presents unique challenges in PC, which is characterised by fragmented structures, heterogeneous data sources, and multiple government departments. The organisational levels within PC are categorised as macro, meso, and micro levels. Existing research has predominantly focused on micro-level stakeholders. Our online survey addressed this research gap by encompassing stakeholder perspectives at all levels. The results demonstrate the critical role of meso-level stakeholders in facilitating AI acceptance. Importantly, a lack of understanding of AI terminology and concepts, concerns over potential job displacement, and the importance of empathy in patient care are highlighted as key challenges. Stakeholders also express the need for standardised AI terminology, comprehensive training, and regulatory standards to ensure equitable and effective AI utilisation. This study lays the foundation for future in-depth interviews and further exploration of AI's role in PC. Observations in secondary care indicate that practitioners have substantial concerns about AI, how it works, and its limitations. Explainable AI can help technologists address such concerns, but first, we need to understand primary care's information needs.
Subject: Computer Science and Mathematics - Computer Science

1. Introduction

In recent years, Artificial Intelligence (AI) deployment in the healthcare sector has witnessed a substantial surge, albeit with a predominant emphasis on secondary care [1,2,3,4,5,6,7]. Research in primary care (PC) is in the early stages internationally [8] and has been largely disregarded in the United Kingdom (UK) [9]. This limited uptake of AI technology runs counter to the NHS Long Term Plan, which makes PC digitalisation a top priority [10]. Notably, the UK government has articulated the potential of AI to enhance the quality and quantity of healthcare services across the NHS [11]. There is, however, considerable unexplored potential for AI in PC settings in the UK, within diagnostics and in systems to improve efficiency [12]. Given the structural and organisational differences between secondary care and PC, it is crucial to understand the factors that may influence AI technology acceptance in PC [13,14].
Panch et al. describe the NHS system as fragmented, attributing this fragmentation to the vast amounts of data scattered across numerous locations, which acts as a key barrier to AI integration [15]. This issue was also highlighted by the UK government in 2018, when the House of Lords Select Committee released a report investigating NHS data and its ownership, ultimately leading to the enactment of new legislation designed to protect all UK data, the National Data Strategy [16,17]. Through the National Data Strategy, the government aims to improve the quality of data, enabling its efficient use to unlock its value and potential for growth [17]. It is possible that the structure of PC, a sector of private businesses operating under one main contract with the NHS, negatively affects the quality of the available data and, in turn, the acceptance of AI. Currently, the NHS consists of four distinct healthcare systems run by the four governments of the UK [18]. According to NHS Digital, there were 6,376 PC practices in England in 2022 [19], each operating independently. The management of each practice decides on the Electronic Health Record (EHR) system that is used, the additional software systems linked to the EHR, and the number and type of staff required, which makes integration and resource sharing across PC difficult.
Asthana et al. describe the stakeholder levels of the NHS and PC as macro, meso and micro levels [20]. The UK government has established departments dedicated to governing, assisting with implementation, and conducting research for AI within healthcare, such as the National AI Strategy and the NHS AI Lab [11,21]. These departments serve as critical players at the macro level, which is primarily composed of NHS policymakers seeking to optimise technology utilisation [11]. Meso-level stakeholders include PC managers, technology providers, and secondary care health boards. To date, research has focused predominantly on the perspectives of technology providers [22]. The micro level, on the other hand, consists of individual stakeholders, so perspectives may vary, reflecting differing expectations and opinions regarding AI. Micro-level stakeholders consist of all users of PC, such as doctors, nurses, administrators, and patients. Stakeholders within PC have access only to the data required for their specific job role. Consequently, the complex organisational structure of the NHS and its multiple levels act as significant barriers to integrating technology into the healthcare system [20,23]. The following section explores the factors highlighted in the literature as barriers to AI acceptance.

1.1. Barriers to AI Acceptance within PC

Numerous studies have examined the acceptance of AI within PC from the perspectives of doctors and other micro-level stakeholders [24,25,26,27]. However, meso-level stakeholders have been largely overlooked, while macro-level stakeholders are generally in favour of integrating AI into PC [11,21]. Managers within PC are the key decision makers when new technologies are introduced. They are responsible for the general running of the business and for disseminating information from the macro level to all micro-level stakeholders. A major part of a manager's role involves supplying information to the macro level through audits and record keeping, enabling payments for completed work. A study conducted by Kolbjørnsrud et al. investigated managers' perceptions of AI across various sectors of society [28]. The findings revealed that macro-level managers would accept and integrate AI into their job roles, whereas meso-level managers expressed concerns that AI would replace their job functions. Although this study covered areas outside of healthcare, the perceptions of managers were similar to the findings of our online survey. Another study, by Ferreira, Ruivo and Reis, emphasised the significance of understanding the perspectives of meso-level stakeholders, even though their viewpoints are not always presented in the literature [29]. They specifically looked at how machine learning could bring value to the data held by businesses. The study found that data scientists and managers have differing viewpoints on the value of data; therefore, explaining how machine learning could create value may encourage acceptance and trust.
Morrison et al. conducted a series of 12 interviews to gather insights from macro-level stakeholders regarding the barriers to AI adoption within the NHS [14]. Regulatory constraints, inadequate training, cost concerns, and unsatisfactory IT infrastructure were some of the obstacles identified by the study, along with a fundamental lack of understanding of AI and its associated terminology. To address these issues, it was suggested that the NHS should use a standardised terminology to describe AI and that training should be provided for all users. Regulatory compliance within the NHS is complex, necessitating adherence to general regulations such as the General Data Protection Regulation (GDPR), the Privacy and Electronic Communications (EC Directive) Regulations 2003, and the Public Records Act 1958 [30,31,32]. In addition, specific NHS regulations must be followed, such as the NHS Act 2006, the Health and Social Care Act 2012 and Confidentiality: NHS Code of Practice 2003 [33,34,35]. Regulatory standards were also recommended by Morrison et al. as a way of ensuring standardisation and the effective deployment of systems where they would be most beneficial and can make the most impact [14]. In a separate study by Ganapathi and Duggal, doctors working with AI systems identified regulations and infrastructure as key barriers to AI implementation within the NHS [36]. The NHS AI Lab is already addressing these areas in collaboration with technology companies, academics, and various departments within the UK government [21]. By facilitating a platform for collaboration, the NHS AI Lab empowers AI developers to engage in mutually beneficial cooperation, exchange best practices and stay abreast of the guidance and regulations essential for the seamless integration of AI systems within the NHS. Moreover, the NHS AI Lab emphasises the necessity for tailored AI training modules designed to meet the diverse requirements of stakeholders, delineating strategies for the planning, development, and delivery of targeted AI training packages [37].
In a study by Darcel et al., the regulation of AI was discussed as a factor affecting stakeholders' trust in AI, along with implementation requirements, possible bias or inequity, and acceptance by stakeholders [23]. The study found that implementation requirements could be a major problem area, as each individual business may have differing infrastructure or data requirements. The inconsistency of data across PC sites could also create inequity and increase bias, as highlighted by Leslie et al. [38], affecting the acceptance of AI by stakeholders.
The Data Ethics Framework sets out guidance on the transparency, fairness, and accountability of AI, enabling stakeholders to assess the ethical considerations of systems while ensuring responsible and appropriate use of patient data [39]. Within this framework, the concept of explainability is discussed as a means of enhancing the transparency of AI algorithms. Von Eschenbach posits a direct correlation between the opaqueness of black-box AI models and the lack of trust in AI technologies, emphasising that eXplainable AI (XAI) can increase the transparency of AI systems [40]. Markus and colleagues further expound on XAI as a mechanism for developing trustworthy AI for the healthcare sector [41]. Amann et al. explored explainability within AI and concluded that fostering explainability ensures that patients remain central to healthcare delivery [42].
The current patient care process adopted by healthcare professionals is heavily focused on person-centered care and communication. Person-centered care considers all factors associated with a patient, enabling individualised decisions about the required care. Coulter and Oldham emphasise the importance of understanding an individual's characteristics when treating a patient [44], echoing Hippocrates' wisdom that "it is more important to know what sort of person has a disease than to know what sort of disease a person has". The key characteristics associated with patient-centered care are empathy, compassion, and trust [43]. When assessing the needs of a patient, the conversation with the patient forms the foundation of person-centered care and fosters empathy [44]. Empathy, as categorised by Montemayor et al., comprises emotional, cognitive, and motivational components [45]. Emotional and motivational empathy stem from experiencing emotions that motivate us to show concern, while cognitive empathy involves detecting or recognising someone's emotional state, a quality that could potentially be introduced to AI. In person-centered care, all three types of empathy are employed, providing a holistic approach. However, person-centered care, trust and empathy have been cited in the literature as potential barriers to the acceptance of AI in PC [46]. The pursuit of efficiency in healthcare through the employment of AI has, in some cases, shifted the focus of the NHS away from empathy and patient care, potentially resulting in their neglect [43]. Patient-centered care relies on the ability to explain the diagnosis, give justification for any advice, and offer alternative treatments if necessary [47]. Alam and Mueller studied XAI explanations in the context of patient-centered care, comparing local and global explanations given at different time points within clinical scenarios. They concluded that an XAI system would need to provide both local and global explanations at varying time points, based on the changing information provided by the patient, and that the format of the explanation was also important for understanding and trust in the system. These findings mirror the adaptive process used by doctors employing the patient-centered care method [48].

1.2. Research Aims

This research aims to gain a better understanding of the different levels of stakeholders and their unique requirements for promoting acceptance and trust in AI within the context of PC. The existing literature indicates that numerous factors contribute to the lack of trust and acceptance of AI within PC. By gathering more perspectives from stakeholders across all organisational levels of PC, this research explores whether there are differing requirements that could influence acceptance and trust of AI. The research questions addressed within this study to advance understanding of stakeholder-level requirements are:
  • What are the social barriers and challenges hindering the trust and acceptance of AI within PC?
  • What are the specific requirements and expectations of different stakeholder levels within the PC domain for AI, considering their computing needs?
  • How can the specific requirements be addressed using explainable AI (XAI) techniques at each stakeholder level?
Having established the significance of AI acceptance in PC and the complex stakeholder landscape within the NHS, our study aimed to bridge this research gap through a comprehensive online survey, with in-depth interviews planned as a follow-up, as outlined in the following methods section.

2. Methods

To address this research gap, we conducted an online survey to capture the perspectives of stakeholders at all levels of PC [9]. The survey questions were formulated based on a conceptual framework (shown in Figure 1) that integrated two prominent models: the Technology Acceptance Model 3 (TAM3) [49] and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [50]. The purpose of combining these models was to gain a more comprehensive understanding of how stakeholders perceive AI, both individually and across organisational levels. TAM3 was developed to help managers understand the factors influencing technology acceptance and includes characteristics such as perceptions of external control and computer anxiety. UTAUT2 introduces characteristics such as 'social influence' and 'facilitating conditions', offering insights into technology acceptance from an individual perspective. Recognising the critical role of trust in AI acceptance, we also incorporated the trust characteristics of "fairness", "accountability", "transparency" and "ethical considerations" into the combined TAM3 and UTAUT2 framework, as trust emerged as a key factor affecting acceptance of AI in the existing literature.
Using the conceptual framework, a series of hypotheses were defined, which framed the online survey questions into seven key question areas, including levels of acceptance, degrees of influence, barriers, and benefits of AI for PC. The survey questions were created using a systematic iterative approach, drawing upon constructs and questions from previously validated surveys [51,52]. To ensure the quality and rigour of our survey, we collaborated with three AI experts to develop the questions and obtained feedback from the Open University Human Research Ethics Committee (HREC). Furthermore, we conducted a pre-test involving five experts in the field of PC before securing HREC approval. The survey comprised a mix of question types, including closed-ended, open-ended, Likert scale, rating, and multiple-choice questions, and was thoroughly piloted and tested with PC employees and colleagues prior to its release. The survey was created and hosted on JISC, and a link was posted on Prolific to recruit participants exclusively from PC; a pre-screener survey ensured that only PC participants were engaged. The participant information and consent forms were placed at the start of the survey to enable participants to make an informed decision about whether to continue. A few participants were recruited through snowballing, using email invitations sent to known employees in PC; the email asked the recipient to forward it to colleagues in PC who might be interested in taking part. The link directed the snowballing participants to the JISC website, where the survey questions, participant information and consent form were accessible at the start of the survey. This study used a mixed-methods between-subjects approach, which enabled levels of influence and acceptance to be understood across the organisational levels. Understanding which stakeholder level holds the influence over AI acceptance enables us to target the identified barriers specific to that group. After cleaning and coding the survey data, we conducted statistical analysis on the quantitative data using descriptive and inferential statistical techniques [53], while thematic analysis techniques were used to identify themes within the qualitative data, particularly from the open-ended questions [54]. The findings stemming from this analysis are discussed in the next section.
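To illustrate the kind of descriptive analysis described above, the following is a minimal sketch in Python using pandas. The file name and column names are hypothetical placeholders for illustration, not the study's actual dataset or variables.

```python
# Minimal sketch of the descriptive analysis described above.
# Assumes a hypothetical long-format CSV with one row per respondent and
# columns "level" (macro/meso/micro) and "uses_ai" (yes/no/unsure);
# the file and column names are illustrative, not the study's actual data.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Overall proportion of respondents reporting AI use in daily life.
print(responses["uses_ai"].value_counts(normalize=True).round(2))

# Percentage breakdown of AI use within each stakeholder level.
by_level = pd.crosstab(responses["level"], responses["uses_ai"], normalize="index")
print((by_level * 100).round(1))
```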

3. Results

A total of 131 participants took part in the online survey, comprising 31 macro-level, 48 meso-level and 52 micro-level stakeholders. The survey employed Likert scale and open-ended questions to gather feedback from the participants. Descriptive analysis of the data revealed that 74% of participants were female and 23% were male. The data suggest that AI is being used in daily life, with 49% of respondents reporting that they use AI (as detailed in Figure 2), although a small proportion of respondents (5%) expressed uncertainty about their AI usage. The top three areas where respondents reported using AI in their daily lives were banking (18%), voice assistants (18%) and maps (15%). When participants were specifically asked about the use of AI in PC, only 22% of respondents believed AI was being used (as shown in Figure 2). A closer examination of where stakeholders believed AI was being used in PC showed that macro- and meso-level stakeholders believed AI was used for administrative tasks, while micro-level stakeholders believed it was used for triaging patients. This finding reflects a disparity between the use of AI in general life and its application within PC. Furthermore, when participants were asked about their understanding of AI, it was evident that, despite using AI in their daily lives, not all participants fully grasped the concept: 35% of macro-level, 21% of meso-level, and 23% of micro-level participants reported some degree of uncertainty regarding AI. When asked about the factors that would enable them to use AI in PC, macro-level stakeholders highlighted the need for infrastructure and training (19%), whereas meso-level (31%) and micro-level (25%) stakeholders expressed a preference for AI that could help reduce waiting lists.
A scoping review by Lebcir et al. shed light on varying levels of influence and organisational factors across PC stakeholder levels, revealing barriers to AI acceptance [55]. Stakeholders at different levels exhibited diverse and sometimes conflicting perspectives on AI's opportunities and limitations within PC. This was reflected in the survey results, with macro-level stakeholders identifying a lack of acceptance from both staff and patients as the most significant barrier. In contrast, meso-level stakeholders pointed to AI errors as the biggest factor affecting AI acceptance, while micro-level stakeholders believed that the lack of empathy associated with AI use was the biggest limitation (as presented in Figure 3).
The discernible outcome of the online survey was that further investigation is required, especially of meso-level stakeholders' views, as they have the most influence over AI acceptance (as depicted in Figure 4). Meso-level stakeholders are aware of their significant role in AI integration: they know that they hold influence over the integration of AI, but they are also concerned about potential AI errors. One respondent, P119, expressed this concern succinctly: "I feel that AI is risky within PC because every individual is different and has a different clinical demographic. Whilst one treatment or diagnosis can be useful in one patient, the same might not apply to another. I believe the consequences of AI can be life-threatening if not used correctly".
This is consistent with the role of meso-level managers, who are responsible for audits and reporting to the macro level and could be held accountable for any errors made by AI. Even if AI were proven successful, they might still be concerned, as they could lose their jobs if an AI could complete their tasks more efficiently. Another respondent, P129, voiced a similar sentiment: "there is a risk that AI could be used to replace human healthcare workers, rather than augment their skills, which could have negative implications for the quality of patient care".
These apprehensions were echoed by several other participants, including P74, who mentioned "the reduction of job roles for current staff", P89, who noted that the "workforce could lose their jobs", P106, who warned that "staff could be lost", and P108, who stated that "people would not want AI to take their job".
Macro-level stakeholders were more concerned about the acceptance of AI, as reflected in the comments from several participants. For instance, P66 expressed that "People might be apprehensive about using it", P83 described "Peoples reluctance to accept change", and P103 mentioned "Barriers to change" within PC. This concern is understandable: macro-level stakeholders are promoting the use of AI as a benefit to the NHS, so if meso-level stakeholders do not embrace AI and advocate for its adoption among micro-level stakeholders, AI adoption will fail. A key factor affecting the acceptance of AI is the lack of understanding of AI, which was described as a barrier by participants across all stakeholder levels and was also a key finding in the study by Morrison et al. [14].
Micro-level stakeholders, who have the most direct contact with patients, identified the lack of empathy as a key limitation of AI. A major part of their interactions with patients is communication: asking the right questions and, more importantly, listening to the patient's needs and concerns. One participant, P34, articulated this concern clearly, emphasising the importance of ethical decision-making and the complexity of human decision processes. They questioned whether AI could be trusted to make the best decisions, as human decisions rely on both emotion and logic, informed by compassion, and expressed concerns that AI might diminish the human connection central to patient care: "Decisions made need to be ethical and this would be unknown from an AI. Human decision making is complex - would we trust an AI to make the best decision. Human decisions need to draw upon emotion as well as logic and need to be informed by compassion. Connection to other humans is key - would an AI reduce that and so miss important things about what it means to be human?" A key principle of patient-centered care is the ability to listen to the patient with a holistic approach. Micro-level stakeholders are concerned that AI would not be able to replicate this technique, meaning patient care would be undermined.
Staff training was identified as a key factor by all levels of stakeholders, linked with the fear of job loss through being unable to use AI effectively. This concern was expressed at all stakeholder levels: P76 noted "people being unable to use and losing jobs" (macro), P90 mentioned "staff who are not IT literate" (meso) and P32 remarked "I think a general mistrust of "robots" among older healthcare workers would be a factor, as they are unlikely to want to do additional training and place their faith in a machine" (micro). This shared apprehension about the need for staff training and its implications for job security underscores the multifaceted challenges associated with AI integration in healthcare.
Figure 4 presents the perceived influence of stakeholders over the introduction of AI. It shows that meso-level stakeholders believed they held the influence (42%), while only 13% of macro-level stakeholders believed they had influence. Moreover, 44% of meso-level stakeholders believed that macro-level stakeholders would want them to use AI, while 44% of micro-level stakeholders expressed their willingness to use AI if others were using it. The results clearly identify meso-level stakeholders as the key to acceptance of AI within PC; engaging with them is therefore crucial for successful implementation in this context. A further study is needed to validate these findings and to examine in more detail the factors that affect stakeholder perceptions of AI's influence and trust. This follow-up study would delve into the intricacies of stakeholder perspectives to provide deeper insight into the dynamics at play in AI acceptance and implementation within PC.
Participants were asked what factors would enable them to employ AI within their specific job roles. This revealed some differences between the stakeholder levels, with macro-level stakeholders suggesting that the important factors for them would be the ability to understand AI decisions and adequate training on the systems to be used. The need for clarity was emphasised by several participants: P71 declared "It must be clear at all times", P96 wanted "A good understanding of how it works", and P100 suggested that AI decisions should be "Easy to understand why they have made those decisions". This is not surprising, given that 35% of macro-level stakeholders reported some uncertainty in their understanding of AI.
Meso-level stakeholders identified specific factors critical for the trust and acceptance of AI. One key requirement was the ability of AI to demonstrate tangible improvements before it would be trusted and accepted by users. This perspective was explicitly communicated by P125, who described the need for proof of concept: "for AI to be accepted in my role I'd need to see evidence of how it could operate", with the general feeling expressed succinctly by P73, who remarked that they would need "lots and lots of proof that it works". Furthermore, P85 suggested that "trust, accuracy and ease of use for staff" would be required for NHS stakeholders, while P122 advocated that "acceptance by patients" would be needed. These insights shed light on the specific expectations of meso-level stakeholders regarding AI adoption.
Micro-level stakeholders, like their counterparts at other levels, also articulated several key requirements for enabling AI integration. These included training, trust, and acceptance, as well as the ability to implement AI effectively within their workplace. The ability to implement AI includes factors such as: the associated costs of AI, the adequacy of the current IT infrastructure, the time required to implement the system, and the need for appropriate staff training. Several micro-level participants identified aspects of implementation as key requirements. IT infrastructure was a requirement for P58, who stated that "Improved investment in IT in the NHS currently is very lacking", while P26 stressed that AI would need to save time: "It would need to save me time, not add to my workload. Current systems for example, people are doing things manually or on paper, then having to input data on to a computer system as well, which is time consuming and takes time away from patients." Another participant, P32, also mentioned efficiency, especially where costs are concerned, remarking "Efficiency, it would need to be proven to be efficient and cost effective before being rolled out", while P101 suggested "Time to implement (training)".
Figure 5. Categories identified as enablers of acceptance of AI in PC.
When looking at the categories identified by participants as areas where AI could assist in PC, all stakeholder levels identified administrative tasks (as illustrated in Figure 6). This is not surprising, as administrative tasks are typically mundane tasks that must be completed and managed efficiently. Several participants from different stakeholder levels pointed to potential benefits in this regard: P76 cited "menial tasks and to be more productive" (macro), P105 discussed the "automation of routine tasks and processing of data" (meso), and P40 envisioned AI assisting "with the admin tasks involved in my job such as writing letters" (micro). Tasks identified by macro-level stakeholders as areas where AI could be beneficial included "record keeping" (P93), "ordering" (P38) and "transcribing patient notes / letters" (P77). Meso-level stakeholders identified further potential areas for AI support, such as "new patient documentation / records – summarising" (P81), "data reporting. I regularly run activity reports for numbers of patients seen. Also, for the backing data for my invoices" (P42), "AI could help address a complaint from a patient" (P107) and "invoicing" (P116). Micro-level stakeholders suggested tasks such as "creating timesheets and assigning sessions" (P52), "to book annual leave" (P63) and "patient info leaflets, referral letters" (P64) as areas where AI could play a constructive role in streamlining processes.
When participants were asked to identify the trust characteristic they perceived to be most important, interestingly, the results showed no difference across the stakeholder levels. Ethical considerations were regarded as the most important trust characteristic by all groups, followed by fairness, then accountability, with transparency perceived as the least important (as displayed in Table 1). This alignment in the order of importance of the trust characteristics suggests a consensus on core values and priorities across the stakeholder spectrum.
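As a worked illustration of this comparison, a chi-square test of independence can be applied to the counts in Table 1. The sketch below, using scipy, is illustrative of the inferential techniques mentioned in the methods section rather than necessarily the exact test performed in the study; in practice, the small expected counts in the transparency row would suggest an exact test instead.

```python
# Chi-square test of independence on the Table 1 counts: does the choice of
# the most important trust characteristic depend on stakeholder level?
# Illustrative only; not necessarily the exact test used in the study.
from scipy.stats import chi2_contingency

# Rows: fairness, accountability, transparency, ethical considerations.
# Columns: macro, meso, micro (counts taken from Table 1).
counts = [
    [10, 15, 16],
    [4, 7, 7],
    [1, 2, 2],
    [16, 24, 27],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A large p-value is consistent with the reported finding that the ranking
# of trust characteristics does not differ across stakeholder levels.
```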

4. Discussion

4.1. Research Objective 1: Barriers and Challenges Affecting Trust and Acceptance

This research has gathered the views of all stakeholder levels within PC. The findings suggest that AI is increasingly prevalent in people's daily lives. However, the stakeholders also declared a lack of understanding about AI, which brings into question the reliability of the responses. According to the results, stakeholder perceptions of influence, trust, and intention to implement AI are shaped by their respective levels within the healthcare system. Stakeholders at the meso level were found to have the greatest perceived influence, emphasising the importance of engaging with this group to drive AI acceptance and adoption. Moreover, the research highlights the interplay between trust and the intention to use AI: as some stakeholders within PC begin to accept AI, this can positively influence others, ultimately promoting broader AI adoption. The study identifies several barriers to AI acceptance within PC: lack of acceptance by users, lack of empathy for patients, and concerns about the possibility of AI errors. Ethical considerations and the fairness of AI decisions were highlighted as influential characteristics for garnering trust in AI. Participants also felt that a lack of understanding of AI technology and limited resources hindered AI acceptance, and suggested that education and training of healthcare professionals on AI are necessary to ensure successful implementation of AI-related technology within PC. In summary, the research underscores the need for tailored strategies to engage stakeholders at different levels and build trust in AI technology to drive its effective adoption in PC. XAI techniques for explaining decisions have been highlighted as a means of garnering stakeholder trust in AI systems [39], and XAI has also been suggested as a potential solution for a patient-centered care approach utilising AI [48]. Therefore, further research in the field of explainability for PC would be beneficial.

4.2. Research Objective 2: Requirements and Expectations of Different Stakeholder Levels

Different stakeholder levels in PC have distinct requirements and expectations for AI, reflecting their unique job roles. Macro- and micro-level stakeholders would expect to receive training on any new AI system deployed in PC; this is already being addressed, with AI training to be given to NHS employees at all levels [37]. Macro-level stakeholders also expressed a desire to understand any decisions made by AI, a barrier that current XAI approaches can help address, as articulated by Markus et al. in a paper describing how XAI could potentially be the key to creating trustworthy AI for healthcare scenarios [41]. Meso-level stakeholders, by contrast, would expect AI to demonstrate improvements before they would use it; demonstrating improvements would potentially also satisfy their further requirement of acceptance and trust. Micro-level stakeholders likewise require AI to be trustworthy for acceptance, while also expecting AI to be implementable in a timely, cost-effective and efficient manner, seamlessly integrating with the NHS infrastructure. These expectations align with the findings of Morrison et al. [14].
The infrastructure and capability of integrating AI into the numerous PC settings must be considered for the implementation of AI within PC. In addition to adhering to the universal guidelines and regulations, any AI developed for the NHS must also align with NHS-specific regulations [30,31,32,33,34,35]. Since 2013, the UK government has adopted a cloud-first approach to technology, with an expectation that all systems should be able to communicate with each other, thus enabling more efficient data management. To ensure robust data management, standards for data protection, technical security, interoperability, usability, and accessibility are addressed through the Digital Technology Assessment Criteria (DTAC), which focuses on data security [56]. Data security is considered the primary concern among patients, especially when AI is involved in processing their information [57]. Consequently, Application Programming Interfaces (APIs) are used throughout the NHS to integrate systems and exchange information safely. There are currently five Fast Healthcare Interoperability Resources (FHIR) standard APIs being used within PC:
  • GP Connect Access Document – retrieve unstructured documents from a patient’s record.
  • GP Connect Access Record: HTML – view a patient’s record with read only access.
  • GP Connect Access Record: Structured – retrieve structured information from a patient’s record.
  • National Data Opt-Out – capture patients’ preferences towards the sharing of their data for research purposes.
  • Summary Care Record – access an electronic record containing important patient information.
There is also one API in development for PC, ‘GP Connect (Patient Facing) Access Record’, which grants patients access to their own records [58]. The APIs use standard protocols including Internet Protocol (IP), Transport Layer Security (TLS), Simple Object Access Protocol (SOAP) and Lightweight Directory Access Protocol (LDAP). They enable stakeholders to access secure data through computing solutions such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), providing scalable and cost-effective tools [59].
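To make this integration pattern concrete, the sketch below performs a generic FHIR RESTful read over TLS using Python's requests library. The base URL and bearer token are placeholders; real GP Connect or Summary Care Record calls additionally require NHS-specific authentication, headers and information-governance approvals, so this is a generic FHIR illustration rather than a production NHS integration.

```python
# Generic FHIR RESTful read over TLS (HTTPS), illustrating the integration
# pattern underlying the NHS APIs listed above. The base URL and token are
# placeholders; real GP Connect calls need NHS-specific authentication.
import requests

FHIR_BASE = "https://example.org/fhir"  # hypothetical FHIR server
TOKEN = "..."                           # placeholder bearer token

response = requests.get(
    f"{FHIR_BASE}/Patient/12345",       # standard FHIR read: GET [base]/Patient/[id]
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/fhir+json",  # standard FHIR JSON media type
    },
    timeout=10,
)
response.raise_for_status()

patient = response.json()               # a FHIR Patient resource as JSON
print(patient.get("resourceType"))      # expected: "Patient"
```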
As more tasks or processes are delegated to AI systems, there will be a need for updated guidance and regulations to ensure accountability and fairness [60]. In the UK, the Data Ethics Framework currently provides guidance on ethical considerations, addressing key concerns of fairness, transparency, and accountability, while also emphasising the need for understandable explanations for all stakeholders [39].
Adadi and Berrada classify the reasons that explanations may be required into four areas: explain to justify, explain to improve, explain to discover, and explain to control [61]. XAI techniques are categorised into local and global explanation approaches, and the most common techniques are feature relevance, example-based, comparison-based and counterfactual explanations (as shown in Figure 7) [62]. Petch et al. also explain that different stakeholders may require different explanations, or multiple explanations [62]. Our study has already identified different requirements from the different levels of stakeholders; therefore, identifying the techniques required for each stakeholder level may substantiate this assumption.
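As a concrete instance of one of these techniques, the sketch below computes a global feature-relevance explanation using scikit-learn's permutation importance on synthetic data. It illustrates the feature-relevance category in Figure 7 and is not a method used in this study; local techniques such as counterfactual explanations would instead answer, for a single case, what minimal change would alter the model's output.

```python
# Global feature-relevance explanation via permutation importance: how much
# does shuffling each feature degrade model performance? Synthetic data;
# illustrates the "feature relevance" category in Figure 7.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance on held-out data gives a model-agnostic, global
# ranking of the features the model's predictions rely on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```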

4.3. Research Objective 3: Addressing the Specific Requirements

The results of the study have identified several requirements, but the key issue across all stakeholder levels was a lack of understanding of AI, which stakeholders at every level suggested training would resolve. A lack of understanding can impede trust in a system; however, understandable explanations through XAI solutions have been put forward as a possible response to this problem [41]. The UK government is addressing the training requirements through the NHS AI Lab and other departments throughout the NHS [21,37].
To address the specific interests of PC stakeholders, we investigated the current research for possible solutions in the fields of AI and XAI. Administrative tasks were identified by all stakeholder levels as a key area where AI could be utilised, and were the focus of a paper by Davenport and Kalakota, who describe AI systems and give examples of where they could be effectively utilised in healthcare [63]. Robotic Process Automation (RPA) was identified as a technology suited to mundane, repetitive administrative tasks. In secondary care, RPA systems have been used for updating patient records and billing, and machine learning has been used to match data across heterogeneous databases for insurance audits; it could therefore be useful for claims audits in PC. Another area identified by the stakeholders where AI could be utilised was booking appointments. Natural Language Processing (NLP) applications such as chatbots could be used to allow a patient to book an appointment, and could also be deployed for simple tasks such as ordering repeat prescriptions [63]. However, it is important to note that the successful implementation of chatbots and similar AI-driven solutions in healthcare would require acceptance from patients. Consequently, further research would be necessary to explore patient attitudes and perceptions in this area to ensure successful adoption.
In a study conducted by Pyne et al., several NLP text classifiers were evaluated in the context of dialogues between medical practitioners and patients. This research is at an early stage, and the modest accuracy of 57% indicates that further research in this field is called for [64]. A scoping review by Sarensen et al. centered on the use of supervised machine learning techniques for administrative tasks within PC [65]. The review revealed that research on machine learning for administrative tasks is currently limited. However, insights garnered from the work of Willis et al. suggest that approximately 44% of the administrative tasks and processes within PC exhibit potential for automation [66].
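To indicate what building such a text classifier involves, the following is a minimal TF-IDF plus logistic regression baseline in scikit-learn. The example utterances and the admin/clinical labelling task are invented for illustration; this is not the pipeline evaluated by Pyne et al.

```python
# Minimal NLP text-classification baseline: TF-IDF features plus logistic
# regression, the kind of reference point often used when evaluating
# classifiers on consultation dialogue. Utterances and labels are invented;
# this is not the pipeline evaluated by Pyne et al.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I would like to order a repeat prescription",
    "Can I make an appointment for next week",
    "I have had a persistent cough for three weeks",
    "The pain in my knee is getting worse",
]
labels = ["admin", "admin", "clinical", "clinical"]  # hypothetical task

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, labels)

print(classifier.predict(["please renew my prescription"]))  # likely ['admin']
```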
XAI models for administrative tasks have already begun to be explored in areas other than healthcare, such as data management, record keeping, financial decision making, auditing, text classification and hiring [67,68,69,70,71,72]. Further research into machine learning, NLP, chatbots, and RPA is warranted, particularly as these AI approaches have been identified as valuable tools for handling administrative tasks within PC. Investigating the specific applications, challenges, and benefits of these technologies in the healthcare context can contribute to a more comprehensive understanding of their potential impact on administrative efficiency and overall patient care in PC settings, and can inform the development of targeted strategies for implementing and optimising XAI solutions in these areas.
The next step in the research will involve conducting in-depth interviews with participants who previously took the survey, enabling a deeper analysis of the data, following the framework outlined in Figure 8. Scenarios will be used to describe AI systems covering the four key models of AI tools: classification, prediction, optimisation, and generative models. The scenarios will be drawn from sectors other than healthcare, with participants encouraged to consider parts of the systems described for usefulness within their specific job roles in the healthcare domain. Employing scenarios from different sectors minimises bias within the interview questions and empowers participants to initiate discussions about processes and tasks relevant to their roles, ensuring a more natural and informative conversation. The outcomes of the online survey and the subsequent in-depth interviews will be combined, and a mixed-methods between-subjects analysis will examine the factors affecting trust in AI. Thematic analysis will then be employed to identify areas within PC where XAI could prove beneficial to stakeholders, enabling a more thorough requirements analysis. It is clear from the online survey results that understanding and patient-centered care are key concerns for PC stakeholders, and XAI could potentially address these concerns. Therefore, further research into the format of explanations, and whether more than one explanation is needed, will be useful in taking this research to the next phase. Understanding the requirements that matter most to stakeholders will enable possible solutions to be identified using XAI techniques.

Author Contributions

This research article was conceptualised by T.S. and D.K. The methodology, investigation, resources, formal analysis, data curation, and original draft preparation were completed by T.S. The research project was supervised and administered by D.K., T.F. and A.T. The final written article was reviewed and edited by D.K. and T.F. The article was validated by all the authors, T.S., D.K., T.F. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been carried out with financial support from EPSRC Training Grant DTP 2020-2021, Open University.

Acknowledgments

I would like to thank Simon Sides, BSc, for all his support throughout my research journey.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bohr, A.; Memarzadeh, K. The Rise of Artificial Intelligence in Healthcare Applications. Artificial Intelligence in Healthcare 2020, 25–60.
  2. Reddy, S.; Fox, J.; Purohit, M. P. Artificial Intelligence-Enabled Healthcare Delivery. J R Soc Med 2019, 112, 22–28.
  3. Chen, Y.; Stavropoulou, C.; Narasinkan, R.; Baker, A.; Scarbrough, H. Professionals’ Responses to the Introduction of AI Innovations in Radiology and Their Implications for Future Adoption: A Qualitative Study. BMC Health Serv Res 2021, 21.
  4. Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial Intelligence in Healthcare: Transforming the Practice of Medicine. Future Healthc J 2021, 8, e188–e194.
  5. Kusunose, K. Radiomics in Echocardiography: Deep Learning and Echocardiographic Analysis. Curr Cardiol Rep 2020, 22, 89:6.
  6. Rainey, C.; O’Regan, T.; Matthew, J.; Skelton, E.; Woznitza, N.; Chu, K.-Y.; Goodman, S.; McConnell, J.; Hughes, C.; Bond, R.; Malamateniou, C.; McFadden, S. An Insight into the Current Perceptions of UK Radiographers on the Future Impact of AI on the Profession: A Cross-Sectional Survey. J Med Imaging Radiat Sci 2022, 53, 347–361.
  7. Scheetz, J.; Rothschild, P.; McGuinness, M.; Hadoux, X.; Soyer, H. P.; Janda, M.; Condon, J. J. J.; Oakden-Rayner, L.; Palmer, L. J.; Keel, S.; van Wijngaarden, P. A Survey of Clinicians on the Use of Artificial Intelligence in Ophthalmology, Dermatology, Radiology and Radiation Oncology. Sci Rep 2021, 11, 5193:10.
  8. Kueper, J. K.; Terry, A.; Zwarenstein, M.; Lizotte, D. J. Artificial Intelligence and Primary Care Research: A Scoping Review. Ann Fam Med 2020, 18, 250–258.
  9. Sides, T.; Farrell, T.; Kbaier, D. Understanding the Acceptance of Artificial Intelligence in Primary Care. In Communications in Computer and Information Science, Proceedings of HCII 2023, Copenhagen, Denmark, 23-28 July 2023; Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2023; pp. 512–518.
  10. NHS. NHS Long Term Plan. Available online: https://www.longtermplan.nhs.uk (accessed on 22 May 2022).
  11. UK Government. National AI Strategy - AI Action Plan. Available online: https://www.gov.uk/government/publications/national-ai-strategy-ai-action-plan/national-ai-strategy-ai-action-plan (accessed on 16 December 2022).
  12. Kumar, P.; Chauhan, S.; Awasthi, L. K. Artificial Intelligence in Healthcare: Review, Ethics, Trust Challenges & Future Research Directions. Eng Appl Artif Intell 2023, 120, 105894:20.
  13. NHS England. Structure of the NHS. Available online: https://www.england.nhs.uk/long-read/structure-of-the-nhs/ (accessed on 17 October 2023).
  14. Morrison, K. Artificial Intelligence and the NHS: A Qualitative Exploration of the Factors Influencing Adoption. Future Healthc J 2021, 8, e648–e654.
  15. Panch, T.; Mattie, H.; Celi, L. A. The “Inconvenient Truth” about AI in Healthcare. npj Digit. Med. 2019, 2, 1–3.
  16. Bagenal, J.; Naylor, A. Harnessing the Value of NHS Patient Data. The Lancet 2018, 392, 2420–2422.
  17. UK Government. National Data Strategy. Available online: https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy (accessed on 8 November 2023).
  18. British Medical Association. How the NHS Works. Available online: https://www.bma.org.uk/advice-and-support/international-doctors/life-and-work-in-the-uk/toolkit-for-doctors-new-to-the-uk/how-the-nhs-works (accessed on 3 February 2023).
  19. NHS Digital. General Practice Workforce, 31 May 2023. Available online: https://digital.nhs.uk/data-and-information/publications/statistical/general-and-personal-medical-services/31-may-2023 (accessed on 24 October 2023).
  20. Asthana, S.; Jones, R.; Sheaff, R. Why Does the NHS Struggle to Adopt eHealth Innovations? A Review of Macro, Meso and Micro Factors. BMC Health Serv Res 2019, 19, 984:7.
  21. NHS England. NHS AI Lab Roadmap. Available online: https://transform.england.nhs.uk/ai-lab/nhs-ai-lab-roadmap/ (accessed on 28 October 2022).
  22. Liyanage, H.; Liaw, S.-T.; Jonnagaddala, J.; Schreiber, R.; Kuziemsky, C.; Terry, A. L.; de Lusignan, S. Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges. Yearb Med Inform 2019, 28, 41–46.
  23. Darcel, K.; Upshaw, T.; Craig-Neil, A.; Macklin, J.; Gray, C. S.; Chan, T. C. Y.; Gibson, J.; Pinto, A. D. Implementing Artificial Intelligence in Canadian Primary Care: Barriers and Strategies Identified through a National Deliberative Dialogue. PLoS One 2023, 18, e0281733.
  24. Buck, C.; Doctor, E.; Hennrich, J.; Jöhnk, J.; Eymann, T. General Practitioners’ Attitudes Toward Artificial Intelligence–Enabled Systems: Interview Study. J Med Internet Res 2022, 24, e28916:18.
  25. Pedro, A. R.; Dias, M. B.; Laranjo, L.; Cunha, A. S.; Cordeiro, J. V. Artificial Intelligence in Medicine: A Comprehensive Survey of Medical Doctor’s Perspectives in Portugal. PLoS ONE 2023, 18, e0290613.
  26. Catalina, Q. M.; Fuster-Casanovas, A.; Vidal-Alaball, J.; Escalé-Besa, A.; Marin-Gomez, F. X.; Femenia, J.; Solé-Casals, J. Knowledge and Perception of Primary Care Healthcare Professionals on the Use of Artificial Intelligence as a Healthcare Tool. Digit Health 2023, 9, 20552076231180511:11.
  27. Martinho, A.; Kroesen, M.; Chorus, C. A Healthy Debate: Exploring the Views of Medical Doctors on the Ethics of Artificial Intelligence. Artif Intell Med 2021, 121, 102190:10.
  28. Kolbjørnsrud, V.; Amico, R.; Thomas, R. J. Partnering with AI: How Organizations Can Win over Skeptical Managers. Strategy & Leadership 2017, 45, 37–43.
  29. Ferreira, H.; Ruivo, P.; Reis, C. How Do Data Scientists and Managers Influence Machine Learning Value Creation? Procedia Comput Sci 2021, 181, 757–764.
  30. European Union. General Data Protection Regulation (GDPR). Available online: https://gdpr-info.eu/ (accessed on 31 October 2023).
  31. UK Government. The Privacy and Electronic Communications (EC Directive) Regulations 2003. Available online: https://www.legislation.gov.uk/uksi/2003/2426/contents/made (accessed on 31 October 2023).
  32. UK Government. Public Records Act 1958. Available online: https://www.legislation.gov.uk/ukpga/Eliz2/6-7/51 (accessed on 31 October 2023).
  33. UK Government. National Health Service Act 2006. Available online: https://www.legislation.gov.uk/ukpga/2006/41/contents (accessed on 31 October 2023).
  34. UK Government. Health and Social Care (Quality and Engagement) (Wales) Act 2020. Available online: https://www.legislation.gov.uk/asc/2020/1/contents/enacted (accessed on 27 October 2022).
  35. UK Government. Confidentiality: NHS Code of Practice. Available online: https://www.gov.uk/government/publications/confidentiality-nhs-code-of-practice (accessed on 31 October 2023).
  36. Ganapathi, S.; Duggal, S. Exploring the Experiences and Views of Doctors Working with Artificial Intelligence in English Healthcare; a Qualitative Study. PLoS One 2023, 18, e0282415:17.
  37. Waters, A. AI Technologies: Guidelines Set out Training Requirements for NHS Staff. BMJ 2022, 379, o2560.
  38. Leslie, D.; Mazumder, A.; Peppin, A.; Wolters, M. K.; Hagerty, A. Does “AI” Stand for Augmenting Inequality in the Era of Covid-19 Healthcare? BMJ 2021, 372, n304:5.
  39. UK Government. Data Ethics Framework. Available online: https://www.gov.uk/government/publications/data-ethics-framework (accessed on 27 October 2023).
  40. von Eschenbach, W. J. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 2021, 34, 1607–1622.
  41. Markus, A. F.; Kors, J. A.; Rijnbeek, P. R. The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies. J Biomed Inform 2021, 113, 103655:11.
  42. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V. I. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Med Inform Decis Mak 2020, 20, 310:9.
  43. Kerasidou, A. Artificial Intelligence and the Ongoing Need for Empathy, Compassion and Trust in Healthcare. Bull World Health Organ 2020, 98, 245–250.
  44. Coulter, A.; Oldham, J. Person-Centred Care: What Is It and How Do We Get There? Future Hosp J 2016, 3, 114–116.
  45. Montemayor, C.; Halpern, J.; Fairweather, A. In Principle Obstacles for Empathic AI: Why We Can’t Replace Human Empathy in Healthcare. AI & Soc 2022, 37, 1353–1359.
  46. Yang, R.; Wibowo, S. User Trust in Artificial Intelligence: A Comprehensive Conceptual Framework. Electron Markets 2022, 32, 2053–2077.
  47. Hashim, M. J. Patient-Centered Communication: Basic Skills. Am Fam Physician 2017, 95, 29–34.
  48. Alam, L.; Mueller, S. Examining the Effect of Explanation on Satisfaction and Trust in AI Diagnostic Systems. BMC Med Inform Decis Mak 2021, 21.
  49. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis Sci 2008, 39, 273–315.
  50. Venkatesh, V.; Thong, J. Y. L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q 2012, 36, 157–178.
  51. Abdekhoda, M.; Dehnad, A.; Zarei, J. Determinant Factors in Applying Electronic Medical Records in Healthcare. East Mediterr Health J. 2018, 25, 24–33.
  52. Lewis, J. R. Comparison of Four TAM Item Formats: Effect of Response Option Labels and Order. J. Usability Stud 2019, 14, 224–236.
  53. Mishra, P.; Pandey, C. M.; Singh, U.; Keshri, A.; Sabaretnam, M. Selection of Appropriate Statistical Methods for Data Analysis. Ann Card Anaesth 2019, 22, 297–301.
  54. Braun, V.; Clarke, V. Thematic Analysis: A Practical Guide, 1st ed.; Sage Publications Ltd: London, England, 2021; ISBN 978-1-4739-5323-9.
  55. Lebcir, R.; Hill, T.; Atun, R.; Cubric, M. Stakeholders’ Views on the Organisational Factors Affecting Application of Artificial Intelligence in Healthcare: A Scoping Review Protocol. BMJ Open 2021, 11, e044074:6.
  56. NHS. Digital Technology Assessment Criteria (DTAC). Available online: https://transform.england.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/ (accessed on 2 November 2023).
  57. Musbahi, O.; Syed, L.; Le Feuvre, P.; Cobb, J.; Jones, G. Public Patient Views of Artificial Intelligence in Healthcare: A Nominal Group Technique Study. Digit Health 2021, 7, 20552076211063682.
  58. NHS Digital. API Catalogue. Available online: https://digital.nhs.uk/developer/api-catalogue (accessed on 3 November 2023).
  59. Nadeem, F. Evaluating and Ranking Cloud IaaS, PaaS and SaaS Models Based on Functional and Non-Functional Key Performance Indicators. IEEE Access 2022, 10, 63245–63257.
  60. Mckee, M.; Wouters, O. J. The Challenges of Regulating Artificial Intelligence in Healthcare: Comment on “Clinical Decision Support and New Regulatory Frameworks for Medical Devices: Are We Ready for It? - A Viewpoint Paper”. Int J Health Policy Manag 2023, 12, 1–4.
  61. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
  62. Petch, J.; Di, S.; Nelson, W. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology. Can J Cardiol 2022, 38, 204–213.
  63. Davenport, T.; Kalakota, R. The Potential for Artificial Intelligence in Healthcare. Future Healthc J. 2019, 6, 94–98.
  64. Pyne, Y.; Wong, Y. M.; Fang, H.; Simpson, E. Analysis of ‘One in a Million’ Primary Care Consultation Conversations Using Natural Language Processing. BMJ Health Care Inform 2023, 30, e100659:10.
  65. Sarensen, N. L.; Bemman, B.; Jensen, M. B.; Moeslund, T. B.; Thomsen, J. L. Machine Learning in General Practice: Scoping Review of Administrative Task Support and Automation. BMC Prim Care 2023, 24, 14:14.
  66. Willis, M.; Duckworth, P.; Coulter, A.; Meyer, E. T.; Osborne, M. Qualitative and Quantitative Approach to Assess the Potential for Automating Administrative Tasks in General Practice. BMJ Open 2020, 10, e032412:9.
  67. Bertossi, L.; Geerts, F. Data Quality and Explainable AI. ACM J Data Inf Qual 2020, 12, 11:1–11:9.
  68. Bunn, J. Working in Contexts for Which Transparency Is Important: A Recordkeeping View of Explainable Artificial Intelligence (XAI). Records Management Journal 2020, 30, 143–153.
  69. Ohana, J. J.; Ohana, S.; Benhamou, E.; Saltiel, D.; Guez, B. Explainable AI (XAI) Models Applied to the Multi-Agent Environment of Financial Markets. In Explainable and Transparent AI and Multi-Agent Systems; Calvaresi, D., Najjar, A., Winikoff, M., Främling, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 189–207; ISBN 978-3-030-82017-6.
  70. Zhang, C. (Abigail); Cho, S.; Vasarhelyi, M. Explainable Artificial Intelligence (XAI) in Auditing. International Journal of Accounting Information Systems 2022, 46, 100572:22.
  71. Mahoney, C. J.; Zhang, J.; Huber-Fliflet, N.; Gronvall, P.; Zhao, H. A Framework for Explainable Text Classification in Legal Document Review. In 2019 IEEE International Conference on Big Data (Big Data); IEEE: Los Angeles, CA, USA, 2019.
  72. Hofeditz, L.; Clausen, S.; Rieß, A.; Mirbabaie, M.; Stieglitz, S. Applying XAI to an AI-Based System for Candidate Management to Mitigate Bias and Discrimination in Hiring. Electron Markets 2022, 32, 2207–2233.
Figure 1. Proposed conceptual framework combining Technology Acceptance Model 3 (TAM3), Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) and trust characteristics.
Figure 2. Perceived use of AI in general and in PC for stakeholder levels.
Figure 3. Barriers to acceptance identified for each stakeholder level.
Figure 4. Assessing the influence of different stakeholder levels on the introduction of AI in PC.
Figure 6. Categories identified by participants for the introduction of AI in PC.
Figure 7. XAI explanation techniques.
Figure 8. Framework for the acceptance of AI and XAI in PC.
Table 1. Trust characteristics perceived to be most important.
Trust characteristic   Macro   Meso   Micro   Population
Fairness                  10     15      16           37
Accountability             4      7       7           22
Transparency               1      2       2            9
Ethics                    16     24      27           63
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.