1. Introduction
In recent years, Artificial Intelligence (AI) deployment in the healthcare sector has witnessed a substantial surge, albeit with a predominant emphasis on secondary care [1,2,3,4,5,6,7]. Research in primary care (PC) is in the early stages internationally [8] and has been largely disregarded in the United Kingdom (UK) [9]. This limited acceptance of AI technology contradicts the Long Term Plan of the National Health Service (NHS), which puts PC digitalisation as a top priority [10]. Notably, the UK government has articulated the potential of AI to enhance the quality and quantity of healthcare services across the NHS [11]. There is, however, considerable unexplored potential for AI in PC settings in the UK, within diagnostics and systems to improve efficiency [12]. Given the structural and organisational differences between secondary care and PC, it is crucial to understand the factors that may influence AI technology acceptance in PC [13,14].
Panch et al. describe the NHS system as fragmented, attributing this fragmentation to the vast amounts of data scattered across numerous locations, which acts as a key barrier to AI integration [15]. This issue was also highlighted by the UK government in 2018, when the House of Lords Select Committee released a report investigating NHS data and its ownership, ultimately leading to a new national framework designed to protect all UK data, the National Data Strategy [16,17]. Through the National Data Strategy, the government aims to improve the quality of data, enabling its efficient use to unlock its value and potential for growth [17]. It is possible that the structure of PC, a sector of private businesses operating under one main contract with the NHS, may negatively affect the quality of the available data and, in turn, influence the acceptance of AI. Currently, the NHS consists of four distinct healthcare systems run by the four governments of the UK [18]. According to NHS Digital, there were 6,376 PC practices in England in 2022 [19], each operating independently. The management of each practice decides on the Electronic Health Record (EHR) system used, the additional software systems linked to the EHR, and the number and type of staff required, making integration and resource sharing across PC difficult.
Asthana et al. describe the stakeholder levels of the NHS and PC as the macro, meso and micro levels [20]. The UK government has established bodies dedicated to governing, assisting with implementation, and conducting research on AI within healthcare, such as the AI Strategy and the NHS AI Lab [11,21]. These bodies serve as critical players at the macro level, which primarily comprises NHS policymakers seeking to optimise technology utilisation [11]. Meso-level stakeholders include PC managers, technology providers, and secondary care health boards; so far, research has predominantly focused on the perspectives of technology providers [22]. The micro level, on the other hand, consists of individual stakeholders, so perspectives may vary, reflecting differing expectations and opinions regarding AI. Micro-level stakeholders comprise all users of PC, such as doctors, nurses, administrators, and patients. Stakeholders within PC have access only to the data required for their specific job role. Consequently, the complex organisational structure of the NHS and its multiple levels act as significant barriers to integrating technology into the healthcare system [20,23]. The following section explores the factors highlighted in the literature as barriers to AI acceptance.
1.1. Barriers to AI Acceptance within PC
When it comes to the acceptance of AI within PC, numerous studies have examined the perspectives of doctors and micro-level stakeholders [24,25,26,27]. However, meso-level stakeholders have been disregarded, while macro-level stakeholders are generally in favour of the integration of AI into PC [11,21]. Managers within PC are the key decision makers when new technologies are introduced. They are responsible for the general running of the business and for the dissemination of information from the macro level to all micro-level stakeholders. A major part of a manager’s role involves supplying information to the macro level through audits and record keeping, enabling payment for completed work. A study by Kolbjørnsrud et al. investigated managers’ perceptions of AI across various sectors of society [28]. The findings revealed that macro-level managers would accept and integrate AI into their job roles, whereas meso-level managers expressed concerns that AI would replace their job functions. Although this study covered areas outside healthcare, the perceptions of managers were similar to the findings of our online survey. Another study, by Ferreira, Ruivo and Reis, emphasised the significance of understanding the perspectives of meso-level stakeholders, even though their viewpoints are not always presented in the literature [29]. They specifically examined how machine learning could bring value to the data held by businesses. The study found that data scientists and managers have differing viewpoints on the value of data; therefore, explaining how machine learning could create value may encourage acceptance and trust.
Morrison et al. conducted a series of 12 interviews to gather insights from macro-level stakeholders regarding the barriers to AI adoption within the NHS [14]. Regulatory constraints, inadequate training, cost concerns, and unsatisfactory IT infrastructure were some of the obstacles to AI adoption identified by the study, as well as a fundamental lack of understanding of AI and its associated terminology. To address these issues, it was suggested that standardised terminology should be used by the NHS to describe AI and that training should be provided for all users. Regulatory compliance within the NHS is complex, necessitating adherence to general regulations such as the General Data Protection Regulation (GDPR), the Privacy and Electronic Communications (EC Directive) Regulations 2003, and the Public Records Act 1958 [30,31,32]. In addition, specific NHS regulations must also be followed, such as the NHS Act 2006, the Health and Social Care Act 2012, and Confidentiality: NHS Code of Practice 2003 [33,34,35]. Regulatory standards were also recommended by Morrison et al. as a way of ensuring standardisation and the effective deployment of systems where they would be most beneficial and can make the most impact [14]. In a separate study by Ganapathi and Duggal, doctors working with AI systems identified regulations and infrastructure as key barriers to AI implementation within the NHS [36]. The NHS AI Lab is already addressing these areas in collaboration with technology companies, academics, and various departments within the UK government [21]. By facilitating a platform for collaboration, the NHS AI Lab enables AI developers to engage in mutually beneficial cooperation, exchange best practices and stay abreast of pertinent guidance and regulations essential for the seamless integration of AI systems within the NHS. Moreover, the NHS AI Lab emphasises the necessity for tailored AI training modules designed to meet the diverse requirements of stakeholders, delineating strategies for the planning, development, and delivery of targeted AI training packages [37].
In a study by Darcel et al., the regulation of AI was discussed as a factor affecting stakeholders’ trust in AI, along with implementation requirements, possible bias or inequity, and acceptance by stakeholders [23]. The study found that implementation requirements could be a major problem area, as each individual business may have differing infrastructure or data requirements. The inconsistency of data across PC sites could also create inequity and increase bias, as highlighted by Leslie et al. [38], affecting the acceptance of AI by stakeholders.
The Data Ethics Framework sets out guidance on the transparency, fairness, and accountability of AI, to enable stakeholders to assess the ethical considerations of systems while ensuring responsible and appropriate use of patient data [39]. Within this framework, the concept of explainability is discussed as a means of enhancing the transparency of AI algorithms. Von Eschenbach posits a direct link between the opaqueness of black-box AI models and the lack of trust in AI technologies, emphasising that eXplainable AI (XAI) can increase the transparency of AI systems [40]. Markus and colleagues further expound on XAI as a mechanism for developing trustworthy AI for the healthcare sector [41]. Amann et al. explored explainability within AI and concluded that fostering explainability ensures that patients remain central to healthcare delivery [42].
The current patient care process adopted by healthcare professionals is heavily focused on person-centered care and communication. Person-centered care considers all factors associated with a patient, enabling individualised decisions on the required care. Coulter and Oldham emphasise the importance of understanding an individual’s characteristics when treating a patient, echoing Hippocrates’ wisdom that “it is more important to know what sort of person has a disease than to know what sort of disease a person has”. The key characteristics associated with patient-centered care are empathy, compassion, and trust [43]. When assessing the needs of a patient, the conversation with the patient forms the foundation of person-centered care and fosters empathy [44]. Empathy, as categorised by Montemayor et al., comprises emotional, cognitive, and motivational components [45]. Emotional and motivational empathy stem from experiencing emotions that motivate us to show concern, while cognitive empathy involves detecting or recognising someone’s emotional state, a quality that could potentially be introduced to AI. In person-centered care, all three types of empathy are employed, providing a holistic approach. However, person-centered care, trust and empathy have been cited in the literature as potential barriers to the acceptance of AI in PC [46]. The pursuit of efficiency in healthcare through the employment of AI has, in some cases, shifted the focus of the NHS away from empathy and patient care, potentially resulting in their neglect [43]. Patient-centered care relies on the ability to explain the diagnosis, give justification for any advice, and offer alternative treatments if necessary [47]. Alam and Mueller studied XAI explanations in the context of patient-centered care, comparing local and global explanations given at different time points within the scenarios. They concluded that an XAI system would need to provide both local and global explanations at varying time points, based on the changing information provided by the patient. The format of the explanation was also important for understanding and trusting the system. The findings mirrored the adaptive process used by doctors employing the patient-centered care method [48].
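To make the local/global distinction concrete, the sketch below illustrates both explanation types using SHAP values on a toy model. This is a minimal sketch, assuming the Python shap library and scikit-learn; the feature names, data, and risk-score framing are entirely hypothetical illustrations and are not the system studied by Alam and Mueller. A local explanation attributes one patient’s prediction to individual features, while a global explanation aggregates those attributions across all patients.

```python
# Minimal sketch of local vs. global XAI explanations using SHAP.
# All data, labels and feature names below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # hypothetical patient features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # hypothetical risk score
feature_names = ["age", "symptom_score", "history_flag"]

model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: per-feature contributions to a single patient's prediction.
print("local :", dict(zip(feature_names, shap_values[0].round(3))))

# Global explanation: mean absolute contribution of each feature across all patients.
print("global:", dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))
```

In a PC context, the local view corresponds to explaining an individual consultation as the patient provides new information, while the global view supports the audit-style oversight described later by meso-level managers.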
1.2. Research Aims
This research aims to gain a better understanding of the different stakeholder levels and their unique requirements for promoting acceptance and trust in AI within the context of PC. The existing literature indicates that numerous factors contribute to the lack of trust and acceptance of AI within PC. By gathering more perspectives from stakeholders across all organisational levels of PC, this research explores whether there are differing requirements that could influence acceptance and trust of AI. The research questions addressed within this study to advance understanding of stakeholder-level requirements are:
What are the social barriers and challenges hindering the trust and acceptance of AI within PC?
What are the specific requirements and expectations of different stakeholder levels within the PC domain for AI, considering their computing needs?
How can the specific requirements be addressed using explainable AI (XAI) techniques at each stakeholder level?
Having established the significance of AI acceptance in PC and the complex stakeholder landscape within the NHS, our study aimed to bridge this research gap through a comprehensive online survey and in-depth interviews, as outlined in the following methods section.
2. Methods
To address this research gap, we conducted an online survey to capture the perspectives of stakeholders at all levels of PC [9]. The survey questions were formulated based on a conceptual framework (as shown in Figure 1) which integrated two prominent models: the Technology Acceptance Model 3 (TAM3) [49] and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [50]. The purpose of combining these models was to gain a more comprehensive and deeper understanding of how stakeholders perceive AI, both individually and across organisational levels. TAM3 was developed to help managers understand the factors influencing technology acceptance and includes characteristics such as perceptions of external control and computer anxiety. UTAUT2 introduces characteristics such as ‘social influence’ and ‘facilitating conditions’, offering insights into the acceptance of technology from an individual perspective. Recognising the critical role of trust in AI acceptance, we also incorporated the trust characteristics of “fairness”, “accountability”, “transparency” and “ethical considerations” alongside the TAM3 and UTAUT2 constructs, as trust emerged as a key factor affecting the acceptance of AI in the existing literature.
Using the conceptual framework, a series of hypotheses was defined, which framed the online survey questions into seven key question areas, including levels of acceptance, degrees of influence, barriers, and benefits of AI for PC. The survey questions were created using a systematic iterative approach, drawing upon constructs and questions from previously validated surveys [51,52]. To ensure the quality and rigour of our survey, we collaborated with three AI experts to develop the questions and obtained feedback from the Open University Human Research Ethics Committee (HREC). Furthermore, we conducted a pre-test involving five experts in the field of PC before securing HREC approval. The survey comprised a mix of question types, including closed-ended, open-ended, Likert scale, rating, and multiple-choice questions, and was thoroughly piloted and tested with PC employees and colleagues prior to its release.
The survey was created and hosted on JISC, and a link was posted on Prolific to recruit participants exclusively from PC; a pre-screener survey ensured that only PC participants were engaged. The participant information and consent forms were placed at the start of the survey to enable participants to make an informed decision about whether to continue. A few participants were recruited through snowballing, using email invitations sent to known PC employees; the invitation asked recipients to forward the email to colleagues in PC who might be interested in taking part. The survey link directed snowballing participants to the JISC website, where the survey questions, participant information, and consent form were accessible at the start of the survey.
This study used a mixed-methods, between-subjects approach, which enabled levels of influence and acceptance to be understood across the organisational levels. Understanding which stakeholder level holds the influence over AI acceptance enables us to target the identified barriers specific to that stakeholder group. After cleaning and coding the survey data, we conducted statistical analysis on the quantitative data using descriptive and inferential statistical techniques [53], while thematic analysis was used to identify themes within the qualitative data, particularly from the open-ended questions [54]. The findings stemming from this analysis are discussed in the next section.
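To illustrate the kind of quantitative analysis described above, the sketch below shows one way such descriptive and inferential steps could be run in Python. It is a minimal sketch, assuming a cleaned CSV export with one row per respondent; the file name and column names are hypothetical illustrations, not taken from our actual dataset.

```python
# Minimal sketch of descriptive and inferential analysis of coded survey data.
# The file path and column names below are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("survey_responses_clean.csv")  # hypothetical cleaned export

# Descriptive statistics: response proportions per stakeholder level.
counts = pd.crosstab(df["stakeholder_level"], df["uses_ai_daily"])
print(counts.div(counts.sum(axis=1), axis=0).round(2))

# Inferential statistics: chi-square test of independence between
# stakeholder level and reported daily AI use.
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```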
3. Results
A total of 131 participants took part in the online survey, comprising 31 macro-level, 48 meso-level and 52 micro-level stakeholders. The survey employed Likert scale and open-ended questions to gather feedback from the participants. Descriptive analysis of the data revealed that 74% of participants were female and 23% were male. The data suggests that AI is being used within daily life, with 49% of respondents reporting that they use AI (as detailed in Figure 2), although a small proportion of respondents (5%) expressed uncertainty about their AI usage. The top three areas where respondents reported using AI in their daily lives were banking (18%), voice assistants (18%) and maps (15%). When participants were specifically asked about the use of AI in PC, only 22% of respondents believed AI was being used (as shown in Figure 2). A closer examination of where stakeholders believed AI was being used in PC showed that macro- and meso-level stakeholders believed it was used for administrative tasks, while micro-level stakeholders believed it was used for triaging patients. This finding reflects a disparity between the use of AI in general life and its perceived application within PC. Furthermore, when participants were asked about their understanding of AI, it was evident that despite using AI in their daily lives, not all participants fully grasped the concept: 35% of macro-level, 21% of meso-level, and 23% of micro-level participants reported some degree of uncertainty regarding AI. When asked about the factors that would enable them to use AI in PC, macro-level stakeholders highlighted the need for infrastructure and training (19%), whereas meso-level (31%) and micro-level (25%) stakeholders expressed a preference for AI that could help reduce waiting lists.
A scoping review by Lebcir et al. shed light on varying levels of influence, as well as organisational factors, across PC stakeholder levels, revealing barriers to AI acceptance [55]. Stakeholders at different levels exhibited diverse and sometimes conflicting perspectives on AI’s opportunities and limitations within PC. This was reflected in the survey results, with macro-level stakeholders identifying a lack of acceptance from both staff and patients as the most significant barrier. In contrast, meso-level stakeholders pointed to AI errors as the biggest factor affecting AI acceptance, while micro-level stakeholders believed that the lack of empathy associated with AI use was the biggest limitation (as presented in Figure 3).
The outcome of the online survey suggested that further investigation was required, especially of meso-level stakeholders’ views, as they hold the most influence over AI acceptance (as depicted in Figure 4). Meso-level stakeholders are aware of their significant role in AI integration; they know that they hold influence over its adoption, but they are also concerned about potential AI errors. One respondent, P119, expressed this concern succinctly: “I feel that AI is risky within PC because every individual is different and has a different clinical demographic. Whilst one treatment or diagnosis can be useful in one patient, the same might not apply to another. I believe the consequences of AI can be life-threatening if not used correctly”. This concern is understandable given the role of meso-level managers: they are responsible for audits and reporting to the macro level and could be held accountable for any errors made by AI. Even if AI were proven to be successful, they might still be concerned about losing their jobs if an AI could complete their tasks more efficiently. Another respondent, P129, voiced a similar sentiment: “there is a risk that AI could be used to replace human healthcare workers, rather than augment their skills, which could have negative implications for the quality of patient care”. These apprehensions were echoed by several other participants: P74 mentioned “the reduction of job roles for current staff”, P89 warned that the “workforce could lose their jobs”, P106 noted “staff could be lost”, and P108 observed that “people would not want AI to take their job”.
Macro-level stakeholders were more concerned about the acceptance of AI, as reflected in the comments of several participants. For instance, P66 expressed that “People might be apprehensive about using it”, P83 described “Peoples reluctance to accept change”, and P103 mentioned “Barriers to change” within PC. This focus on acceptance makes sense, as macro-level stakeholders are promoting the use of AI as a benefit to the NHS; if meso-level stakeholders do not embrace AI and advocate for its adoption among micro-level stakeholders, the technology will fail. A key factor affecting the acceptance of AI is the lack of understanding of AI, which was described as a barrier by participants across all stakeholder levels. This lack of understanding was also a key finding in the study by Morrison et al. [14].
Micro-level stakeholders identified empathy as a key limitation associated with AI. They are the stakeholders with the most contact with patients, and a major part of their interactions involves communication: asking the right questions and, more importantly, listening to the patient’s needs and concerns. One participant, P34, articulated this concern clearly, emphasising the importance of ethical decision-making and the complexity of human decision processes. They questioned whether AI could be trusted to make the best decisions, as human decisions rely on both emotion and logic, informed by compassion, and expressed concerns that AI might diminish the human connection in patient care: “Decisions made need to be ethical and this would be unknown from an AI. Human decision making is complex - would we trust an AI to make the best decision. Human decisions need to draw upon emotion as well as logic and need to be informed by compassion. Connection to other humans is key - would an AI reduce that and so miss important things about what it means to be human?” A key principle of patient-centered care is the ability to listen to the patient with a holistic approach. Micro-level stakeholders are concerned that AI would not be able to replicate this approach, meaning patient care would be undermined.
Staff training was identified as a key factor by all stakeholder levels, linked with the fear of job loss through being unable to use AI effectively. For example, P76 noted “people being unable to use and losing jobs” (macro), P90 pointed to “staff who are not IT literate” (meso), and P32 remarked “I think a general mistrust of “robots” among older healthcare workers would be a factor, as they are unlikely to want to do additional training and place their faith in a machine” (micro). This shared apprehension about the need for staff training and its implications for job security underscores the multifaceted challenges associated with AI integration in healthcare.
When analysing the perceived influence of stakeholders over the introduction of AI, Figure 4 offers a clear representation of the varying perspectives. It shows that meso-level stakeholders believed they held the influence (42%), while only 13% of macro-level stakeholders believed they had influence. Moreover, 44% of meso-level stakeholders believed that macro-level stakeholders would want them to use AI, while 44% of micro-level stakeholders expressed their willingness to use AI if others were using it. The results identify meso-level stakeholders as the key to acceptance of AI within PC; engaging with them is therefore crucial for the successful implementation of AI in this context. A further study is needed to validate these findings and to examine in more detail the factors that affect stakeholder perceptions of influence and trust in AI. Such a follow-up study would delve into the intricacies of stakeholder perspectives to provide deeper insight into the dynamics at play in AI acceptance and implementation within PC.
Participants were asked for their perceptions of what factors would enable them to employ AI within their specific job roles. This revealed some differences between the stakeholder levels, with macro-level stakeholders suggesting that the important factors for them would be the ability to understand the decisions made and adequate training on the systems to be used. The need for clarity was emphasised by several participants, such as P71, who declared “It must be clear at all times”, while P96 wanted “A good understanding of how it works” and P100 suggested that AI decisions should be “Easy to understand why they have made those decisions”. This is unsurprising, as these findings are in line with the 35% of macro-level stakeholders who reported uncertainty in their understanding of AI.
Meso-level stakeholders identified specific factors critical for the trust and acceptance of AI. One key requirement was the ability of AI to demonstrate tangible improvements in order to be trusted and accepted by users. This perspective was explicitly communicated by P125, who called for proof of concept: “for AI to be accepted in my role I’d need to see evidence of how it could operate.”, with the general feeling expressed succinctly by P73, who remarked that they would need “lots and lots of proof that it works.” Furthermore, P85 suggested that “trust, accuracy and ease of use for staff” would be required for NHS stakeholders, while P122 advocated that “acceptance by patients” would be needed. These insights shed light on the specific expectations of meso-level stakeholders regarding AI adoption.
Micro-level stakeholders, like their counterparts at other levels, also articulated several key requirements for enabling AI integration. These included training, trust, and acceptance, as well as the ability to implement AI effectively within their workplace. The ability to implement AI covers factors such as the associated costs, the adequacy of the current IT infrastructure, the time required to implement the system, and the need for appropriate staff training. Several micro-level participants identified implementation issues as key requirements for enabling AI integration. IT infrastructure was a requirement for P58, who stated that “Improved investment in IT in the NHS currently is very lacking”, while P26 stressed that AI would need to save time in existing processes: “It would need to save me time, not add to my workload. Current systems for example, people are doing things manually or on paper, then having to input data on to a computer system as well, which is time consuming and takes time away from patients.” Another participant, P32, also mentioned efficiency as a factor, especially where costs are concerned, remarking “Efficiency, it would need to be proven to be efficient and cost effective before being rolled out”, while P101 suggested “Time to implement (training)”.
Figure 5. Categories identified as enablers of acceptance of AI in PC.
When looking at the categories identified by participants as areas where AI could assist in PC, all stakeholder levels identified administrative tasks (as illustrated in Figure 6). This is not surprising, as administrative tasks are usually the mundane tasks that need to be completed and managed efficiently. Several participants from different stakeholder levels pointed to potential benefits in this regard: P76 wanted AI for “menial tasks and to be more productive” (macro), P105 discussed the “automation of routine tasks and processing of data” (meso), and P40 envisioned AI assisting “with the admin tasks involved in my job such as writing letters” (micro). Tasks identified by macro-level stakeholders as areas where AI could be beneficial included “record keeping” (P93), “ordering” (P38) and “transcribing patient notes / letters” (P77). Meso-level stakeholders suggested other potential areas for AI support, such as “new patient documentation / records – summarising” (P81), “data reporting. I regularly run activity reports for numbers of patients seen. Also, for the backing data for my invoices” (P42), “AI could help address a complaint from a patient” (P107) and “invoicing” (P116). Micro-level stakeholders suggested tasks such as “creating timesheets and assigning sessions” (P52), “to book annual leave” (P63) and “patient info leaflets, referral letters” (P64) as areas where AI could play a constructive role in streamlining processes.
When participants were asked to identify which of the trust characteristics they perceived to be most important, the results interestingly showed no difference across the stakeholder levels. Ethical considerations were regarded as the most important trust characteristic by all participants, followed by fairness and then accountability, with transparency perceived as the least important factor across all stakeholder levels (as displayed in Table 1). This alignment in the perceived order of importance of the trust characteristics suggests a consensus on core values and priorities across the stakeholder spectrum.