
Anxiety among Medical Students Regarding Generative Artificial Intelligence Models: A Pilot Descriptive Study


A peer-reviewed article of this preprint also exists.

Submitted: 16 August 2024
Posted: 16 August 2024

Abstract
Despite the potential benefits of generative Artificial Intelligence (genAI), concerns about its psychological impact on medical students, especially with regard to job displacement, are apparent. This pilot study, conducted in Jordan during July–August 2024, aimed to examine the specific fears, anxieties, mistrust, and ethical concerns medical students could harbor towards genAI. Using a cross-sectional survey design, data were collected from 164 medical students studying in Jordan across various academic years, employing a structured self-administered questionnaire with an internally consistent FAME scale, representing Fear, Anxiety, Mistrust, and Ethics and comprising 12 items, with three items for each construct. The results indicated variable levels of anxiety towards genAI among the participating medical students: 34.1% reported no anxiety about genAI’s role in their future careers (n = 56), while 41.5% were slightly anxious (n = 68), 22.0% somewhat anxious (n = 36), and 2.4% extremely anxious (n = 4). Among the FAME constructs, Mistrust was the most agreed upon (mean: 12.35±2.78), followed by the Ethics construct (mean: 10.86±2.90), Fear (mean: 9.49±3.53), and Anxiety (mean: 8.91±3.68). Sex, academic level, and Grade Point Average (GPA) did not significantly affect the students’ perceptions of genAI. However, there was a notable direct association between the students’ general anxiety about genAI and elevated scores on the Fear, Anxiety, and Ethics constructs of the FAME scale. Prior exposure to and use of genAI did not significantly modify the FAME scale scores. These findings highlight the critical need for refined educational strategies to address the integration of genAI in medical training. The results demonstrated pervasive anxiety, fear, mistrust, and ethical concerns among medical students regarding the deployment of genAI in healthcare, indicating the necessity for curriculum modifications that focus specifically on these areas. Interventions should be tailored to increase genAI familiarity and competency, which would alleviate apprehension and equip future physicians to engage with this inevitable technology effectively. The study also highlighted the importance of incorporating ethical discussions into medical courses to address mistrust and concerns about the human-centered aspects of genAI. In conclusion, the study calls for a proactive evolution of medical education to prepare students for AI-driven healthcare practices in the near future, ensuring that physicians are well prepared, confident, and ethically informed in their professional interactions with genAI technologies.
Keywords: 
Subject: Social Sciences - Education

1. Introduction

The widespread availability of generative Artificial Intelligence (genAI) models (e.g., ChatGPT, Gemini, Microsoft Copilot, and Llama) is set to transform various occupational sectors, including higher education, and especially medical education and healthcare practice [1,2,3,4,5,6]. For example, genAI can help automate routine administrative and educational tasks such as scheduling, aid in responding to student inquiries, and assist in delivering basic educational content in an engaging and personalized style [7,8]. Consequently, genAI models can help educators and administrators focus on more complex, value-added activities in higher education, such as personalized teaching and research [9,10].
Additionally, genAI can be extremely helpful in medical education by offering sophisticated, novel simulations and modeling, especially for practical training, which are invaluable in healthcare education [2,3,11,12,13]. Moreover, genAI models can facilitate educational initiatives without substantial additional resources, which helps to promote educational equity at the global level [14].
Several potential benefits of genAI models in healthcare education, research, and practice have been well recognized and characterized [3,11,15]; however, there are growing concerns about the negative implications of these AI models for the workforce, particularly regarding job displacement and the changing nature of health professional roles [16,17,18]. Consequently, these concerns can lead to resistance to AI implementation among health professionals, which can hinder the full realization of genAI advantages in healthcare [19].
Medical students are considered a key group in the future healthcare workforce. Therefore, the ability of medical students to adapt their career paths to emerging technologies such as genAI is essential for them to thrive amid evolving healthcare practice dynamics, which will inevitably be shaped by AI integration [20,21].
The rapid adoption of genAI models in the healthcare sector has raised concerns about job security, especially for medical students who are set to enter the healthcare workforce [22,23]. Thus, evaluating medical students’ anxiety and fear regarding potential job displacement by genAI appears to be a timely topic for research investigation. Understanding the concerns of medical students towards genAI models can offer insights into the effects of this novel technology on health profession identity and future competitiveness, as well as the ethical dilemmas that may arise as a result of genAI models’ integration in healthcare [24,25,26].
The increasing popularity of genAI models’ use in healthcare has heightened job displacement concerns, with evidence suggesting gaps in practical AI experience and knowledge among currently practicing health professionals [27,28]. An AI-driven shift in healthcare practice could redirect the focus from traditional patient-centered care to technology-centered methods, raising questions about the need to redefine the future roles of health professionals [29]. Subsequently, AI-driven changes in healthcare could lead to depersonalization of care, a major challenge to the core healthcare values of empathy and human judgment, which emphasizes the critical need to explore these evolving dynamics through a comprehensive and evidence-based approach [30,31].
On a related note, ethical considerations are central to medical students’ perceptions of genAI models’ utility in healthcare practice [32,33]. Concerns about patient privacy, data security, and potential AI-induced healthcare disparities illustrate the complex ethical challenges that future physicians will face [34]. Additionally, there are significant questions regarding whether current medical education practices adequately prepare medical students for an increasingly AI-dominated healthcare practice [35].
Based on the aforementioned points, the integration of genAI models into healthcare is expected to provoke anxiety among medical students, with subsequent fear of job displacement, loss of professional identity, and ethical dilemmas [36,37,38]. Therefore, investigating medical students’ perspectives on genAI models and their concerns about this emerging technology is important. This area of investigation is crucial from a medical education perspective in order to prepare future physicians for an AI-driven era in healthcare practice and to equip them with the necessary tools to improve patient care and healthcare outcomes [1,39,40,41].
Therefore, this study aimed to assess medical students’ fears, anxiety, and concerns regarding genAI models’ roles in healthcare. The study objectives included assessing the key factors driving this fear and anxiety, with the ultimate aim of providing preliminary evidence to guide targeted AI integration interventions, policy modifications, and improvements in genAI implementation strategies in medical education. The overarching aim was to address these genAI-related concerns among medical students and to help ensure that future physicians are prepared and confident in AI-integrated healthcare settings.

2. Materials and Methods

2.1. Study Design and Ethical Permission

This pilot study was based on a cross-sectional survey involving medical students currently studying in Jordan. A convenience sampling strategy was employed to expedite participant recruitment given the timeliness of the study topic. Potential participants were recruited via social media and instant messaging applications, including Facebook, X (formerly Twitter), Instagram, LinkedIn, Messenger, and WhatsApp, all of which are popular among the target demographic, namely medical students in Jordan.
The sampling process was initiated by the authors who were medical students at the time of survey distribution (Y.A., O.A., A.A.-S., Z.A., and A.N.A.), who were encouraged to distribute the survey further among their acquaintances among medical students in Jordan (snowball sampling). The survey was distributed in Arabic via Google Forms as the questionnaire host, and no incentives were offered for participation. The survey distribution took place during July–August 2024.
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board in the Faculty of Pharmacy at the Applied Science Private University (reference number: 2024-PHA-25) which was granted on 4 July 2024.

2.2. Survey Instrument Development

The initial phase of survey development involved a literature review by the first and senior authors (M.S. and M.B.) independently using Google Scholar and the search concluded on 4 June 2024. The following search terms were used to ensure a comprehensive understanding of the current role of genAI in healthcare and medical education: “generative AI in healthcare education”; “generative AI and anxiety of health professionals”; “generative AI concerns among health professionals”; “fear of generative AI in healthcare”; “healthcare job displacement by generative AI”; “ChatGPT anxiety in healthcare”; “ChatGPT fear in healthcare”; “ChatGPT concerns in healthcare”; “healthcare job displacement by ChatGPT”; “medical job displacement by ChatGPT”; and “AI anxiety among medical students”. This was followed by identification of research records in English that were deemed relevant for the development of a tailored survey instrument for the study objectives by joint discussions among the first and senior authors (M.S. and M.B.) [3,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57].
A collaborative effort involving the first, second, and senior authors was conducted to critically analyze the retrieved literature, leading to the development of a set of survey questions for the study objectives. The first and senior authors have a combined 13 years of teaching experience in healthcare, while the second author is a recently graduated Doctor of Medicine (M.D.). This combination of broad academic expertise and recent practical experience in medical learning was intended to enhance the content validity of the survey questions, with subsequent identification and integration of four themes subjectively deemed relevant to cover the study objectives as follows: (1) Fear related to possible job displacement by the availability of efficient, robust, and beneficial genAI; (2) Anxiety regarding long-term medical career prospects in light of genAI models’ availability; (3) Mistrust related to concerns that genAI would reduce the human role in healthcare practice, leading to dehumanization in medical interactions and decision-making; and (4) Ethical dilemmas and concerns that may arise if genAI model utilization becomes routinely integrated into healthcare practice. Based on the initials of these dimensions, the scale was referred to as the “FAME” scale.
To enhance the validity of the survey, we opted to check the content of the initial draft by seeking feedback from two lecturers involved in medical education: A lecturer in Microbiology and Immunology, specializing in basic medical sciences, and a lecturer in Internal Medicine with a specialty in Endocrinology, with extensive involvement in supervising fourth-year medical students during their introductory course for clinical rotations. This process involved seeking specific feedback on the contextual relevance and comprehensiveness of the included four themes, thereby enhancing the content validity of the novel survey instrument.
Afterward, the survey instrument underwent a pilot test involving five medical students selected to represent a diverse range of perspectives within medical education (three third-year students and two fifth-year students). The requested feedback included notes on the clarity, relevance, and language of the survey items. Based on the feedback obtained, minor refinements were made to the survey items, simplifying complex language and enhancing the flow of the questionnaire.

2.3. Final Survey Used

Prior to participation, all medical students were provided with detailed information about the study objectives, inclusion criteria, and confidentiality measures to protect the anonymity of the responses obtained. Electronic informed consent was mandatory: each participant had to respond “Yes” to a consent item before the survey could be opened.
The survey started with a demographics section gathering data on age (as a scale variable); sex (male vs. female); current year of study (1st, 2nd, 3rd, 4th, 5th, or 6th year), later classified as pre-clinical (1st-3rd year) vs. clinical (4th-6th year); latest Grade Point Average (GPA), classified as unsatisfactory, satisfactory, good, very good, or excellent and later grouped as low (unsatisfactory, satisfactory, or good) vs. high (very good or excellent); and the desired future specialty (Pediatrics; General Surgery; Forensic Medicine; Orthopedics; Neurosurgery; Ophthalmology; Plastic Surgery; Internal Medicine; Psychiatry; Emergency Medicine; Obstetrics and Gynecology; Urology; Anesthesiology; Radiology; Pathology; Dermatology; or Others).
Then, the survey included the primary study measure, a single question on a 4-point Likert scale (not at all, slightly anxious, somewhat anxious, extremely anxious) assessing the level of anxiety medical students felt towards genAI technologies such as ChatGPT: “How anxious are you about genAI models like ChatGPT as a future physician?”.
Next, the participants were asked about their previous use of genAI models (ChatGPT, MS Copilot, Gemini, My AI Snapchat, others).
Finally, the main body of the survey consisted of twelve items assessed on a 5-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree). These items were designed as three questions per dimension, as follows: (1) I feel concerned that AI will reduce the number of available jobs in healthcare; (2) I am concerned that AI will exceed the need for human skills in many areas of healthcare; (3) The advancement of AI in healthcare makes me anxious about my long-term career; (4) I am worried that I will not be able to compete with AI for jobs in healthcare; (5) The thought of competing with AI for career opportunities in healthcare makes me feel anxious; (6) I am worried that AI will demand competencies beyond the current scope of medical teaching; (7) I believe that AI is missing the aspects of insight and empathy needed in medical practice; (8) I believe that AI does not take into account the personal and emotional aspects of patient care; (9) I believe that the essence of the medical profession will not be affected by AI technologies; (10) I believe that AI will lead to ethical dilemmas in healthcare; (11) I fear that AI in healthcare will compromise patient privacy and data security; and (12) I worry that AI will increase inequalities in patient care.
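To make the construct scoring concrete, the following minimal Python sketch computes per-construct FAME scores, assuming each 5-point Likert response is coded from 1 (strongly disagree) to 5 (strongly agree), so that each three-item construct ranges from 3 to 15 (an assumption consistent with the construct means reported in the Results; the function and variable names are illustrative only).

from typing import Dict, List

# Item-to-construct mapping following the order of the twelve survey items.
FAME_ITEMS: Dict[str, List[int]] = {
    "Fear": [1, 2, 3],        # items 1-3: job displacement fears
    "Anxiety": [4, 5, 6],     # items 4-6: long-term career anxiety
    "Mistrust": [7, 8, 9],    # items 7-9: dehumanization concerns
    "Ethics": [10, 11, 12],   # items 10-12: ethical dilemmas
}

def score_fame(responses: Dict[int, int]) -> Dict[str, int]:
    """Sum the three 1-5 item scores per construct (range 3-15 each)."""
    return {construct: sum(responses[i] for i in items)
            for construct, items in FAME_ITEMS.items()}

# Example: a respondent answering "agree" (4) to all items scores 12 per construct.
print(score_fame({i: 4 for i in range(1, 13)}))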

2.4. Sample Size Calculation

The required sample size was calculated using G*Power software [58,59], assuming a small to medium effect size of 0.3, with an α level of 0.050 and a target power of 95%. Based on these specifications, recruitment of 147 participants was determined to be necessary to ensure adequate statistical power for the comparison between two groups (medical students who were not anxious at all regarding AI vs. medical students who expressed anxiety at any level).
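As an illustrative cross-check, the sketch below performs comparable power calculations in Python using statsmodels rather than G*Power. The specific G*Power test family used is not reported above, so this is an assumption-laden demonstration: two plausible options are shown, and the exact required sample size depends strongly on the test chosen.

from statsmodels.stats.power import TTestIndPower, GofChisquarePower

# Option 1: two independent means with Cohen's d = 0.3 (returns n per group).
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.95)
print(f"t-test: {n_per_group:.0f} per group")  # roughly 290 per group

# Option 2: chi-square test with effect size w = 0.3 and two categories (total n).
n_total = GofChisquarePower().solve_power(effect_size=0.3, alpha=0.05,
                                          power=0.95, n_bins=2)
print(f"chi-square: {n_total:.0f} total")  # roughly 145, close to the reported 147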

2.5. Statistical and Data Analysis

Statistical analyses were conducted using IBM SPSS Statistics for Windows, Version 27.0 (Armonk, NY: IBM Corp), with statistical significance established at p < 0.050. The Kolmogorov-Smirnov test was employed to assess data normality. Cronbach’s α was calculated to evaluate the internal consistency of the survey constructs. The Intraclass Correlation Coefficient (ICC) was employed to assess the reliability of measurements under a one-way random model, given the uniform style of survey administration, to ensure that any measurement error would be random and not due to systematic differences in how measurements were taken. For effect sizes between two groups, Cohen’s d was utilized with Hedges’ correction, which adjusts for bias in the estimation of the standard deviation (SD) in small samples. To account for the non-normality of the scale variables, the effect size analysis was supplemented by point-biserial correlation coefficients computed as bivariate Pearson correlations, with the correlation coefficients (r) acting as surrogates for effect sizes. Nonparametric tests, including the Mann-Whitney U test for two independent samples and the Kruskal-Wallis test for more than two groups, were applied, given that the scale variables did not meet normality assumptions (p ≤ 0.001 using the Kolmogorov-Smirnov test). Additionally, Chi-square tests were used to explore associations between categorical variables.
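For readers replicating this pipeline outside SPSS, the following sketch illustrates the core steps (normality testing, nonparametric group comparisons, and Cronbach’s α) with scipy and numpy on synthetic data; the group sizes and scores are illustrative assumptions, not the study dataset.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_a = rng.integers(3, 16, size=56)   # e.g., construct scores, "not anxious" group
scores_b = rng.integers(3, 16, size=108)  # e.g., construct scores, "any anxiety" group

# Normality check (Kolmogorov-Smirnov against a fitted normal distribution).
ks = stats.kstest(scores_a, "norm", args=(scores_a.mean(), scores_a.std(ddof=1)))

# Two independent groups: Mann-Whitney U; more than two groups: Kruskal-Wallis.
mw = stats.mannwhitneyu(scores_a, scores_b)
kw = stats.kruskal(scores_a[:20], scores_a[20:40], scores_a[40:])

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(ks.pvalue, mw.pvalue, kw.pvalue)
print(cronbach_alpha(rng.integers(1, 6, size=(164, 3))))  # alpha for one 3-item sub-scale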
Medical specialties were categorized based on the risk of job displacement by generative AI, factoring in the extent to which each specialty relies on procedural skills, personalized interactions, and automatable tasks. This categorization was agreed upon by the first, second, and senior authors based on the following criteria. High-risk specialties, such as Radiology, Pathology, and Dermatology, involve significant use of diagnostic imaging and pattern recognition that AI could replace. Middle-risk specialties, like Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology, could see moderate impacts from AI but retain crucial human elements. Low-risk specialties, including Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery, involve complex decision-making and personalized care that are difficult to automate.
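One reproducible way to encode this categorization is as a simple lookup table; the sketch below mirrors the grouping described above (the data structure itself is an illustrative choice, not part of the original analysis).

# Specialty-to-risk lookup table mirroring the categorization criteria above.
RISK_BY_SPECIALTY = {
    **dict.fromkeys(["Radiology", "Pathology", "Dermatology"], "high"),
    **dict.fromkeys(["Internal Medicine", "Psychiatry", "Emergency Medicine",
                     "Obstetrics and Gynecology", "Urology", "Anesthesiology"],
                    "middle"),
    **dict.fromkeys(["Pediatrics", "General Surgery", "Forensic Medicine",
                     "Orthopedics", "Neurosurgery", "Ophthalmology",
                     "Plastic Surgery"], "low"),
}

print(RISK_BY_SPECIALTY["Radiology"])  # high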

3. Results

3.1. General Features of Participating Medical Students

The final study sample comprised a total of 164 medical students with a mean age of 21.1±2.3 years, with a majority of male students (n = 88, 53.7%), and students at the basic educational level (n = 113, 68.9%, Table 1).
Slightly less than three-quarters of the study participants reported prior use of at least a single genAI model, while slightly more than half of the study participants reported a cumulative GPA of very good or excellent categories (Table 1).
For the genAI models used, the most common reported model was ChatGPT (n = 105, 64.0%), followed by My AI Snapchat (n = 29, 17.7%), Gemini (n = 17, 10.4%), and Copilot (n = 13, 7.9%, Figure 1).

3.2. The Level of Anxiety towards genAI and Its Associated Determinants

Slightly over a third of the study sample reported being not anxious at all regarding the role of genAI models such as ChatGPT in their careers as future physicians (n = 56, 34.1%), with 68 students reporting being slightly anxious (41.5%), 36 students somewhat anxious (22.0%), and only four students extremely anxious (2.4%). The demographic data did not show any statistically significant differences in the participants’ level of anxiety regarding genAI (Table 2).

3.3. FAME Constructs Reliability

The Cronbach’s α values for the four FAME sub-scales were as follows: 0.874 for the Fear sub-scale, 0.880 for the Anxiety sub-scale, 0.724 for the Mistrust sub-scale, and 0.695 for the Ethics sub-scale. For the 12 items combined, the Cronbach’s α was 0.853, reflecting robust internal consistency.
In terms of the ICC, the Fear sub-scale showed high reliability, with an ICC of 0.678 for single measures and 0.863 for average measures. The Anxiety sub-scale also exhibited high reliability, with an ICC of 0.673 for single measures and 0.860 for average measures. The Mistrust sub-scale displayed moderate reliability at 0.405 for single measures, increasing to good reliability at 0.671 for average measures. Similarly, the Ethics sub-scale showed moderate reliability at 0.388 for single measures, improving to 0.656 for average measures, indicating enhanced reliability and consistency when averaged across respondents. The ICC values for the FAME sub-scales are detailed in Figure 2.
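As an illustration of how a one-way random-effects ICC can be computed, the sketch below uses the pingouin package, in which ICC1 corresponds to single measures and ICC1k to average measures under the one-way random model; the long-format synthetic data, column names, and package choice are assumptions for demonstration only, since the original analysis was performed in SPSS.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
# Long-format data: each of 164 students rates the three items of one sub-scale.
long = pd.DataFrame({
    "student": np.repeat(np.arange(164), 3),
    "item": np.tile(["item1", "item2", "item3"], 164),
    "score": rng.integers(1, 6, size=164 * 3),
})

icc = pg.intraclass_corr(data=long, targets="student", raters="item",
                         ratings="score")
# ICC1 = single measures, ICC1k = average measures (one-way random model).
print(icc.loc[icc["Type"].isin(["ICC1", "ICC1k"]), ["Type", "ICC", "CI95%"]])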

3.4. FAME Constructs Scores

For the four FAME constructs, the highest level of agreement by the participants was seen in the Mistrust construct (mean: 12.35±2.78), followed by the Ethics construct (mean: 10.86±2.90) and the Fear construct (mean: 9.49±3.53), while the lowest level of agreement was seen in the Anxiety construct (mean: 8.91±3.68, Figure 3). Statistically significant lower levels of agreement were seen among the participating medical students who were not anxious at all, compared to those who showed any level of anxiety towards genAI as future physicians, for three constructs, namely Fear, Anxiety, and Ethics (Figure 4).

3.5. Determinants of Anxiety towards genAI among the Participating Medical Students

Table 3 summarizes the determinants of Fear, Anxiety, Mistrust, and Ethics among the participating medical students towards genAI. No significant differences in Fear, Anxiety, Mistrust, or Ethics scores were found across sexes, academic levels (basic vs. clinical), or GPA categories; however, lower Anxiety scores were marginally significant (p = 0.082) among students with higher academic achievement, reflected in a higher GPA.
When analyzed by desired specialty, no significant differences emerged among students aspiring to low, middle, or high-risk specialties, indicating uniform perceptions across different desired fields. The number of genAI models used by the students also did not significantly influence Fear, Anxiety, Mistrust, or Ethics scores, suggesting a consistent perception regardless of exposure level to genAI tools.
Importantly, students who reported not being at all anxious about genAI models like ChatGPT had significantly lower Fear and Anxiety scores (p < 0.001 for both) and lower Ethics scores (p = 0.014) compared to students who expressed any level of anxiety towards genAI (Table 3). However, this statistically significant difference did not extend to the Mistrust sub-scale (p = 0.590).
In the analysis assessing the impact of anxiety towards generative AI models on the FAME constructs, significant effects were observed for the Fear, Anxiety, and Ethics constructs, as indicated by substantial effect sizes calculated using Cohen’s d and Hedges’ g. Specifically, for the Fear construct, Cohen’s d yielded a point estimate of 2.332 with a 95% confidence interval (CI) of 2.035 to 2.627, indicating a very large effect size (Pearson r = 0.411, p < 0.001). Similarly, Hedges’ correction resulted in a point estimate of 2.327 (95% CI: 2.031–2.621). For the Anxiety construct, Cohen’s d provided a point estimate of 2.038 (95% CI: 1.768–2.306), and Hedges’ correction gave a point estimate of 2.033 (95% CI: 1.764–2.300, Pearson r = 0.319, p < 0.001). For the Mistrust construct, Cohen’s d provided a point estimate of 3.831 (95% CI: 3.387–4.272), and Hedges’ correction gave a point estimate of 3.822 (95% CI: 3.379–4.263, Pearson r = 0.058, p = 0.462). Lastly, for the Ethics construct, Cohen’s d revealed a point estimate of 3.243 (95% CI: 2.858–3.625), with Hedges’ correction closely aligning at a point estimate of 3.235 (95% CI: 2.852–3.617, Pearson r = 0.205, p = 0.008).
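For transparency about how these effect sizes are defined, the sketch below computes Cohen’s d from the pooled standard deviation, applies the Hedges small-sample correction, and obtains the point-biserial correlation as a Pearson correlation between group membership and scores; the synthetic group means and sizes are illustrative assumptions, not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(7.5, 3.5, size=56)    # e.g., Fear scores, "not anxious" group
group_b = rng.normal(10.5, 3.5, size=108)  # e.g., Fear scores, "any anxiety" group

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d based on the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (b.mean() - a.mean()) / np.sqrt(pooled_var)

def hedges_g(a: np.ndarray, b: np.ndarray) -> float:
    """Hedges' correction for small-sample bias in Cohen's d."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))

# Point-biserial correlation: Pearson r between binary group membership and score.
membership = np.concatenate([np.zeros(len(group_a)), np.ones(len(group_b))])
scores = np.concatenate([group_a, group_b])
r, p = stats.pearsonr(membership, scores)

print(f"d = {cohens_d(group_a, group_b):.3f}, g = {hedges_g(group_a, group_b):.3f}, "
      f"r = {r:.3f} (p = {p:.4f})")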

4. Discussion

The analysis of fear, anxiety, mistrust, and ethical concerns regarding genAI models among the medical students in this study revealed valuable insights. These insights highlighted the psychological and ethical dimensions that could influence the adoption of emerging technologies such as genAI models in medical education.
Our study revealed significant concerns among medical students in Jordan regarding the implications of genAI models for their future careers. Among the 164 students surveyed, we found that a substantial majority, over 72%, had already used genAI models, particularly ChatGPT. This high usage rate is consistent with global trends suggesting that reliance on genAI models for academic support is becoming increasingly normalized. For example, a multi-national study involving participants from Brazil, India, Japan, the UK, and the USA highlighted that most students use ChatGPT for assistance with assignments and expect their peers to do the same, signaling a shift towards widespread acceptance of genAI tools in academic settings [60].
Moreover, a comprehensive study across several Arab countries, including Iraq, Kuwait, Egypt, Lebanon, and Jordan, engaged 2,240 participants and revealed that nearly half were aware of ChatGPT, and over half of those had used it before the study [61]. The favorable disposition towards genAI, exemplified by ChatGPT, was influenced by its ease of use, positive technological attitudes, social influence, perceived usefulness, and minimal perceived risks and anxiety [61]. Similarly, a recent study from the United Arab Emirates (UAE) supports these findings, with the majority of students reporting routine use of ChatGPT, driven by its utility, ease of use, and the positive influence of social and cognitive factors [62]. Furthermore, a recent study among medical students in the U.S. showed that almost half of the surveyed students indicated the use of ChatGPT in medical studies [63]. Taken together, our findings along with those of recent studies collectively highlight a broader acceptance and integration of genAI models among university students, shaped by genAI models’ utility, ease of integration into daily tasks, and the broader, positive social perception of technological engagement [64,65,66,67].
In this study, approximately two-thirds of the medical students reported experiencing at least a mild level of anxiety about genAI. This anxiety was notably pervasive across various demographics, including sex, academic level, and GPA, reflecting a widespread apprehension about genAI among the participants. This broad concern is understandable given the levels of apprehension about genAI observed among university students globally. For example, a recent study conducted in Hong Kong involving 399 students across six universities and ten faculties revealed significant concerns [67]. Students feared that genAI could undermine the value of their university education and impede the development of essential skills such as teamwork, problem-solving, and leadership [67]. Additionally, a study among medical students in the UAE reported that a majority of participants were worried that AI would reduce trust in physicians, besides worries regarding the ethical impact of AI in healthcare [21]. Moreover, a study among health students in Jordan based on the technology acceptance model (TAM) revealed an overall anxious attitude towards ChatGPT among the participants [51]. This highlights the need for educational strategies that effectively integrate genAI into medical curricula while addressing the underlying, justifiable anxiety among medical students.
Despite the notable anxiety reported in this study, the findings revealed even more pronounced levels of mistrust and ethical concerns among the participating medical students. Interestingly, while a related study among college students in Japan found a significant focus on unemployment as a major ethical issue with AI [68], our findings suggest that concerns extend beyond personal job security to include broader ethical and trust issues associated with genAI applications in healthcare.
Of note, the study findings elucidated the significant psychological and ethical impacts of anxiety toward genAI models on medical students, revealing profound concerns across fear, anxiety, mistrust, and ethical considerations. The strong link to the fear construct illustrated how apprehensions about genAI correlated with both general anxiety and specific fears concerning the future of medical practice and job security—a common trend observed with the introduction of new technologies [69,70,71,72]. These anxieties are likely fueled by uncertainties over how genAI might transform traditional medical roles, potentially replacing tasks currently undertaken by humans, thus sparking fears of job displacement and the diminution of human-centric skills in healthcare [73].
The fear of job displacement by genAI is not unique to the healthcare sector as it resonates across various other occupational sectors. For example, studies in fields ranging from accounting to manufacturing have identified a correlation between the rise of AI and increased job displacement concerns, with policy recommendations often advocating for talent retention, investment in upskilling programs, and support mechanisms for those adversely affected by AI adoption [74]. In Germany, a manufacturing sector survey indicated that employee fears regarding AI are among the top barriers to its adoption, with non-managerial staff particularly expressing apprehension about AI implications for job security and workplace dynamics [75]. A study from Turkey revealed that while teacher candidates across various disciplines, ages, and genders show no apprehension about learning AI, they do express significant anxiety about its potential effects on employment and social dynamics [76].
In this study, concerns among medical students about job displacement, while significant, were overshadowed by issues of ethics and mistrust. These findings reflected apprehensions about the ethical and empathetic dimensions of care—areas where AI is often perceived as lacking, as noted by Farhud and Zokaei [36], despite the presence of recent evidence contradicting this viewpoint, including the promising potential of clinically oriented genAI models [77,78]. The pronounced levels of mistrust and ethical concerns in this study may indicate that medical students fear not only the potential job displacement but also doubt the capacity of genAI to fulfill crucial humanistic aspects of healthcare, such as empathy and ethical judgment [79]. Our findings support this perspective, with the mistrust construct receiving the highest level of agreement among participants. This skepticism is deeply rooted in doubts about genAI’s ability to effectively handle the complex aspects of empathy and ethical decision-making, as perceived by the medical students involved in our study.

4.1. Recommendations Based on the Study Findings

The findings of this study highlighted the critical need for evolving medical curricula to incorporate comprehensive AI coverage, including genAI training [80,81,82]. This modification is recommended to illuminate the technical capabilities of AI and to clarify its role in supplementing rather than replacing human physicians [3,83]. This involves emphasizing genAI’s potential to enhance diagnostic precision, personalize treatment plans, and improve administrative efficiency, which has been thoroughly demonstrated in recent literature [3,6,11,84,85].
To address the considerable ethical concerns and mistrust regarding genAI among medical students, it is important to encourage ethical discussions on AI usage, data privacy, and patient-centered care within the medical training framework [86,87]. Incorporating role-playing, case studies, and ethical debates will help students train competently for the intricate moral issues they will encounter in their professional lives [40]. An important study by D’Souza et al. outlined twelve critical tips for addressing the major ethical concerns associated with the use of AI in medical education [88].
Moreover, as AI automates many technical tasks, enhancing uniquely human soft skills like emotional intelligence, communication, leadership, and adaptability becomes crucial [89]. By promoting AI as a collaborative tool in healthcare rather than a competitor, and by highlighting examples where AI and human physicians synergistically improve patient outcomes, educators can foster a view of AI as an indispensable partner in healthcare [90].
Additionally, advocating for policies that protect healthcare workers’ job security in the wake of AI integration is important [22,73,91]. Clear guidelines on AI’s role in healthcare will ensure that it supports rather than replaces medical professionals [92,93]. Ongoing research into AI’s impacts, coupled with open dialogues among AI developers, healthcare professionals, educators, and policymakers, will adapt strategies to ensure AI enhances rather than disrupts healthcare services [1,94].
This study advocates for medical curricula to thoroughly prepare future healthcare providers to integrate AI into their practices effectively, ensuring they deliver compassionate, competent, and ethically sound health care [95].

4.2. Study Limitations

The interpretation of this study’s results must be approached with caution due to several limitations. First, the cross-sectional survey design prevented establishing causality between medical students’ perceptions of genAI and the other study variables. Longitudinal studies are needed to track changes in students’ perceptions of AI as genAI technology progresses rapidly.
Second, the study relied on convenience and snowball sampling approaches for swift data collection, which are expected to introduce notable sampling bias. These methods depend heavily on existing social networks and participants’ willingness to engage, potentially misrepresenting the broader medical student population in Jordan and beyond. Consequently, the results may not be generalizable to all medical students or to other demographic groups due to the non-random sampling used and the inherent selection bias, including a possible over-representation of students more familiar with genAI models.
Third, using social media and instant messaging platforms for student recruitment likely biased the study sample toward students who hold specific views on technology that may not reflect the broader medical students’ perspectives. Distributing the survey solely in Arabic further limited the diversity of responses, potentially impacting the depth of insights into how students’ perceptions vary across different cultural or educational backgrounds.
Finally, while the literature review for constructing the survey instrument was thorough, the selection of sources and subsequent survey questions may have been influenced by our subjective biases, shaped by our backgrounds and personal experiences in healthcare education. This subjective approach might have resulted in overlooking other relevant themes or emerging trends on genAI concerns that were unrecognized. Thus, further testing and validation of the survey instrument used in this study is strongly recommended in future studies.

5. Conclusions

This study illustrated that while medical students are anxious about the impact of genAI on their future job prospects, their deeper concerns revolve around the ethical and trust-related implications of genAI in the medical profession. These findings necessitate an evolution in medical education, advocating for the integration of AI discussions within medical curricula to enhance students’ understanding of how AI can complement human capabilities in healthcare.
To engage with genAI technologies both effectively and ethically, it is important that educational institutions provide comprehensive AI training to equip future physicians with the needed AI skills. The pervasive anxiety, fear, mistrust, and ethical concerns expressed by medical students in this study highlighted the need for curriculum modifications that increase AI familiarity and competency and address the ethical issues of genAI in healthcare. Specifically, medical courses should integrate ethical discussions to tackle mistrust and address concerns about the human-centric aspects of genAI. The results call for a proactive transformation of medical education, preparing students for AI-driven healthcare practices. By doing so, future physicians would be technically skilled, confident, and ethically informed. This comprehensive approach will help alleviate fears and build a foundation for effective and ethical engagement with genAI technologies in medical practice.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org

Author Contributions

Conceptualization, M.S.; methodology, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; software, M.S.; validation, M.S. and M.B.; formal analysis, M.S.; investigation, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; resources, M.S.; data curation, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; writing—original draft preparation, M.S.; writing—review and editing, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; visualization, M.S.; supervision, M.S.; project administration, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board in the Faculty of Pharmacy at the Applied Science Private University (reference number: 2024-PHA-25).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

We are deeply thankful to Hiba Abbasi and Khaled Al-Salahat for their feedback on the content of the initial draft of the developed survey instrument.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI Artificial Intelligence
CI Confidence interval
FAME Fear, Anxiety, Mistrust, and Ethics
genAI generative Artificial Intelligence
GPA Grade Point Average
ICC Intraclass Correlation Coefficient
SD Standard deviation
TAM Technology acceptance model
UAE United Arab Emirates

References

  1. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education 2023, 23, 689. [Google Scholar] [CrossRef]
  2. Sallam, M.; Salim, N.A.; Barakat, M.; Al-Tammemi, A.B. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J 2023, 3, e103. [Google Scholar] [CrossRef]
  3. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef]
  4. Hashmi, N.; Bal, A.S. Generative AI in higher education and beyond. Business Horizons 2024, In Press. [Google Scholar] [CrossRef]
  5. Lee, D.; Arnold, M.; Srivastava, A.; Plastow, K.; Strelan, P.; Ploeckl, F.; Lekkas, D.; Palmer, E. The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Computers and Education: Artificial Intelligence 2024, 6, 100221. [Google Scholar] [CrossRef]
  6. Yilmaz Muluk, S.; Olcucu, N. The Role of Artificial Intelligence in the Primary Prevention of Common Musculoskeletal Diseases. Cureus 2024, 16, e65372. [Google Scholar] [CrossRef]
  7. Sheikh Faisal, R.; Nghia, D.-T.; Niels, P. Generative AI in Education: Technical Foundations, Applications, and Challenges. In Artificial Intelligence for Quality Education; Seifedine, K., Ed.; IntechOpen: Rijeka, 2024; Ch. 2. [Google Scholar] [CrossRef]
  8. Acar, O.A. Commentary: Reimagining marketing education in the age of generative AI. International Journal of Research in Marketing 2024, In Press. [Google Scholar] [CrossRef]
  9. Chiu, T.K.F. The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interactive Learning Environments 2023, 1–17. [Google Scholar] [CrossRef]
  10. Barakat, M.; Salim, N.A.; Sallam, M. Perspectives of University Educators Regarding ChatGPT: A Validation Study Based on the Technology Acceptance Model. Research Square 2024, PREPRINT, 1–25. [Google Scholar] [CrossRef]
  11. Sallam, M.; Al-Farajat, A.; Egger, J. Envisioning the Future of ChatGPT in Healthcare: Insights and Recommendations from a Systematic Identification of Influential Research and a Call for Papers. Jordan Medical Journal 2024, 58, 236–249. [Google Scholar] [CrossRef]
  12. Mijwil, M.; Abotaleb, M.; Guma, A.L.I.; Dhoska, K. Assigning Medical Professionals: ChatGPT’s Contributions to Medical Education and Health Prediction. Mesopotamian Journal of Artificial Intelligence in Healthcare 2024, 2024, 76–83. [Google Scholar] [CrossRef]
  13. Roos, J.; Kasapovic, A.; Jansen, T.; Kaczmarczyk, R. Artificial Intelligence in Medical Education: Comparative Analysis of ChatGPT, Bing, and Medical Students in Germany. JMIR Med Educ 2023, 9, e46482. [Google Scholar] [CrossRef]
  14. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education 2023, 21, 100790. [Google Scholar] [CrossRef]
  15. Safranek, C.W.; Sidamon-Eristoff, A.E.; Gilson, A.; Chartash, D. The Role of Large Language Models in Medical Education: Applications and Implications. JMIR Med Educ 2023, 9, e50945. [Google Scholar] [CrossRef]
  16. Wani, S.U.D.; Khan, N.A.; Thakur, G.; Gautam, S.P.; Ali, M.; Alam, P.; Alshehri, S.; Ghoneim, M.M.; Shakeel, F. Utilization of Artificial Intelligence in Disease Prevention: Diagnosis, Treatment, and Implications for the Healthcare Workforce. Healthcare 2022, 10, 608. [Google Scholar] [CrossRef]
  17. Howard, J. Artificial intelligence: Implications for the future of work. American Journal of Industrial Medicine 2019, 62, 917–926. [Google Scholar] [CrossRef]
  18. George, A.S.; George, A.S.H.; Martin, A.S.G. ChatGPT and the Future of Work: A Comprehensive Analysis of AI’s Impact on Jobs and Employment. Partners Universal International Innovation Journal 2023, 1, 154–186. [Google Scholar] [CrossRef]
  19. Yang, Y.; Ngai, E.W.T.; Wang, L. Resistance to artificial intelligence in health care: Literature review, conceptual framework, and research agenda. Information & Management 2024, 61, 103961. [Google Scholar] [CrossRef]
  20. Stoumpos, A.I.; Kitsios, F.; Talias, M.A. Digital Transformation in Healthcare: Technology Acceptance and Its Applications. Int J Environ Res Public Health 2023, 20, 3407. [Google Scholar] [CrossRef] [PubMed]
  21. Alkhaaldi, S.M.I.; Kassab, C.H.; Dimassi, Z.; Oyoun Alsoud, L.; Al Fahim, M.; Al Hageh, C.; Ibrahim, H. Medical Student Experiences and Perceptions of ChatGPT and Artificial Intelligence: Cross-Sectional Study. JMIR Med Educ 2023, 9, e51302. [Google Scholar] [CrossRef] [PubMed]
  22. Bekbolatova, M.; Mayer, J.; Ong, C.W.; Toma, M. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare 2024, 12, 125. [Google Scholar] [CrossRef] [PubMed]
  23. Bohr, A.; Memarzadeh, K. Chapter 2—The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Bohr, A., Memarzadeh, K., Eds.; Academic Press, 2020; pp. 25–60. [Google Scholar] [CrossRef]
  24. Rony, M.K.K.; Kayesh, I.; Bala, S.D.; Akter, F.; Parvin, M.R. Artificial intelligence in future nursing care: Exploring perspectives of nursing professionals—A descriptive qualitative study. Heliyon 2024, 10, e25718. [Google Scholar] [CrossRef] [PubMed]
  25. Weidener, L.; Fischer, M. Role of Ethics in Developing AI-Based Applications in Medicine: Insights From Expert Interviews and Discussion of Implications. JMIR AI 2024, 3, e51204. [Google Scholar] [CrossRef]
  26. Rahimzadeh, V.; Kostick-Quenet, K.; Blumenthal Barby, J.; McGuire, A.L. Ethics Education for Healthcare Professionals in the Era of ChatGPT and Other Large Language Models: Do We Still Need It? The American Journal of Bioethics 2023, 23, 17–27. [Google Scholar] [CrossRef] [PubMed]
  27. Chen, M.; Zhang, B.; Cai, Z.; Seery, S.; Gonzalez, M.J.; Ali, N.M.; Ren, R.; Qiao, Y.; Xue, P.; Jiang, Y. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Front Med (Lausanne) 2022, 9, 990604. [Google Scholar] [CrossRef] [PubMed]
  28. Fazakarley, C.A.; Breen, M.; Leeson, P.; Thompson, B.; Williamson, V. Experiences of using artificial intelligence in healthcare: a qualitative study of UK clinician and key stakeholder perspectives. BMJ Open 2023, 13, e076950. [Google Scholar] [CrossRef]
  29. Zhang, P.; Kamel Boulos, M.N. Generative AI in Medicine and Healthcare: Promises, Opportunities and Challenges. Future Internet 2023, 15, 286. [Google Scholar] [CrossRef]
  30. Kerasidou, A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ 2020, 98, 245–250. [Google Scholar] [CrossRef]
  31. Adigwe, O.P.; Onavbavba, G.; Sanyaolu, S.E. Exploring the matrix: knowledge, perceptions and prospects of artificial intelligence and machine learning in Nigerian healthcare. Front Artif Intell 2023, 6, 1293297. [Google Scholar] [CrossRef]
  32. Alghamdi, S.A.; Alashban, Y. Medical science students’ attitudes and perceptions of artificial intelligence in healthcare: A national study conducted in Saudi Arabia. Journal of Radiation Research and Applied Sciences 2024, 17, 100815. [Google Scholar] [CrossRef]
  33. Bala, I.; Pindoo, I.; Mijwil, M.; Abotaleb, M.; Yundong, W. Ensuring Security and Privacy in Healthcare Systems: A Review Exploring Challenges, Solutions, Future Trends, and the Practical Applications of Artificial Intelligence. Jordan Medical Journal 2024. [Google Scholar]
  34. Jeyaraman, M.; Balaji, S.; Jeyaraman, N.; Yadav, S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023, 15, e43262. [Google Scholar] [CrossRef] [PubMed]
  35. Buabbas, A.J.; Miskin, B.; Alnaqi, A.A.; Ayed, A.K.; Shehab, A.A.; Syed-Abdul, S.; Uddin, M. Investigating Students’ Perceptions towards Artificial Intelligence in Medical Education. Healthcare 2023, 11, 1298. [Google Scholar] [CrossRef] [PubMed]
  36. Farhud, D.D.; Zokaei, S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran J Public Health 2021, 50, i. [Google Scholar] [CrossRef]
  37. Kim, J.; Kadkol, S.; Solomon, I.; Yeh, H.; Soh, J.; Nguyen, T.; Choi, J.; Lee, S.; Srivatsa, A.; Nahass, G.; et al. AI Anxiety: A Comprehensive Analysis of Psychological Factors and Interventions. SSRN Electronic Journal 2023. [Google Scholar] [CrossRef]
  38. Oniani, D.; Hilsman, J.; Peng, Y.; Poropatich, R.K.; Pamplin, J.C.; Legault, G.L.; Wang, Y. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. NPJ Digit Med 2023, 6, 225. [Google Scholar] [CrossRef]
  39. Sallam, M.; Al-Farajat, A.; Egger, J. Envisioning the Future of ChatGPT in Healthcare: Insights and Recommendations from a Systematic Identification of Influential Research and a Call for Papers. Jordan Medical Journal 2024, 58. [Google Scholar] [CrossRef]
  40. Dave, M.; Patel, N. Artificial intelligence in healthcare and education. Br Dent J 2023, 234, 761–764. [Google Scholar] [CrossRef]
  41. Grassini, S. Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Education Sciences 2023, 13, 692. [Google Scholar] [CrossRef]
  42. Preiksaitis, C.; Rose, C. Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review. JMIR Med Educ 2023, 9, e48785. [Google Scholar] [CrossRef]
  43. Karabacak, M.; Ozkara, B.B.; Margetis, K.; Wintermark, M.; Bisdas, S. The Advent of Generative Language Models in Medical Education. JMIR Med Educ 2023, 9, e48163. [Google Scholar] [CrossRef] [PubMed]
  44. Caporusso, N. Generative artificial intelligence and the emergence of creative displacement anxiety. Research Directs in Psychology and Behavior 2023, 3, 1–12. [Google Scholar] [CrossRef]
  45. Woodruff, A.; Shelby, R.; Kelley, P.G.; Rousso-Schindler, S.; Smith-Loud, J.; Wilcox, L. How Knowledge Workers Think Generative AI Will (Not) Transform Their Industries. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 2024; Article 641.
  46. Oniani, D.; Hilsman, J.; Peng, Y.; Poropatich, R.K.; Pamplin, J.C.; Legault, G.L.; Wang, Y. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. npj Digital Medicine 2023, 6, 225. [Google Scholar] [CrossRef] [PubMed]
  47. Meskó, B.; Topol, E.J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digital Medicine 2023, 6, 120. [Google Scholar] [CrossRef] [PubMed]
  48. Ansari, A.; Ansari, A. Consequences of AI Induced Job Displacement. International Journal of Business, Analytics, and Technology 2024, 2, 4–19, Available from: https://ijbat.com/index.php/IJBAT/article/view/18/31. [Google Scholar]
  49. Ooi, K.-B.; Tan, G.W.-H.; Al-Emran, M.; Al-Sharafi, M.A.; Capatina, A.; Chakraborty, A.; Dwivedi, Y.K.; Huang, T.-L.; Kar, A.K.; Lee, V.-H.; et al. The Potential of Generative Artificial Intelligence Across Disciplines: Perspectives and Future Directions. Journal of Computer Information Systems 1–32. [CrossRef]
  50. Hosseini, M.; Gao, C.A.; Liebovitz, D.M.; Carvalho, A.M.; Ahmad, F.S.; Luo, Y.; MacDonald, N.; Holmes, K.L.; Kho, A. An exploratory survey about using ChatGPT in education, healthcare, and research. PLoS One 2023, 18, e0292216. [Google Scholar] [CrossRef] [PubMed]
  51. Sallam, M.; Salim, N.A.; Barakat, M.; Al-Mahzoum, K.; Al-Tammemi, A.B.; Malaeb, D.; Hallit, R.; Hallit, S. Assessing Health Students’ Attitudes and Usage of ChatGPT in Jordan: Validation Study. JMIR Med Educ 2023, 9, e48254. [Google Scholar] [CrossRef]
  52. Alanzi, T.M. Impact of ChatGPT on Teleconsultants in Healthcare: Perceptions of Healthcare Experts in Saudi Arabia. Journal of Multidisciplinary Healthcare 2023, 16, 2309–2321. [Google Scholar] [CrossRef]
  53. Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res 2023, 25, e48009. [Google Scholar] [CrossRef] [PubMed]
  54. Javaid, M.; Haleem, A.; Singh, R.P. ChatGPT for healthcare services: An emerging stage for an innovative perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2023, 3, 100105. [Google Scholar] [CrossRef]
  55. Zaman, M. ChatGPT for healthcare sector: SWOT analysis. International Journal of Research in Industrial Engineering 2023, 12, 221–233. [Google Scholar] [CrossRef]
  56. Özbek Güven, G.; Yilmaz, Ş.; Inceoğlu, F. Determining medical students’ anxiety and readiness levels about artificial intelligence. Heliyon 2024, 10, e25894. [Google Scholar] [CrossRef] [PubMed]
  57. Saif, N.; Khan, S.U.; Shaheen, I.; Alotaibi, F.A.; Alnfiai, M.M.; Arif, M. Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior 2024, 154, 108097. [Google Scholar] [CrossRef]
  58. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 2007, 39, 175–191. [Google Scholar] [CrossRef]
  59. Faul, F.; Erdfelder, E.; Buchner, A.; Lang, A.-G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 2009, 41, 1149–1160. [Google Scholar] [CrossRef]
  60. Ibrahim, H.; Liu, F.; Asim, R.; Battu, B.; Benabderrahmane, S.; Alhafni, B.; Adnan, W.; Alhanai, T.; AlShebli, B.; Baghdadi, R.; et al. Perception, performance, and detectability of conversational artificial intelligence across 32 university courses. Sci Rep 2023, 13, 12187.
  61. Abdaljaleel, M.; Barakat, M.; Alsanafi, M.; Salim, N.A.; Abazid, H.; Malaeb, D.; Mohammed, A.H.; Hassan, B.A.R.; Wayyes, A.M.; Farhan, S.S.; et al. A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Scientific Reports 2024, 14, 1983.
  62. Sallam, M.; Elsayed, W.; Al-Shorbagy, M.; Barakat, M.; El Khatib, S.; Ghach, W.; Alwan, N.; Hallit, S.; Malaeb, D. ChatGPT usage and attitudes are driven by perceptions of usefulness, ease of use, risks, and psycho-social impact: a study among university students in the UAE. Frontiers in Education 2024, 9, 1414758.
  63. Zhang, J.S.; Yoon, C.; Williams, D.K.A.; Pinkas, A. Exploring the Usage of ChatGPT Among Medical Students in the United States. Journal of Medical Education and Curricular Development 2024, 11, 23821205241264695.
  64. Yusuf, A.; Pervin, N.; Román-González, M.; Noor, N.M. Generative AI in education and research: A systematic mapping review. Review of Education 2024, 12, e3489.
  65. Raman, R.; Mandal, S.; Das, P.; Kaur, T.; Sanjanasri, J.P.; Nedungadi, P. Exploring University Students’ Adoption of ChatGPT Using the Diffusion of Innovation Theory and Sentiment Analysis With Gender Dimension. Human Behavior and Emerging Technologies 2024, 2024, 3085910.
  66. Almogren, A.S.; Al-Rahmi, W.M.; Dahri, N.A. Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective. Heliyon 2024, 10, e31887.
  67. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education 2023, 20, 43.
  68. Ghotbi, N.; Ho, M.T.; Mantello, P. Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & SOCIETY 2022, 37, 283–290.
  69. McClure, P.K. “You’re Fired,” Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment. Social Science Computer Review 2017, 36, 139–156.
  70. Erebak, S.; Turgut, T. Anxiety about the speed of technological development: Effects on job insecurity, time estimation, and automation level preference. The Journal of High Technology Management Research 2021, 32, 100419.
  71. Nam, T. Technology usage, expected job sustainability, and perceived job insecurity. Technological Forecasting and Social Change 2019, 138, 155–165.
  72. Koo, B.; Curtis, C.; Ryan, B. Examining the impact of artificial intelligence on hotel employees through job insecurity perspectives. International Journal of Hospitality Management 2021, 95, 102763.
  73. Rony, M.K.K.; Parvin, M.R.; Wahiduzzaman, M.; Debnath, M.; Bala, S.D.; Kayesh, I. “I Wonder if my Years of Training and Expertise Will be Devalued by Machines”: Concerns About the Replacement of Medical Professionals by Artificial Intelligence. SAGE Open Nurs 2024, 10, 23779608241245220.
  74. Rawashdeh, A. The consequences of artificial intelligence: an investigation into the impact of AI on job displacement in accounting. Journal of Science and Technology Policy Management 2023, ahead-of-print.
  75. Link, J.; Stowasser, S. Negative Emotions Towards Artificial Intelligence in the Workplace—Motivation and Method for Designing Demonstrators. In Proceedings of the Artificial Intelligence in HCI, Cham, Switzerland, 2024; pp. 75–86.
  76. Hopcan, S.; Türkmen, G.; Polat, E. Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates. Education and Information Technologies 2024, 29, 7281–7301.
  77. Maida, E.; Moccia, M.; Palladino, R.; Borriello, G.; Affinito, G.; Clerico, M.; Repice, A.M.; Di Sapio, A.; Iodice, R.; Spiezia, A.L.; et al. ChatGPT vs. neurologists: a cross-sectional study investigating preference, satisfaction ratings and perceived empathy in responses among people living with multiple sclerosis. J Neurol 2024, 271, 4057–4066.
  78. Bragazzi, N.L.; Garbarino, S. Toward Clinical Generative AI: Conceptual Framework. JMIR AI 2024, 3, e55957.
  79. Tucci, V.; Saary, J.; Doyle, T.E. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. Journal of Medical Artificial Intelligence 2021, 5.
  80. Ogunleye, B.; Zakariyyah, K.I.; Ajao, O.; Olayinka, O.; Sharma, H. A Systematic Review of Generative AI for Teaching and Learning Practice. Education Sciences 2024, 14, 636.
  81. Paranjape, K.; Schinkel, M.; Nannan Panday, R.; Car, J.; Nanayakkara, P. Introducing Artificial Intelligence Training in Medical Education. JMIR Med Educ 2019, 5, e16048.
  82. Ngo, B.; Nguyen, D.; vanSonnenberg, E. The Cases for and against Artificial Intelligence in the Medical School Curriculum. Radiol Artif Intell 2022, 4, e220074.
  83. Sauerbrei, A.; Kerasidou, A.; Lucivero, F.; Hallowell, N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak 2023, 23, 73.
  84. Yim, D.; Khuntia, J.; Parameswaran, V.; Meyers, A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024, 12, e52073.
  85. Sallam, M. Bibliometric top ten healthcare-related ChatGPT publications in the first ChatGPT anniversary. Narra J 2024, 4, e917.
  86. Tang, L.; Li, J.; Fantus, S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023, 9, 20552076231186064.
  87. Siala, H.; Wang, Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine 2022, 296, 114782.
  88. Franco D’Souza, R.; Mathew, M.; Mishra, V.; Surapaneni, K.M. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Med Educ Online 2024, 29, 2330250.
  89. Zirar, A.; Ali, S.I.; Islam, N. Worker and workplace Artificial Intelligence (AI) coexistence: Emerging themes and research agenda. Technovation 2023, 124, 102747.
  90. Shuaib, A. Transforming Healthcare with AI: Promises, Pitfalls, and Pathways Forward. Int J Gen Med 2024, 17, 1765–1771.
  91. Khan, B.; Fatima, H.; Qureshi, A.; Kumar, S.; Hanan, A.; Hussain, J.; Abdullah, S. Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector. Biomed Mater Devices 2023, 1–8.
  92. Reddy, S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implementation Science 2024, 19, 27.
  93. Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J 2021, 8, e188–e194.
  94. Maleki Varnosfaderani, S.; Forouzanfar, M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering 2024, 11, 337.
  95. Naqvi, W.M.; Sundus, H.; Mishra, G.; Muthukrishnan, R.; Kandakurti, P.K. AI in Medical Education Curriculum: The Future of Healthcare Learning. European Journal of Therapeutics 2024, 30, e23–e25.
Figure 1. The generative artificial intelligence (genAI) models self-reported as used by the study participants.
Figure 2. The Intraclass Correlation Coefficient (ICC) for the four FAME sub-scales’ items.
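For readers who wish to reproduce an internal consistency analysis of the kind summarized in Figure 2, the following is a minimal Python sketch using the pingouin package. The DataFrame, file name, and item column names (fear1 to fear3) are hypothetical, and the figure does not state which ICC model the authors report, so the sketch simply prints all variants.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("fame_responses.csv")  # hypothetical survey export

# Hypothetical wide-format data: one row per student, three columns
# holding the three Fear items of the FAME scale (names are made up).
wide = df[["fear1", "fear2", "fear3"]].copy()
wide["student"] = wide.index

# pingouin expects long format: one rating per (target, rater) pair.
long = wide.melt(id_vars="student", var_name="item", value_name="score")

# Treating the items as "raters" of each student yields the ICC variants.
icc = pg.intraclass_corr(data=long, targets="student",
                         raters="item", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```

The same steps would be repeated for the Anxiety, Mistrust, and Ethics items to obtain the four sub-scale coefficients.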
Figure 3. Box-and-whisker plots of the distribution of the four FAME (Fear, Anxiety, Mistrust, and Ethics) construct scores.
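The construct scores plotted in Figure 3 can be derived from the item responses. A minimal sketch follows, assuming each construct score is the sum of its three five-point Likert items (possible range 3–15, a scoring scheme consistent with the reported construct means, though not stated explicitly here); all column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("fame_responses.csv")  # hypothetical survey export

# Assumed mapping of FAME constructs to their three item columns.
constructs = {
    "Fear": ["fear1", "fear2", "fear3"],
    "Anxiety": ["anx1", "anx2", "anx3"],
    "Mistrust": ["mist1", "mist2", "mist3"],
    "Ethics": ["eth1", "eth2", "eth3"],
}
for name, items in constructs.items():
    df[name] = df[items].sum(axis=1)  # construct score = sum of items

# Distribution summaries of the kind visualized in the box plots.
print(df[list(constructs)].describe())
```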
Figure 4. Error bars showing the four FAME construct scores stratified by the participating medical students’ level of anxiety towards generative artificial intelligence (genAI). CI: confidence interval for the mean.
Table 1. General features of participating medical students (N = 164).
| Variable | Category | Count | Percentage |
|---|---|---|---|
| Sex | Male | 88 | 53.7% |
|  | Female | 76 | 46.3% |
| Academic year | First year | 25 | 15.2% |
|  | Second year | 52 | 31.7% |
|  | Third year | 36 | 22.0% |
|  | Fourth year | 20 | 12.2% |
|  | Fifth year | 19 | 11.6% |
|  | Sixth year | 12 | 7.3% |
| GPA ¹ | Unsatisfactory | 8 | 4.9% |
|  | Satisfactory | 15 | 9.1% |
|  | Good | 57 | 34.8% |
|  | Very good | 65 | 39.6% |
|  | Excellent | 19 | 11.6% |
| Desired specialty classification based on the risk of job loss due to genAI ² | Low risk ³ | 68 | 51.5% |
|  | Middle risk ⁴ | 48 | 36.4% |
|  | High risk ⁵ | 16 | 12.1% |
| How anxious are you about genAI models, like ChatGPT, as a future physician? | Not at all | 56 | 34.1% |
|  | Slightly anxious | 68 | 41.5% |
|  | Somewhat anxious | 36 | 22.0% |
|  | Extremely anxious | 4 | 2.4% |
| Number of genAI models used | 0 | 45 | 27.4% |
|  | 1 | 77 | 47.0% |
|  | 2 | 33 | 20.1% |
|  | 3 | 5 | 3.0% |
|  | 4 | 4 | 2.4% |

¹ GPA: Grade Point Average; ² genAI: generative Artificial Intelligence; percentages for this variable are based on the 132 students who reported a desired specialty; ³ Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, or Plastic Surgery; ⁴ Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, or Anesthesiology; ⁵ High-risk specialties: Radiology, Pathology, or Dermatology.
Table 2. Anxiety of the participating medical students, as future physicians, regarding generative artificial intelligence (genAI) models.
Responses to the question “How anxious are you about genAI models, like ChatGPT, as a future physician?” are shown as Count (%) in the two middle columns.

| Variable | Category | Not at all | Slightly, somewhat, or extremely anxious | p value |
|---|---|---|---|---|
| Age | Mean±SD ² | 21.66±3.12 | 20.80±1.61 | 0.186 |
| Sex | Male | 29 (33.0) | 59 (67.0) | 0.729 |
|  | Female | 27 (35.5) | 49 (64.5) |  |
| Level | Basic | 36 (31.9) | 77 (68.1) | 0.358 |
|  | Clinical | 20 (39.2) | 31 (60.8) |  |
| GPA ¹ | Unsatisfactory, satisfactory, or good | 26 (32.5) | 54 (67.5) | 0.664 |
|  | Very good or excellent | 30 (35.7) | 54 (64.3) |  |
| Desired specialty | Low risk ³ | 19 (27.9) | 49 (72.1) | 0.504 |
|  | Middle risk ⁴ | 18 (37.5) | 30 (62.5) |  |
|  | High risk ⁵ | 6 (37.5) | 10 (62.5) |  |
| Number of genAI models used | 0 | 14 (31.1) | 31 (68.9) | 0.895 |
|  | 1 | 28 (36.4) | 49 (63.6) |  |
|  | 2 | 10 (30.3) | 23 (69.7) |  |
|  | 3 | 2 (40.0) | 3 (60.0) |  |
|  | 4 | 2 (50.0) | 2 (50.0) |  |

¹ GPA: Grade Point Average; ² SD: standard deviation; ³ Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, or Plastic Surgery; ⁴ Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, or Anesthesiology; ⁵ High-risk specialties: Radiology, Pathology, or Dermatology.
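As an illustration of how the comparisons in Table 2 could be computed, the sketch below applies a chi-square test of independence for the categorical rows and an independent-samples t-test for the age row. The source does not name the tests actually used, and all DataFrame and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

df = pd.read_csv("fame_responses.csv")  # hypothetical survey export

# Dichotomize the anxiety item as in Table 2: 'Not at all' versus any
# reported level of anxiety (column names are made up).
anxious = df["genai_anxiety"] != "Not at all"

# Categorical rows (e.g., sex): a plausible choice is a chi-square test
# of independence on the 2x2 contingency table.
ct = pd.crosstab(df["sex"], anxious)
chi2, p, dof, expected = chi2_contingency(ct)
print(f"sex vs. anxiety: chi2 = {chi2:.2f}, p = {p:.3f}")

# The age row compares group means, e.g., an independent-samples t-test.
t, p_age = ttest_ind(df.loc[~anxious, "age"], df.loc[anxious, "age"])
print(f"age: t = {t:.2f}, p = {p_age:.3f}")
```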
Table 3. Determinants of the FAME (Fear, Anxiety, Mistrust, and Ethics) sub-scale scores among the participating medical students, including their anxiety towards generative artificial intelligence (genAI).
| Variable | Category | Fear, Mean±SD ⁶ | p value | Anxiety, Mean±SD | p value | Mistrust, Mean±SD | p value | Ethics, Mean±SD | p value |
|---|---|---|---|---|---|---|---|---|---|
| Sex | Male | 9.55±3.31 | 0.972 | 9.00±3.61 | 0.739 | 12.30±2.75 | 0.728 | 10.86±3.10 | 0.983 |
|  | Female | 9.42±3.77 |  | 8.80±3.78 |  | 12.41±2.82 |  | 10.86±2.66 |  |
| Level | Basic | 9.57±3.54 | 0.683 | 8.85±3.62 | 0.781 | 12.33±2.70 | 0.596 | 10.65±2.86 | 0.108 |
|  | Clinical | 9.31±3.52 |  | 9.04±3.84 |  | 12.39±2.97 |  | 11.31±2.96 |  |
| GPA ¹ | Unsatisfactory, satisfactory, or good | 9.51±3.53 | 0.803 | 9.36±3.81 | 0.082 | 12.14±2.87 | 0.277 | 10.86±2.88 | 0.987 |
|  | Very good or excellent | 9.46±3.54 |  | 8.48±3.52 |  | 12.55±2.69 |  | 10.86±2.93 |  |
| Desired specialty | Low risk ³ | 9.85±3.43 | 0.504 | 9.09±3.62 | 0.796 | 12.35±2.87 | 0.953 | 11.18±2.88 | 0.812 |
|  | Middle risk ⁴ | 9.10±3.58 |  | 8.79±3.92 |  | 12.44±2.74 |  | 11.19±2.71 |  |
|  | High risk ⁵ | 8.94±3.57 |  | 9.63±3.58 |  | 12.94±1.88 |  | 10.75±2.98 |  |
| Number of genAI ² models used | 0 | 10.31±3.38 | 0.362 | 9.47±3.49 | 0.581 | 12.27±2.59 | 0.106 | 10.87±2.52 | 0.496 |
|  | 1 | 9.06±3.56 |  | 8.51±3.61 |  | 12.57±2.91 |  | 10.78±3.05 |  |
|  | 2 | 9.39±3.60 |  | 8.88±3.85 |  | 12.64±2.19 |  | 11.18±2.78 |  |
|  | 3 | 9.80±4.21 |  | 10.40±5.90 |  | 9.80±4.44 |  | 11.40±4.93 |  |
|  | 4 | 8.75±3.20 |  | 8.75±3.20 |  | 9.75±2.63 |  | 9.00±2.58 |  |
| How anxious are you about genAI models, like ChatGPT, as a future physician? | Not at all | 7.48±3.62 | **<0.001** | 7.29±4.06 | **<0.001** | 12.13±3.04 | 0.590 | 10.04±2.97 | **0.014** |
|  | Slightly, somewhat, or extremely anxious | 10.53±3.00 |  | 9.75±3.17 |  | 12.46±2.64 |  | 11.29±2.78 |  |

¹ GPA: Grade Point Average; ² genAI: generative Artificial Intelligence; ³ Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, or Plastic Surgery; ⁴ Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, or Anesthesiology; ⁵ High-risk specialties: Radiology, Pathology, or Dermatology; ⁶ SD: standard deviation. Statistically significant p values are shown in bold.
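The group comparisons in the final rows of Table 3 could be reproduced along the following lines; since the source does not specify the test behind these p values, both a parametric and a nonparametric option are shown. All column names are hypothetical and reuse the construct scores from the earlier sketch.

```python
import pandas as pd
from scipy.stats import ttest_ind, mannwhitneyu

df = pd.read_csv("fame_responses.csv")  # hypothetical survey export
anxious = df["genai_anxiety"] != "Not at all"  # dichotomized as in Table 2

# Compare each FAME construct score between the 'Not at all' group and
# the anxious group, mirroring the last two rows of Table 3.
for construct in ["Fear", "Anxiety", "Mistrust", "Ethics"]:
    a = df.loc[~anxious, construct]
    b = df.loc[anxious, construct]
    t, p_t = ttest_ind(a, b)        # parametric option
    u, p_mw = mannwhitneyu(a, b)    # nonparametric alternative
    print(f"{construct}: t-test p = {p_t:.3f}, Mann-Whitney p = {p_mw:.3f}")
```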
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.