1. Introduction
The widespread availability of generative Artificial Intelligence (genAI) models (e.g., ChatGPT, Gemini, Microsoft Copilot, and Llama) is set to transform various occupational sectors, including higher education, especially medical education and healthcare practice [1,2,3,4,5,6]. For example, genAI can help automate routine administrative and educational tasks such as scheduling, aid in responding to student inquiries, and assist in delivering basic educational content in an engaging and personalized style [7,8]. In turn, genAI models can help educators and administrators focus on more complex, value-added activities in higher education, such as personalized teaching and research [9,10].
Additionally, genAI can be extremely helpful in medical education by offering sophisticated, novel simulations and modeling, which are especially valuable for practical training in healthcare education [2,3,11,12,13]. Moreover, genAI models can facilitate educational initiatives without substantial additional resources, which helps to promote educational equity at the global level [14].
Several potential benefits of genAI models in healthcare education, research, and practice have been well recognized and characterized [3,11,15]; however, there are growing concerns about the negative implications of these AI models for the workforce, particularly regarding job displacement and the changing nature of health professional roles [16,17,18]. Such concerns can lead to resistance to AI implementation among health professionals, which can hinder the full realization of genAI advantages in healthcare [19].
Medical students are a key segment of the future healthcare workforce. Therefore, their ability to adapt their career paths to emerging technologies such as genAI is essential if they are to thrive amid evolving healthcare practice dynamics, which will inevitably be shaped by AI integration [20,21].
The rapid adoption of genAI models in healthcare has raised concerns about job security, especially among medical students who are set to enter the healthcare workforce [22,23]. Thus, evaluating medical students’ anxiety and fear regarding potential job displacement by genAI appears to be a timely research topic. Understanding medical students’ concerns about genAI models can offer insights into the effects of this novel technology on professional identity and future competitiveness, as well as the ethical dilemmas that may arise from the integration of genAI models in healthcare [24,25,26].
The increasing popularity of genAI models in healthcare has heightened job displacement concerns, with evidence suggesting gaps in practical AI experience and knowledge among currently practicing health professionals [27,28]. An AI-driven shift in healthcare practice could redirect the focus from traditional patient-centered care to technology-centered methods, raising questions about the need to redefine the future roles of health professionals [29]. Such AI-driven changes could also lead to depersonalization of care, a major challenge to the core healthcare values of empathy and human judgment, which underscores the critical need to explore these evolving dynamics in a comprehensive, evidence-based manner [30,31].
On a related note, ethical considerations are central to medical students’ perceptions of the utility of genAI models in healthcare practice [32,33]. Concerns about patient privacy, data security, and potential AI-induced healthcare disparities illustrate the complex ethical challenges that future physicians will face [34]. Additionally, there are significant questions regarding whether current medical education practices adequately prepare medical students for a healthcare practice that may soon be dominated by AI [35].
Based on the aforementioned points, the integration of genAI models into healthcare is expected to provoke anxiety among medical students, with subsequent fear of job displacement, loss of professional identity, and ethical dilemmas [36,37,38]. Therefore, investigating medical students’ perspectives on genAI models and their concerns about this emerging technology is important. This area of investigation can be crucial from a medical education perspective, in order to prepare future physicians for an AI-driven era in healthcare practice and to equip them with the tools necessary to improve patient care and healthcare outcomes [1,39,40,41].
Therefore, this study aimed to assess medical students’ fears, anxiety, and concerns regarding the roles of genAI models in healthcare, and to identify the key factors driving this fear and anxiety. The ultimate aim was to provide preliminary evidence to guide the development of targeted AI integration interventions in medical education, policy modifications, and improvements in genAI implementation strategies, thereby helping to ensure that future physicians are prepared and confident in AI-integrated healthcare settings.
2. Materials and Methods
2.1. Study Design and Ethical Permission
This pilot study was based on a cross-sectional survey of medical students currently studying in Jordan. A convenience sampling strategy was employed to expedite participant recruitment given the timeliness of the study topic. Potential participants were recruited via social media and instant messaging applications, including Facebook, X (formerly Twitter), Instagram, LinkedIn, Messenger, and WhatsApp, all of which are popular among the target demographic, namely medical students in Jordan.
The sampling process was initiated by the authors who were medical students at the time of survey distribution (Y.A., O.A., A.A.-S., Z.A., and A.N.A.), who were encouraged to further distribute the survey among their acquaintances among medical students in Jordan (snowball sampling). The survey was distributed in Arabic via Google Forms as the questionnaire host, and no incentives were offered for participation. Survey distribution took place during July–August 2024.
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Pharmacy at the Applied Science Private University (reference number: 2024-PHA-25), granted on 4 July 2024.
2.2. Survey Instrument Development
The initial phase of survey development involved a literature review conducted independently by the first and senior authors (M.S. and M.B.) using Google Scholar; the search concluded on 4 June 2024. The following search terms were used to ensure a comprehensive understanding of the current role of genAI in healthcare and medical education: “generative AI in healthcare education”; “generative AI and anxiety of health professionals”; “generative AI concerns among health professionals”; “fear of generative AI in healthcare”; “healthcare job displacement by generative AI”; “ChatGPT anxiety in healthcare”; “ChatGPT fear in healthcare”; “ChatGPT concerns in healthcare”; “healthcare job displacement by ChatGPT”; “medical job displacement by ChatGPT”; and “AI anxiety among medical students”. This was followed by the identification of research records in English deemed relevant to the development of a tailored survey instrument for the study objectives, through joint discussions between the first and senior authors (M.S. and M.B.) [3,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57].
The first, second, and senior authors collaborated to critically analyze the retrieved literature, leading to the development of a set of survey questions aligned with the study objectives. The first and senior authors have a combined 13 years of teaching experience in healthcare, while the second author is a recently graduated Doctor of Medicine (M.D.). This combination of broad academic expertise and recent practical experience in medical learning was intended to enhance the content validity of the survey questions, with the subsequent identification and integration of four themes subjectively deemed relevant to the study objectives, as follows: (1) Fear related to possible job displacement driven by the availability of efficient, robust, and beneficial genAI; (2) Anxiety regarding long-term medical career prospects in light of the availability of genAI models; (3) Mistrust related to concerns that genAI will reduce the human role in healthcare practice, leading to dehumanization in medical interactions and decision-making; and (4) Ethical dilemmas and concerns that may arise if genAI model utilization becomes routine in healthcare practice. Based on the initials of these dimensions, the scale was referred to as the “FAME” scale.
To enhance the validity of the survey, we checked the content of the initial draft by seeking feedback from two lecturers involved in medical education: a lecturer in Microbiology and Immunology, specializing in basic medical sciences, and a lecturer in Internal Medicine with a specialty in Endocrinology and extensive involvement in supervising fourth-year medical students during their introductory course for clinical rotations. This process involved seeking specific feedback on the contextual relevance and comprehensiveness of the four included themes, thereby enhancing the content validity of the novel survey instrument.
Afterward, the survey instrument underwent a pilot test involving five medical students selected to represent a diverse range of perspectives within medical education (three third-year students and two fifth-year students). Feedback was sought on the clarity, relevance, and language of the survey items. Based on this feedback, minor refinements were made to the survey items, including simplifying complex language and enhancing the flow of the questionnaire.
2.3. Final Survey Used
Prior to participation, all medical students were provided with detailed information about the study objectives, inclusion criteria, and the confidentiality measures protecting the anonymity of responses. Electronic informed consent was mandatory: each participant had to answer “Yes” to a consent item before the survey would open.
The survey started with a demographics section gathering data on age (as a scale variable); sex (male vs. female); current year of study (1st, 2nd, 3rd, 4th, 5th, or 6th year), later classified as pre-clinical (1st, 2nd, or 3rd year) vs. clinical (4th, 5th, or 6th year); latest Grade Point Average (GPA), classified as unsatisfactory, satisfactory, good, very good, or excellent and later dichotomized as low (unsatisfactory, satisfactory, or good) vs. high (very good or excellent); and the desired future specialty (Pediatrics; General Surgery; Forensic Medicine; Orthopedics; Neurosurgery; Ophthalmology; Plastic Surgery; Internal Medicine; Psychiatry; Emergency Medicine; Obstetrics and Gynecology; Urology; Anesthesiology; Radiology; Pathology; Dermatology; or Others).
Then, the survey included the primary study measure, a single question assessed on a 4-point Likert scale (not at all, slightly anxious, somewhat anxious, extremely anxious) gauging the level of anxiety medical students feel towards genAI technologies such as ChatGPT: “How anxious are you about genAI models like ChatGPT as a future physician?”.
Next, the participants were asked about their previous use of genAI models (ChatGPT, MS Copilot, Gemini, My AI Snapchat, others).
Finally, the main body of the survey consisted of twelve items assessed on a 5-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree). These items were designed as three questions per dimension, as follows: (1) I feel concerned that AI will reduce the number of available jobs in healthcare; (2) I am concerned that AI will exceed the need for human skills in many areas of healthcare; (3) The advancement of AI in healthcare makes me anxious about my long-term career; (4) I am worried that I will not be able to compete with AI for jobs in healthcare; (5) The thought of competing with AI for career opportunities in healthcare makes me feel anxious; (6) I am worried that AI will demand competencies beyond the current scope of medical teaching; (7) I believe that AI is missing the aspects of insight and empathy needed in medical practice; (8) I believe that AI does not take into account the personal and emotional aspects of patient care; (9) I believe that the essence of the medical profession will not be affected by AI technologies; (10) I believe that AI will lead to ethical dilemmas in healthcare; (11) I fear that AI in healthcare will compromise patient privacy and data security; and (12) I worry that AI will increase inequalities in patient care.
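For clarity, the scoring logic implied by this design can be sketched as follows. This is an illustrative reconstruction, not the authors’ scoring code: the 1–5 coding direction, the grouping of items 1–3 (Fear), 4–6 (Anxiety), 7–9 (Mistrust), and 10–12 (Ethics), and the reverse-scoring of the reverse-worded item 9 are assumptions, chosen to be consistent with the three-items-per-dimension design and the construct score ranges reported in the Results.

```python
# Illustrative FAME scoring sketch (assumptions noted in the text above).
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}
CONSTRUCTS = {"Fear": [1, 2, 3], "Anxiety": [4, 5, 6],
              "Mistrust": [7, 8, 9], "Ethics": [10, 11, 12]}

def score_fame(responses, reverse_items=frozenset({9})):
    """responses: dict mapping item number (1-12) to a Likert label.
    Item 9 is reverse-worded, so it is reverse-scored here (assumption).
    Each construct score is the sum of its three items (range 3-15)."""
    scores = {}
    for construct, items in CONSTRUCTS.items():
        total = 0
        for item in items:
            value = LIKERT[responses[item]]
            total += (6 - value) if item in reverse_items else value
        scores[construct] = total
    return scores
```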
2.4. Sample Size Calculation
The required sample size was calculated using G*Power software [58,59], assuming a small-to-medium effect size of 0.3, an α level of 0.050, and a target power of 95%. Based on these specifications, recruitment of 147 participants was determined to be essential to ensure adequate statistical power for the comparison between two groups (medical students who were not anxious at all regarding AI vs. medical students who expressed anxiety at any level).
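As a rough cross-check, an analogous a priori calculation can be run outside G*Power. The exact G*Power test family is not stated above, so the sketch below assumes a chi-square-style calculation with effect size w = 0.3; under these assumptions the result lands close to, though not exactly at, the reported 147.

```python
# Hypothetical re-computation of the a priori sample size (the test family
# is an assumption here: chi-square with w = 0.3, alpha = 0.05, power = 0.95).
import math
from statsmodels.stats.power import GofChisquarePower

n = GofChisquarePower().solve_power(effect_size=0.3, alpha=0.05,
                                    power=0.95, n_bins=2)
print(math.ceil(n))  # 145 under these assumptions, near the reported 147
```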
2.5. Statistical and Data Analysis
Statistical analyses were conducted using IBM SPSS Statistics for Windows, Version 27.0 (IBM Corp., Armonk, NY, USA), with statistical significance established at p < 0.050. The Kolmogorov-Smirnov test was employed to assess data normality. Cronbach’s α was calculated to evaluate the internal consistency of the survey constructs. The Intraclass Correlation Coefficient (ICC) was employed to assess the reliability of measurements under a one-way random model, given the uniform style of survey administration, ensuring that any measurement error would be random rather than due to systematic differences in how measurements were taken. For effect sizes between two groups, Cohen’s d was utilized with Hedges’ correction, which adjusts for bias in the estimation of the standard deviation (SD) in small samples. To account for the non-normality of the scale variables, the effect size analysis was supplemented by point-biserial correlation coefficients computed as bivariate Pearson correlations, with the correlation coefficients (r) serving as surrogates for effect sizes. Nonparametric tests, including the Mann-Whitney U test for two independent samples and the Kruskal-Wallis test for more than two groups, were applied, given that the scale variables did not meet normality assumptions (p ≤ 0.001 using the Kolmogorov-Smirnov test). Additionally, Chi-square tests were used to explore associations between categorical variables.
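Two of the reported reliability and effect size computations can be illustrated with a minimal sketch (not the authors’ SPSS workflow) operating on a respondents-by-items matrix of numeric Likert scores:

```python
# Minimal sketch of two reported analyses; illustrative only (the study used SPSS).
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a respondents x items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def hedges_g(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d between two groups with Hedges' small-sample correction."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) +
                         (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))  # approximate correction factor

# Nonparametric two-group comparison for a non-normal construct score:
# stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
```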
Medical specialties were categorized based on the risk of job displacement by generative AI, factoring in the extent to which each specialty relies on procedural skills, personalized interactions, and automatable tasks; this categorization was agreed upon by the first, second, and senior authors based on the following criteria. High-risk specialties, such as Radiology, Pathology, and Dermatology, involve significant use of diagnostic imaging and pattern recognition that AI could replace. Middle-risk specialties, such as Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology, could see moderate impacts from AI but retain crucial human elements. Low-risk specialties, including Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery, involve complex decision-making and personalized care that are difficult to automate.
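In code terms, this categorization is a simple lookup table; the sketch below mirrors the grouping described in this subsection:

```python
# Specialty-to-risk lookup mirroring the categorization described above.
SPECIALTY_RISK = {
    **dict.fromkeys(["Radiology", "Pathology", "Dermatology"], "high"),
    **dict.fromkeys(["Internal Medicine", "Psychiatry", "Emergency Medicine",
                     "Obstetrics and Gynecology", "Urology",
                     "Anesthesiology"], "middle"),
    **dict.fromkeys(["Pediatrics", "General Surgery", "Forensic Medicine",
                     "Orthopedics", "Neurosurgery", "Ophthalmology",
                     "Plastic Surgery"], "low"),
}
```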
3. Results
3.1. General Features of Participating Medical Students
The final study sample comprised a total of 164 medical students with a mean age of 21.1±2.3 years, with a majority of male students (n = 88, 53.7%) and students at the pre-clinical level (n = 113, 68.9%, Table 1).
Slightly less than three-quarters of the study participants reported prior use of at least one genAI model, while slightly more than half reported a cumulative GPA in the very good or excellent categories (Table 1).
For the genAI models used, the most commonly reported model was ChatGPT (n = 105, 64.0%), followed by My AI Snapchat (n = 29, 17.7%), Gemini (n = 17, 10.4%), and Copilot (n = 13, 7.9%, Figure 1).
3.2. The Level of Anxiety towards genAI and Its Associated Determinants
Slightly over a third of the study sample reported being not anxious at all regarding the role of genAI models such as ChatGPT as future physicians (n = 56, 34.1%), with 68 students reporting being slightly anxious (41.5%), 36 somewhat anxious (22.0%), and only four extremely anxious (2.4%). The demographic data did not show any statistically significant differences in the participants’ level of anxiety regarding genAI (Table 2).
3.3. FAME Constructs Reliability
The Cronbach’s α values for the four FAME sub-scales were as follows: 0.874 for the Fear sub-scale, 0.880 for the Anxiety sub-scale, 0.724 for the Mistrust sub-scale, and 0.695 for the Ethics sub-scale. For the 12 items combined, the Cronbach’s α was 0.853, reflecting robust internal consistency.
In terms of the ICC, the Fear sub-scale showed high reliability, with an ICC of 0.678 for single measures and 0.863 for average measures. The Anxiety sub-scale also exhibited high reliability, with an ICC of 0.673 for single measures and 0.860 for average measures. The Mistrust sub-scale displayed moderate reliability at 0.405 for single measures, which increased to good reliability at 0.671 for average measures. Similarly, the Ethics sub-scale showed moderate reliability at 0.388 for single measures, improving to 0.656 for average measures, indicating enhanced reliability and consistency when the items were averaged. The ICC values for the FAME sub-scales are detailed in Figure 2.
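As a consistency note, the average-measures values above follow from the single-measures values via the Spearman-Brown relation, with k = 3 items per sub-scale:

\[ \mathrm{ICC}_{\mathrm{avg}} = \frac{k \cdot \mathrm{ICC}_{\mathrm{single}}}{1 + (k-1)\,\mathrm{ICC}_{\mathrm{single}}}, \qquad k = 3. \]

For example, for the Fear sub-scale, \( 3 \times 0.678 / (1 + 2 \times 0.678) \approx 0.863 \), matching the reported average-measures value.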
3.4. FAME Constructs Scores
For the four FAME constructs, the highest level of agreement by the participants was seen in the Mistrust construct (mean: 12.35±2.78), followed by the Ethics construct (mean: 10.86±2.90) and the Fear construct (mean: 9.49±3.53), while the lowest level of agreement was seen in the Anxiety construct (mean: 8.91±3.68, Figure 3). Statistically significant lower levels of agreement were seen among the participating medical students who were not anxious at all, compared with those who showed any level of anxiety as future physicians towards genAI, for three constructs, namely Fear, Anxiety, and Ethics (Figure 4).
3.5. Determinants of Anxiety towards genAI among the Participating Medical Students
Table 3 summarizes the determinants of Fear, Anxiety, Mistrust, and Ethics among the participating medical students towards genAI. No significant differences in Fear, Anxiety, Mistrust, or Ethics scores were found across sexes, academic levels (pre-clinical vs. clinical), or GPA categories; however, marginally significantly lower Anxiety scores (p = 0.082) were observed among students with higher academic achievement, as reflected in a higher GPA.
When analyzed by desired specialty, no significant differences emerged among students aspiring to low-, middle-, or high-risk specialties, indicating uniform perceptions across different desired fields. The number of genAI models used by the students also did not significantly influence Fear, Anxiety, Mistrust, or Ethics scores, suggesting a consistent perception regardless of the level of exposure to genAI tools.
Importantly, students who reported not being at all anxious about genAI models like ChatGPT had significantly lower Fear and Anxiety scores (p < 0.001 for both) and lower Ethics scores (p = 0.014) compared with students who expressed any level of anxiety towards genAI (Table 3). However, this statistically significant difference did not extend to the Mistrust sub-scale (p = 0.590).
In the analysis assessing the impact of anxiety towards generative AI models on the perceptions of fear, anxiety, and ethics, significant effects were observed, as indicated by substantial effect sizes calculated using Cohen’s d and Hedges’ g. Specifically, for the Fear construct, Cohen’s d yielded a point estimate of 2.332 with a 95% confidence interval (CI) of 2.035–2.627, indicating a very large effect size (Pearson r = 0.411, p < 0.001), while Hedges’ correction resulted in a point estimate of 2.327 (95% CI: 2.031–2.621). For the Anxiety construct, Cohen’s d provided a point estimate of 2.038 (95% CI: 1.768–2.306), and Hedges’ correction gave a point estimate of 2.033 (95% CI: 1.764–2.300; Pearson r = 0.319, p < 0.001). For the Mistrust construct, Cohen’s d provided a point estimate of 3.831 (95% CI: 3.387–4.272), and Hedges’ correction gave a point estimate of 3.822 (95% CI: 3.379–4.263; Pearson r = 0.058, p = 0.462). Lastly, for the Ethics construct, Cohen’s d revealed a point estimate of 3.243 (95% CI: 2.858–3.625), with Hedges’ correction closely aligning at a point estimate of 3.235 (95% CI: 2.852–3.617; Pearson r = 0.205, p = 0.008).
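For reference, Hedges’ correction rescales Cohen’s d by a small-sample factor; the standard approximation is

\[ g = J \cdot d, \qquad J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}, \]

which for the present sample (\( n_1 + n_2 = 164 \)) gives \( J \approx 0.995 \), consistent with the Hedges’ estimates above being slightly smaller than the corresponding Cohen’s d values (small residual differences may reflect rounding or software-specific formulas).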
4. Discussion
The analysis of fear, anxiety, mistrust, and ethical concerns regarding genAI models among the medical students in this study revealed useful insights into the psychological and ethical dimensions that may influence the adoption of emerging technologies such as genAI models in medical education.
Our study revealed significant concerns among medical students in Jordan regarding the implications of genAI models for their future careers. Surveying 164 students, we found that a substantial majority, over 72%, had already been using genAI models, particularly ChatGPT. This high usage rate is consistent with global trends suggesting that reliance on genAI models for academic support is becoming increasingly normalized. For example, a multi-national study involving participants from Brazil, India, Japan, the UK, and the USA highlighted that most students use ChatGPT for assistance with assignments and expect their peers to do the same, signaling a shift towards widespread acceptance of genAI tools in academic settings [60].
Moreover, a comprehensive study across several Arab countries, including Iraq, Kuwait, Egypt, Lebanon, and Jordan, engaged 2,240 participants and revealed that nearly half were aware of ChatGPT, and over half of them had used it before the study [61]. The favorable disposition towards genAI, exemplified by ChatGPT, was influenced by its ease of use, positive technological attitudes, social influence, perceived usefulness, and minimal perceived risks and anxiety [61]. Similarly, a recent study from the United Arab Emirates (UAE) supports these findings, with the majority of students reporting routine use of ChatGPT, driven by its utility, ease of use, and the positive influence of social and cognitive factors [62]. Furthermore, a recent study among medical students in the U.S. showed that almost half of the surveyed students reported using ChatGPT in their medical studies [63]. Taken together, our findings along with those of recent studies collectively highlight a broader acceptance and integration of genAI models among university students, shaped by genAI models’ utility, ease of integration into daily tasks, and the broader positive social perception of technological engagement [64,65,66,67].
In this study, approximately two-thirds of medical students reported experiencing at least a mild level of anxiety about genAI. This anxiety was notably pervasive across demographics, including sex, academic level, and GPA, reflecting a widespread apprehension about genAI among the participants. This broad concern is understandable given the levels of apprehension about genAI observed among university students globally. For example, a recent study conducted in Hong Kong involving 399 students across six universities and ten faculties revealed significant concerns [67]. Students feared that genAI could undermine the value of their university education and impede the development of essential skills such as teamwork, problem-solving, and leadership [67]. Additionally, a study among medical students in the UAE reported that a majority of participants worried that AI would reduce trust in physicians, alongside worries regarding the ethical impact of AI in healthcare [21]. Moreover, a study among health students in Jordan based on the technology acceptance model (TAM) revealed an overall anxious attitude towards ChatGPT among the participants [51]. This highlights the need for educational strategies that effectively integrate genAI into medical curricula while addressing the underlying, justifiable anxiety among medical students.
Despite the notable anxiety reported in this study, the findings revealed even more pronounced levels of mistrust and ethical concerns among the participating medical students. Interestingly, while a related study among college students in Japan found a significant focus on unemployment as a major ethical issue with AI [68], our findings suggest that concerns extend beyond personal job security to include broader ethical and trust issues associated with genAI applications in healthcare.
Of note, the study findings elucidated the significant psychological and ethical impacts of anxiety toward genAI models on medical students, revealing profound concerns across fear, anxiety, mistrust, and ethical considerations. The strong link to the Fear construct illustrated how apprehensions about genAI correlated with both general anxiety and specific fears concerning the future of medical practice and job security, a common trend observed with the introduction of new technologies [69,70,71,72]. These anxieties are likely fueled by uncertainties over how genAI might transform traditional medical roles, potentially replacing tasks currently undertaken by humans and thus sparking fears of job displacement and the diminution of human-centric skills in healthcare [73].
The fear of job displacement by genAI is not unique to the healthcare sector, as it resonates across various other occupational sectors. For example, studies in fields ranging from accounting to manufacturing have identified a correlation between the rise of AI and increased job displacement concerns, with policy recommendations often advocating for talent retention, investment in upskilling programs, and support mechanisms for those adversely affected by AI adoption [74]. In Germany, a manufacturing sector survey indicated that employee fears regarding AI are among the top barriers to its adoption, with non-managerial staff particularly expressing apprehension about the implications of AI for job security and workplace dynamics [75]. A study from Turkey revealed that while teacher candidates across various disciplines, ages, and genders showed no apprehension about learning AI, they did express significant anxiety about its potential effects on employment and social dynamics [76].
In this study, concerns among medical students about job displacement, while significant, were overshadowed by issues of ethics and mistrust. These findings reflected apprehensions about the ethical and empathetic dimensions of care, areas where AI is often perceived as lacking, as noted by Farhud and Zokaei [36], despite recent evidence contradicting this viewpoint, including the promising potential of clinically oriented genAI models [77,78]. The pronounced levels of mistrust and ethical concerns in this study may indicate that medical students not only fear potential job displacement but also doubt the capacity of genAI to fulfill crucial humanistic aspects of healthcare, such as empathy and ethical judgment [79]. Our findings support this perspective, with the Mistrust construct receiving the highest level of agreement among participants. This skepticism is deeply rooted in doubts about genAI’s ability to effectively handle the complex aspects of empathy and ethical decision-making, as perceived by the medical students involved in our study.
4.1. Recommendations Based on the Study Findings
The findings of this study highlighted the critical need for evolving medical curricula that incorporate comprehensive AI coverage, including genAI training [80,81,82]. This modification is recommended to illuminate the technical capabilities of AI and to clarify its role in supplementing rather than replacing human physicians [3,83]. This involves emphasizing the potential of genAI to enhance diagnostic precision, personalize treatment plans, and improve administrative efficiency, which has been shown thoroughly in recent literature [3,6,11,84,85].
To address the considerable ethical concerns and mistrust regarding genAI among medical students, it is important to encourage ethical discussions on AI usage, data privacy, and patient-centered care within the medical training framework [86,87]. Incorporating role-playing, case studies, and ethical debates will help students train competently for the intricate moral issues they will encounter in their professional lives [40]. An important study by D’Souza et al. outlined twelve critical tips for addressing the major ethical concerns associated with the use of AI in medical education [88].
Moreover, as AI automates many technical tasks, enhancing uniquely human soft skills such as emotional intelligence, communication, leadership, and adaptability becomes crucial [89]. By promoting AI as a collaborative tool in healthcare rather than a competitor, and by highlighting examples where AI and human physicians synergistically improve patient outcomes, AI can come to be viewed as an indispensable partner in healthcare [90].
Additionally, advocating for policies that protect healthcare workers’ job security in the wake of AI integration is important [22,73,91]. Clear guidelines on AI’s role in healthcare will help ensure that it supports rather than replaces medical professionals [92,93]. Ongoing research into AI’s impacts, coupled with open dialogues among AI developers, healthcare professionals, educators, and policymakers, will help adapt strategies to ensure AI enhances rather than disrupts healthcare services [1,94].
This study advocates for medical curricula that thoroughly prepare future healthcare providers to integrate AI into their practices effectively, ensuring they deliver compassionate, competent, and ethically sound healthcare [95].
4.2. Study Limitations
The interpretation of this study’s results must be approached with caution due to several limitations. First, the cross-sectional survey design prevented establishing causality between medical students’ perceptions of genAI and the other study variables. Longitudinal studies are needed to track changes in students’ perceptions of AI as genAI technology progresses rapidly.
Second, the study relied on convenience and snowball sampling approaches for swift data collection, which are expected to introduce notable sampling bias. These methods depend heavily on existing social networks and participants’ willingness to engage, potentially misrepresenting the broader medical student population in Jordan and beyond. Consequently, the results may not be generalizable to all medical students or to other demographic groups, given the non-random sampling used and the inherent selection bias, including a possible over-representation of students more familiar with genAI models.
Third, using social media and instant messaging platforms for student recruitment likely biased the study sample toward students who hold particular views on technology, which may not reflect the perspectives of the broader medical student population. Distributing the survey solely in Arabic further limited the diversity of responses, potentially impacting the depth of insights into how students’ perceptions vary across different cultural or educational backgrounds.
Finally, while the literature review for constructing the survey instrument was thorough, the selection of sources and subsequent survey questions may have been influenced by our subjective biases, shaped by our backgrounds and personal experiences in healthcare education. This subjective approach might have resulted in overlooking other relevant themes or emerging trends in genAI concerns that went unrecognized. Thus, further testing and validation of the survey instrument used in this study are strongly recommended in future studies.
Author Contributions
Conceptualization, M.S.; methodology, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; software, M.S.; validation, M.S. and M.B.; formal analysis, M.S.; investigation, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; resources, M.S.; data curation, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; writing—original draft preparation, M.S.; writing—review and editing, M.S., K.A.-M., Y.A., O.A., A.A.-S., Z.A., A.N.A. and M.B.; visualization, M.S.; supervision, M.S.; project administration, M.S. All authors have read and agreed to the published version of the manuscript.