
Assessing the Guidelines on the Use of Generative Artificial Intelligence Tools in Universities: A Survey of the World’s Top 50 Universities


A peer-reviewed article of this preprint also exists.

Submitted: 02 December 2024
Posted: 03 December 2024

Abstract
The widespread adoption of generative Artificial Intelligence (GenAI) tools in higher education has necessitated the development of appropriate and ethical usage guidelines. This study aims to explore and assess publicly available guidelines covering the use of GenAI tools in universities, following a predefined checklist. We searched and downloaded publicly accessible guidelines on the use of GenAI tools from the websites of the top 50 universities globally, according to the 2025 QS university rankings. From the literature on GenAI use guidelines, we created a 24-item checklist, which was then reviewed by a panel of experts. This checklist was used to assess the characteristics of the retrieved university guidelines. Out of the 50 university websites explored, guidelines were publicly accessible on the sites of 41 institutions. All these guidelines allowed for the use of GenAI tools in academic settings provided that specific instructions detailed in the guidelines were followed. These instructions encompassed securing instructor consent before utilization, identifying appropriate and inappropriate instances for deployment, employing suitable strategies in classroom settings and assessment, appropriately integrating results, acknowledging and crediting GenAI tools, and adhering to data privacy and security measures. However, our study found that only a small number of the retrieved guidelines offered instructions on the AI algorithm (understanding how it works), the documentation of prompts and outputs, AI detection tools, and mechanisms for reporting misconduct. Higher education institutions should develop comprehensive guidelines and policies for the responsible use of GenAI tools. These guidelines must be frequently updated to stay in line with the fast-paced evolution of AI technologies and their applications within the academic sphere.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Significant advancements have been made in the development of large language models (LLMs). These models belong to a type of Artificial Intelligence (AI) known as Generative Artificial Intelligence (GenAI) and are trained on extensive datasets [1]. This training equips LLMs with the ability to generate new multimodal content, including text, images, program code, audio, and video, in response to users’ prompts [2]. Following the release of ChatGPT (OpenAI’s chatbot built on its GPT family of LLMs) in November 2022, students, academics, and support staff at higher education institutions began using it for learning, teaching, and research. However, the performance of GenAI tools, in terms of the accuracy, currency, and reliability of the content they generate in response to various prompts, remains unpredictable [3], and these tools are prone to hallucinations (whereby LLMs generate plausible-sounding but factually incorrect information) [4].
GenAI tools have rapidly become among the most widespread and influential instruments in education and are reshaping the learning landscape in unprecedented ways [5]. Their ability to generate content, assist in problem solving, and personalize learning experiences offers an excellent opportunity for self-directed learning and for improving learners’ knowledge. However, the adoption of GenAI in higher education raises significant concerns regarding academic integrity and the protection of personal data [6].
Consequently, the debate on the ethical implications of using GenAI tools in academic contexts has intensified. In particular, it is crucial to address issues related to plagiarism, unauthorized or undeclared assistance, and the potential mishandling of sensitive information [7]. Initially, some educational institutions contemplated banning GenAI tools outright, but they soon realized that a ban alone would not resolve the challenges these tools pose [8]. Educational institutions have therefore recognized the importance of establishing clear guidelines on the responsible use of GenAI tools for faculty and students. Educators can effectively leverage the advantages of AI technology by adopting a balanced strategy that maintains academic excellence and guarantees data privacy protection [9].
GenAI has very quickly become prevalent in academic circles. Its ease of access and user-friendly nature, particularly its ability to generate responses similar to those of a human, have made it increasingly attractive to both students and faculty [10]. However, its widespread use has led to significant debates regarding how to regulate it in scientific and academic writing [11]. The use of GenAI for producing content and aiding research raises questions about the authorship and copyright ownership of generated content. These factors highlight the critical need for thorough guidelines on its correct use [12]. Academic publishers and higher education institutions worldwide are therefore doing their utmost to develop guidelines that govern the ethical and responsible integration of GenAI into scholarly endeavours [13].
This study aimed to identify and assess the guidelines covering the use of GenAI tools in universities following a pre-determined checklist. Scholarly publishers such as Elsevier, Springer Nature, and Sage [14,15,16] have also released their own guidelines for authors, reviewers, and editors, but this study only focuses on guidelines produced by world-leading universities for their students and faculty.

1.1. Research Questions

The following research questions guided the study:
1. What general information is included in the universities’ guidelines on the use of GenAI tools?
2. What instructions are provided in the universities’ guidelines covering the use of GenAI tools in academic activities?
3. What specific instructions regarding ethical and legal issues are offered in the universities’ guidelines on the use of GenAI tools?
The remainder of this study is organized into the following sections: Section 2 reviews the existing literature, Section 3 outlines the study methodology, Section 4 presents the results, Section 5 discusses the study findings, Section 6 addresses the limitations of the study, and Section 7 concludes with recommendations for policy formulation.

2. Literature Review

The UK’s Russell Group [17] developed a set of principles to guide the use of GenAI tools in higher education institutions. These principles include enhancing AI literacy, equipping staff with the necessary skills to support students, incorporating GenAI in teaching and assessment methods, and fostering collaboration and exchanging best practices. These guidelines were set out to support responsible, ethical, and appropriate use of GenAI in education and learning. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has established detailed guidelines for integrating GenAI in education in an ethical way. The UNESCO guidelines emphasize the protection of data privacy and aim to improve AI competencies of all stakeholders. UNESCO also provided a framework to support formulating policies, rules, and regulations regarding applying GenAI tools in teaching, learning, and research [11].
Chan [3] surveyed teachers, students, and support staff across universities in Hong Kong to develop a proposal for an “AI Ecological Education Policy Framework.” This framework aims to support the adoption of AI in teaching and learning across academic institutions. It includes three main dimensions: a pedagogical dimension focusing on integrating AI into teaching, learning, and assessment; a governance dimension dealing with important issues such as privacy, security, and accountability; and an operational dimension that addresses matters related to training, support, and monitoring. Moorhouse, Yeo and Wan [18] reviewed universities’ guidelines on the use of GenAI tools in assessment. They found that, among the top 50 universities in the world as ranked by the Times Higher Education (THE) World University Ranking for 2023, fewer than half had developed and uploaded AI-related guidelines to their websites. Their research revealed that developing such policies is the ‘need of the hour’ to ensure academic integrity, provide advice on assessment design, and prepare educators to communicate effectively with students about these new tools. They concluded that instructors also need to be trained in the use of GenAI tools in assessments.
Farrelly and Baker [1] reported that the incidence of academic integrity violations involving GenAI tools is higher among minority groups and international students whose first language is not English than among domestic students. Consequently, they recommended developing frameworks for AI literacy and usage that empower students to harness the full potential of GenAI tools in education correctly and ethically. Dogan and Medvidović [6] likewise emphasized integrating the ethical and responsible use of GenAI into the curriculum, along with AI-focused training for faculty members and liaison between faculty and AI developers to create GenAI tools tailored to educational needs.
Ghimire and Edwards [7] reported that US higher education institutions were more inclined to work on AI-related policies than high schools. They added that most educational institutions needed more specialized guidelines for the appropriate and responsible use of GenAI, as even those surveyed institutions with established policies often did not adequately address student privacy, ethics, and algorithm transparency. McDonald, Johri, Ali and Hingle [19] examined the GenAI-related guidelines developed by 116 US universities. They found that most (63%) of these universities encourage the use of GenAI in learning and teaching. The guidelines focused on creating content, providing a sample GenAI syllabus, ethical and privacy issues, and the use of GenAI detection tools.
Dabis and Csáki [13] analysed policy documents and guidelines, collected between May and July 2023, from 30 leading universities ranked in the top 500 of the Shanghai list. The analysis highlighted key ethical considerations such as accountability, human oversight, and transparency. The study recommended that universities tackle the emerging challenges of generative AI by allowing instructors to set and communicate guidelines for acceptable AI use in their courses. Driessens and Pischetola [8] assessed the GenAI-related policies of eight universities in Denmark. They found that the policies at Danish universities mostly covered the legal and ethical use of GenAI tools and their use in assessment. However, these policies did not address several important aspects, such as the political, economic, environmental, and long-term contexts and effects of GenAI on education.
Cacho [20] proposed a framework for incorporating GenAI into teaching and learning at higher education institutions. The framework consists of six sections: (i) a rationale for adopting GenAI in education; (ii) a position statement emphasizing a balanced strategy for integrating GenAI with core educational values; (iii) operational definitions of critical terms; (iv) guidelines for teachers; (v) guidelines for students; and (vi) guideposts for promoting and developing procedures for the legal and ethical use of GenAI. Cacho suggested that academic institutions adapt these guidelines to their context, environment, and needs. Yusuf, Pervin and Román-González [9] surveyed 1,217 students and teachers from 76 countries. More than 80% of their participants were aware of the use of generative AI tools in education, and most supported the development of AI-related regulatory policies to integrate GenAI into education systems. The study concluded by recommending that ethical considerations and cultural diversity be taken into account when implementing such policies. Smith et al. [21] proposed a framework to support and promote the responsible use of GenAI in research, drawing on insights from two universities in Australia and the existing literature. This framework emphasizes the importance of understanding the context or circumstances that led to developing the position statement, focusing on the implementation process, and ensuring continuous review and improvement.
The reviewed literature, summarized in Table 1, underscores significant progress in understanding the implications of GenAI tools in higher education, particularly regarding the establishment of guidelines, frameworks, and policies for their responsible use. Despite these advancements, however, research on how universities are creating and implementing formal GenAI guidelines remains limited. Moreover, existing studies seldom analyse whether these guidelines comprehensively address all issues related to GenAI use. This study seeks to fill this gap by developing a comprehensive assessment checklist and then using it to examine the GenAI guidelines of the world’s leading universities, with the aim of providing insights into current practices and identifying areas for improvement.

3. Methodology

We developed a draft 24-item assessment checklist guided by existing literature on guidelines for using generative AI tools in higher education [11,13,17,18,19]. The draft checklist was then shared with five faculty members who hold PhD degrees and are actively engaged in relevant research activities. These experts recommended several modifications to the checklist. Following their suggestions, the checklist was revised. The updated version was then pilot-tested using guidelines from universities other than the 50 institutions included in our study, which led to further minor adjustments before its finalization.
We identified the top 50 universities in the Quacquarelli Symonds (QS) World University Rankings 2025 [22] and located their websites. We then searched all 50 university websites for publicly available guidelines on the use of GenAI tools during July and August 2024.
The study employed a quantitative approach to examine the contents of the retrieved GenAI guidelines against the 24 criteria in our checklist. These criteria fall into three categories: (i) a general information category of seven items, including the guideline issuing authority, the release or update date, and the objectives and scope, among others; (ii) a category of ten items covering instructions on the use of GenAI tools, such as recognizing situations where the use of AI tools is inappropriate, providing details about the GenAI tool(s) employed and the prompts used, and documenting the GenAI output(s) obtained; and (iii) a category of seven items dedicated to ethical and legal issues, covering maintaining academic integrity and dealing with misconduct, safeguarding data privacy and security, and properly referencing and citing AI-generated outputs. Appendix A provides a full list of all items under each of the three categories. Figure 1 presents a flowchart illustrating the methodology employed in this study.
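For transparency, the tabulations reported in Section 4 amount to simple frequency counts of yes/no codes across the retrieved guidelines. The sketch below is a minimal illustration of such a tabulation, assuming the completed checklists are stored in a hypothetical CSV file with one row per university and one yes/no column per checklist item; the file name and column labels are ours and are not part of the study materials.

```python
import csv
from collections import Counter

# Hypothetical file: one row per university with retrieved guidelines, columns
# "item_1" ... "item_24" coded "yes" or "no" against the checklist in Appendix A.
with open("genai_guideline_checklist.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

n = len(rows)  # universities with retrieved guidelines (41 in this study)
items = [k for k in rows[0] if k.startswith("item_")]

# Count "yes" responses per item and print frequencies and percentages,
# ranked in descending order as in Tables 2-4.
counts = Counter({item: sum(r[item].strip().lower() == "yes" for r in rows) for item in items})
for item, yes in counts.most_common():
    print(f"{item}: yes {yes} ({yes / n:.0%}), no {n - yes} ({(n - yes) / n:.0%})")
```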

4. Results

Of the 50 university websites searched, 41 (82%) had publicly available guidelines for using GenAI tools (Appendix B). However, no guidelines were found on the websites of nine universities. Among those nine, two indicated they were following UNESCO guidelines for policymakers on GenAI and education and were developing their own guidelines for integrating GenAI effectively within their educational system.

4.1. General Information Category

This category includes seven information items to answer our first research question concerning the provision of general information in the guidelines. Table 2 lists the presence (yes/no) of the seven information items from the general information category (Appendix A) in the retrieved guidelines, along with their frequency and percentage distribution in ranked order.
Table 2 reveals that most universities (37 out of 41, or 90%) included an introduction to GenAI tools in their guidelines. The guideline issuing authority could be identified in the guidelines of 36 universities. In most cases (24 universities), academic bodies or centres focusing on teaching, learning, and innovation within the university developed the guidelines. In six universities, the guidelines were issued by information systems/technology or communication centres. The university library took on this role at two universities through their library guides (LibGuides), while the President, the Deputy Vice-Chancellor, the Office of the Provost, and a General AI Advisory Committee were each responsible at one university.
Thirty-four universities (83%) named specific GenAI tools as examples in their guidelines. ChatGPT was mentioned by 34 universities, followed by Google Gemini (formerly Google Bard), noted by 19 institutions; Microsoft Copilot was highlighted by 15; Microsoft Bing by 10; and DALL-E (a text-to-image generation model) by seven universities. A further 27 different GenAI tools were each mentioned by fewer than five universities.
Thirty-one (76%) universities provided the objectives and scope of their guidelines, while 24 institutions (59%) included contact information, and 22 (54%) mentioned the release or update date of their guidelines. Of these, 12 guidelines were issued in 2023 and 10 in 2024, with the earliest released in February 2023 and the most recent in May 2024. Notably, only eight universities (20%) provided information on how AI algorithms function.

4.2. Instructions on the Use of GenAI Tools

This category encompasses ten information items to answer our second research question about offering specific instructions on the proper use of GenAI tools in the guidelines. Table 3 lists the presence (yes/no) of the ten information items from the ‘instructions on the use of GenAI tools’ category (Appendix A) in the retrieved guidelines, along with their frequency and percentage distribution in ranked order.
All 41 universities clearly stated that using GenAI tools is allowed. A substantial majority (85%) also pointed out associated limitations and situations in which employing generative AI would not be suitable. Moreover, 31 of these institutions (76%) mandate obtaining the instructor’s approval before using AI tools in assignments. Twenty-nine (71%) of the retrieved university guidelines additionally pinpoint specific domains where GenAI tools could be advantageously used and offer advice for their successful incorporation into classrooms and assessments.
Additionally, 63% of these guidelines included specifics on how to properly credit the use of a GenAI tool by including its description, name, version, and the date it was used. Over half of the guidelines offered advice on how to use and modify the output from Generative AI tools. Approximately one-third of the guidelines outlined the process for documenting the reason for using a GenAI tool, the prompts that were used, and the responses generated by the tool.

4.3. Instructions Regarding Ethical and Legal Issues

This category consists of seven information items to answer our third research question on the coverage of legal and ethical issues in the guidelines. Table 4 presents a detailed list of the seven items covering ethical and legal considerations (Appendix A) in the retrieved guidelines. For each item, the table provides the item’s frequency and percentage distribution in ranked order.
The analysis presented in Table 4 shows that instructions on evaluating and verifying the outputs of generative AI, maintaining academic integrity and avoiding misconduct, and ensuring data privacy and security are the most commonly addressed topics in the examined university guidelines, each covered by between 35 (85%) and 38 (93%) of the universities in our sample. Issues such as compliance with legal standards and the proper referencing and citation of generative AI outputs were also discussed, but to a slightly lesser extent, being covered by 24 and 23 universities, respectively. On the other hand, the use of AI detection tools and mechanisms for reporting academic misconduct received less attention, being included in the guidelines of only 17 and 14 universities, respectively.

5. Discussion

Since the initial public launch of ChatGPT in November 2022, many higher education institutions worldwide have begun releasing guidance on the use of GenAI tools in education settings [6] (Figure 2). The easy access to GenAI tools, especially ChatGPT, and increasing student use of these tools, e.g., in US medical schools [23], triggered the need to streamline their use in a way that does not compromise critical thinking [24] and led to the establishment of guidelines for their appropriate and legal use [3].
Our study found that, of the 50 top-ranked universities according to the QS ranking, 41 (82%) had developed GenAI-related guidelines for faculty members, students, and support staff and made them available on their public websites. An earlier 2023 study found that fewer than half (23) of the top 50 universities in the Times Higher Education (THE) ranking had developed such guidelines [18]. This year-on-year growth in the number of university guidelines shows that universities are acutely aware of the need for AI-specific guidance and are responding by rapidly releasing their own versions. However, all higher education stakeholders need to stay abreast of advancements in AI technologies and regularly revisit and update their guidelines accordingly [20].

5.1. First Research Question

Addressing our first research question regarding the provision of general information in the guidelines, our study found that most universities started their guidelines by introducing GenAI, providing information about the guideline issuing authority, and including examples of popular GenAI tools. ChatGPT was the one example that almost all guidelines in our sample mentioned. In this regard, our findings align with those of Ghimire and Edwards [7] and Yusuf, Pervin and Román-González [9], who also identified ChatGPT as the most popular GenAI tool. It should be noted, however, that the popularity of specific GenAI tools varies across disciplines, universities, and countries [28]. Most guidelines also included a section covering their objectives and scope. While the guidelines reviewed in our study provided a comprehensive overview of GenAI, they gave notably little attention to how GenAI algorithms actually work. Grasping the basic concepts behind GenAI algorithms is crucial to fully understanding their limitations, such as the biases and hallucinations one can encounter while using these tools. This understanding is fundamental for anyone looking to use these tools properly, particularly in tasks involving data analysis and decision-making. Recognizing and addressing these limitations is essential for developing more equitable and effective AI systems [5].

5.2. Second Research Question

Addressing our second research question about offering specific instructions on the proper use of GenAI tools in the guidelines, our study revealed that all university guidelines permit the use of GenAI in education settings, as long as the provided conditions and procedures for their appropriate and responsible use are followed. Most universities also mentioned the limitations of GenAI tools or those instances deemed unsuitable for using GenAI in order to protect students’ personal data and safety and avoid academic misconduct. The ability of GenAI tools to produce accurate, up-to-date, unbiased, and innovative content while safeguarding confidentiality is still elusive [5]. Users must exercise caution and verify outputs when dealing with these tools, given their often inconsistent, unpredictable, and fluctuating (stochastic) performance and their inherent proneness to bias and hallucinations.
Most universities also require students to seek the permission of instructors or supervisors before using GenAI tools in assignments. About two-thirds of the guidelines provided details about the domains where GenAI tools can be helpful. These tools can assist in searching and reviewing the literature, citation management, summarizing, analysing, and writing manuscripts [5], but only with proper human oversight and verification of all generated outputs. Furthermore, just over two-thirds of the guidelines covered strategies for using GenAI in classroom settings and assessments. GenAI tools can be a valuable addition to conventional teaching strategies, aiding teachers in incorporating AI into their teaching methodologies and evaluation processes. Educators can deploy GenAI tools to assist in planning curricula, creating course content, and conducting assessments. However, it is imperative that universities provide adequate training and support in AI literacy to all their academic stakeholders to ensure that GenAI is always used responsibly, in a way that maintains and fosters academic honesty and integrity [20].
Approximately two-thirds of the surveyed universities also provided instructions on how to declare and credit the use of GenAI in assignments by including details such as the tool’s description, name, version, and the date it was used. More than half of the universities in our sample provided instructions on using and adapting GenAI outputs. Previous studies [3,8,9,13,19,20] also recommended that students follow the guidance of their teachers or ask for further explanation when instructions are unclear. Additionally, students should acknowledge or reference any AI-generated content used in their academic projects and verify information derived from AI against primary sources, citing the original source whenever possible [29,30,31].
However, only around one-third of the guidelines in our survey discussed the requirements to have a clear purpose and strategy for using GenAI tools and to document the details of the prompts submitted to these tools as well as the corresponding outputs generated. Ardito [29] argues that documenting and reporting every interaction with GenAI tools, including prompts and outputs, can, in some scenarios, prove burdensome and unrealistic, and may deter students from utilizing GenAI tools to their full potential. However, the current recommended best practice when conducting research with LLMs in fields such as medicine is to fully document all interactions (perhaps as an appendix at the end of a paper or assignment if these interactions are too long to report fully in the main text) in order to improve transparency and reproducibility, and to address issues such as ‘prompt brittleness’ (slight modifications in prompts leading to significantly different outputs) and LLM stochasticity (the generation of different responses when prompted repeatedly with exactly the same prompt) [31].
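As an illustration of how such documentation might be kept manageable, the short sketch below logs each GenAI interaction (tool, version, date, purpose, prompt, and output) to a JSON Lines file that a student or researcher could later attach as an appendix. This is our own hypothetical example, not a procedure prescribed by any of the reviewed guidelines; the helper name and file path are invented for illustration.

```python
import json
from datetime import date
from pathlib import Path

LOG_FILE = Path("genai_interactions.jsonl")  # hypothetical log location


def log_interaction(tool: str, version: str, purpose: str, prompt: str, output: str) -> None:
    """Append one GenAI interaction record for later disclosure in an appendix."""
    record = {
        "date": date.today().isoformat(),
        "tool": tool,          # e.g., "ChatGPT" (checklist item 12)
        "version": version,    # model/version used (checklist item 12)
        "purpose": purpose,    # why the tool was used (checklist item 13)
        "prompt": prompt,      # exact prompt submitted (checklist item 14)
        "output": output,      # response obtained (checklist item 15)
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")


# Example usage with hypothetical values:
log_interaction(
    tool="ChatGPT",
    version="GPT-4o",
    purpose="Summarize background literature on AI text detectors",
    prompt="Summarize the main limitations of AI-generated-text detectors.",
    output="(full model response pasted here)",
)
```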

5.3. Third Research Question

To address our third research question on the coverage of legal and ethical issues in the guidelines, the vast majority of surveyed university guidelines in our study mentioned the need for critical evaluation and verification of information generated by GenAI tools for accuracy, including completeness and currency. Most guidelines also offered information about academic integrity, misconduct, and data privacy and security. More than half of the guidelines provided legal compliance instructions on using and citing GenAI tools in different contexts. The findings of this study are consistent with the outcomes of previous ones [9,21,32], revealing that the largest segment of the surveyed guideline documents is typically dedicated to the ethical and legal use aspects of GenAI tools. This shows that the primary emphasis of all AI-related guidelines is on upholding rigorous and robust academic criteria while protecting confidential data in all interactions with GenAI tools [13].
Moreover, our findings revealed that fewer than half of the university guidelines in our sample provided information on using AI detection tools to identify AI-generated text in academic works. A few guidelines mentioned the capability of the Turnitin plagiarism detection software to detect AI-generated text. However, the reliability of AI detection tools is doubtful owing to the significant chances of false positive and false negative results [2,29]. To date, AI detection software remains far from ideal, with high false positive rates leading to false accusations of misconduct [33,34]. Even Google’s sophisticated method for watermarking Gemini AI-generated text is not without substantial limitations [35].
Furthermore, we found that only around one-third of the guidelines in our sample provided information on how to report violations of academic integrity. While this indicates some emphasis on enforcing the ethical and legal use of GenAI tools in academic settings, there appears to be less emphasis on developing straightforward strategies for identifying undeclared AI-generated content in academic outputs, perhaps owing to the sheer difficulty of this task [36]. Ardito [29] argues that embracing GenAI in education requires reassessing our conventional strategies and mechanisms for ensuring academic integrity, and recommends developing a new, robust evaluation approach that promotes the application of AI in education and fosters creativity and innovation among learners.

5.4. More Than Just Guidelines: A Complete Curriculum Rethink Will Become Necessary

The current guidelines for using GenAI in educational settings work more like a ‘quick patch’ on top of existing curricula. As GenAI and other AI approaches develop further in the coming years, they will undoubtedly reshape, if not disrupt, higher education, and we should begin rethinking the purposes of education and curriculum design now. In medicine, for example, the role of clinicians is gradually shifting from being repositories of medical knowledge to being evaluators of AI-generated information in clinical settings. This shift demands enhanced critical thinking and judgment skills and requires medical curricula to evolve accordingly, incorporating fitting AI competencies to prepare future clinicians to work effectively alongside AI [30,37]. In computer science, GenAI’s ability to generate programming code is forcing a change in how coding is taught, with professors shifting away from teaching syntax as such and emphasizing higher-level skills such as debugging and ensuring that AI-generated code is safe and secure [38].
Nevertheless, it is hoped our study will prove helpful to higher education institutions and policymakers looking to develop new guidelines or enhance existing ones on the use of GenAI tools in learning and teaching.
Our study highlights the fact that current (fall 2024) university guidelines on using GenAI tools, while addressing ethical and responsible usage, fall short of meeting the broader educational needs associated with these tools. Major issues such as understanding AI algorithms and their limitations; academic integrity compliance; detection, reporting, and investigating academic integrity breaches; AI literacy; faculty development; and equity are often neglected or not comprehensively covered. Improving these guidelines alone, whilst important, will not be enough in the long run; completely rethinking the curriculum is becoming necessary. GenAI is challenging traditional teaching and assessment methods, and a fresh focus on critical thinking, creativity, and adaptability is urgently required. Thus, a comprehensive approach combining regularly updated guidelines and a curriculum overhaul is essential to prepare students and faculty for a future where AI plays a vital role in knowledge creation and application.

6. Study Limitations

Universities update their guidelines quite frequently in response to rapidly evolving AI technologies. Our study is based on the versions of the guidelines retrieved from the surveyed universities’ websites at the time we visited them; any changes made to these guidelines after 30 August 2024 were not included. Moreover, this study was limited to publicly accessible guidelines from the universities we surveyed at that time. We could not find public guidelines for nine universities when we visited their websites, but it is possible that some (or all) of these universities had relevant guidelines posted exclusively on their campus intranets or sent to students and staff via email. Our assessment of the retrieved guidelines was also limited to the 24 predefined elements in our checklist instrument (Appendix A).

7. Conclusion

Using GenAI tools in teaching, learning, academic writing, and associated administrative duties can be helpful and time-saving, but not without potential problems if not properly deployed. Universities worldwide are recognizing this fact and releasing their own guidelines to ensure the responsible and ethical use of GenAI tools in education. All the university guidelines surveyed in this study permitted the use of GenAI tools in academic settings but underlined the importance of adequately addressing the associated data security and privacy issues, as well as tackling the ethical and legal implications of such uses. Moreover, the guidelines highlighted existing concerns over the reliability, neutrality, and currency of the information generated by LLMs. The importance of acknowledging the use of GenAI tools in academic works was also stressed. Our study recommends that every institution of higher education should establish its own detailed guidelines and policies for the ethical use of GenAI tools. The checklist created for this study may be useful in developing new guidelines or enhancing existing ones for the use of GenAI tools in universities. These guidelines must be frequently updated to stay in line with the fast-paced evolution of AI technologies and their application within the academic sphere. Furthermore, it is imperative for universities to enhance AI literacy and knowledge among their students and staff by advancing from merely providing guidelines to incorporating AI into curricula across all educational levels.

Author Contributions

MU conceived, designed, ran the study, conducted the literature review, analysed findings, and wrote the paper; SBN and MNKB provided critical input throughout and contributed to background literature review, interpretation of findings, and manuscript editing. All authors have read and approved the final version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable. The study did not involve any human subjects or data about specific individuals.

Informed Consent Statement

Not applicable. The study did not involve any human subjects or data about specific individuals.

Data Availability Statement

The core data supporting the findings of this study are available within the article; further details can be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Instrument for Assessing the Guidelines on the Use of Generative Artificial Intelligence Tools in Universities

Name of the university: _______________________________________________
Categories and Checklist
  A. General Information
S. No. Item Yes No
1 Guidelines issuer authority
2 Release date
3 Objectives and scope of the guidelines
4 Introduction to GenAI tools
5 How does an AI algorithm work?
6 Examples of GenAI tools
7 Contact information for guidance
  B. Instructions on the Use of GenAI
8 GenAI tools usage permitted
9 Domains for GenAI tools utilization
10 Instances unsuitable for GenAI tools usage (limitations)
11 Instructor approval for GenAI utilization
12 Details of GenAI tool employed (description/name/version/date)
13 Purpose of utilizing GenAI tools
14 Details of the provided prompts to the GenAI tool
15 Documentation of GenAI tool outputs
16 Utilization and adaptation of GenAI output
17 Strategies for use in classrooms and assessments
  C. Instructions Regarding Ethical and Legal Issues
18 Data privacy and security
19 Evaluation and verification of GenAI outputs
20 Referencing and citing of GenAI outputs
21 Academic integrity and misconduct
22 Use of AI detection tools
23 Legal compliance
24 Reporting mechanisms

Appendix B. List of the World’s Top 50 Universities and Their Corresponding Guidelines URLs (as Accessed in July/August 2024)

University Name Guidelines URL
Massachusetts Institute of Technology (MIT), Cambridge, United States https://ist.mit.edu/ai-guidance
Imperial College London, London, United Kingdom https://www.imperial.ac.uk/admin-services/library/learning-support/generative-ai-guidance/
University of Oxford, Oxford, United Kingdom https://communications.admin.ox.ac.uk/communications-resources/ai-guidance
Harvard University, Cambridge, United States https://huit.harvard.edu/ai/guidelines
University of Cambridge, Cambridge, United Kingdom https://genai.uchicago.edu/about/generative-ai-guidance
Stanford University, Stanford, United States https://uit.stanford.edu/security/responsibleai
ETH Zurich, Zürich, Switzerland https://ethz.ch/en/the-eth-zurich/education/ai-in-education.html
National University of Singapore (NUS), Singapore https://ctlt.nus.edu.sg/wp-content/uploads/2024/08/Policy-for-Use-of-AI-in-Teaching-and-Learning.pdf
University College London (UCL), London, United Kingdom https://library-guides.ucl.ac.uk/generative-ai/acknowleding
California Institute of Technology (Caltech), Pasadena, United States https://www.imss.caltech.edu/services/collaboration-storage-backups/caltech-ai/guidelines-for-secure-and-ethical-use-of-artificial-intelligence-ai
University of Pennsylvania, Philadelphia, United States https://cetli.upenn.edu/resources/generative-ai-your-teaching/
University of California (UC), Berkeley, United States https://oercs.berkeley.edu/ai/appropriate-use-generative-ai-tools
The University of Melbourne, Parkville, Australia https://www.unimelb.edu.au/generative-ai-taskforce/resources
Peking University, Beijing, China No guidelines were found on the website https://english.pku.edu.cn/ as of August 30, 2024.
Nanyang Technological University (NTU), Singapore https://www.ntu.edu.sg/research/resources/use-of-gai-in-research
Cornell University, Ithaca, United States https://it.cornell.edu/ai/ai-guidelines
The University of Hong Kong, Hong Kong, China https://innowings.engg.hku.hk/innowing1/aiguide/
The University of Sydney, Sydney, Australia https://www.sydney.edu.au/students/academic-integrity/artificial-intelligence.html
The University of New South Wales (UNSW), Sydney, Australia https://www.student.unsw.edu.au/notices/2024/05/ethical-and-responsible-use-artificial-intelligence-unsw
Tsinghua University, Beijing, China No guidelines were found on the website https://www.tsinghua.edu.cn/en/ as of August 30, 2024.
University of Chicago, Chicago, United States https://genai.uchicago.edu/about/generative-ai-guidance
Princeton University, Princeton, United States https://mcgraw.princeton.edu/generative-ai
Yale University, New Haven, United States https://provost.yale.edu/news/guidelines-use-generative-ai-tools
Université PSL, Paris, France No guidelines were found on the website https://psl.eu/en as of August 30, 2024.
University of Toronto, Toronto, Canada https://ai.utoronto.ca/guidelines/
École Polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland https://www.epfl.ch/about/vice-presidencies/vice-presidency-for-academic-affairs-vpa/tips-for-the-use-of-generative-ai-in-research-and-education/
The University of Edinburgh, Edinburgh, United Kingdom https://information-services.ed.ac.uk/computing/comms-and-collab/elm/guidance-for-working-with-generative-ai
Technical University of Munich (TUM), Munich, Germany No guidelines were found on the website https://www.tum.de/en/ as of August 30, 2024.
McGill University, Montreal, Canada https://www.mcgill.ca/stl/files/stl/stl_recommendations_2.pdf
Australian National University (ANU), Canberra, Australia https://learningandteaching.anu.edu.au/wp-content/uploads/2024/06/Chat_GPT_FAQ-1.pdf
Seoul National University, Seoul, South Korea No guidelines were found on the website https://en.snu.ac.kr/index.html as of August 30, 2024.
Johns Hopkins University, Baltimore, United States https://teaching.jhu.edu/university-teaching-policies/generative-ai/
The University of Tokyo, Tokyo, Japan https://utelecon.adm.u-tokyo.ac.jp/en/docs/ai-tools-in-classes
Columbia University, New York City, United States https://provost.columbia.edu/content/office-senior-vice-provost/ai-policy
The University of Manchester, Manchester, United Kingdom https://www.staffnet.manchester.ac.uk/dcmsr/communications/ai-guidelines/
The Chinese University of Hong Kong (CUHK), Hong Kong, China https://www.aqs.cuhk.edu.hk/documents/A-guide-for-students_use-of-AI-tools.pdf
Monash University, Melbourne, Australia https://www.monash.edu/graduate-research/support-and-resources/resources/guidance-on-generative-ai
University of British Columbia, Vancouver, Canada https://genai.ubc.ca/guidance/
Fudan University, Shanghai, China No guidelines were found on the website https://www.fudan.edu.cn/en/ as of August 30, 2024.
King’s College London, London, United Kingdom https://www.kcl.ac.uk/about/strategy/learning-and-teaching/ai-guidance/student-guidance
The University of Queensland, Brisbane City, Australia https://itali.uq.edu.au/teaching-guidance/teaching-learning-and-assessment-generative-ai
University of California, Los Angeles (UCLA), Los Angeles, United States https://genai.ucla.edu/guiding-principles-responsible-use
New York University (NYU), New York City, United States https://www.nyu.edu/faculty/teaching-and-learning-resources/Student-Learning-with-Generative-AI.html
University of Michigan-Ann Arbor, Ann Arbor, United States https://lsa.umich.edu/technology-services/services/learning-teaching-consulting/teaching-strategies/Guidelines-for-Using-Generative-Artificial-Intelligence.html
Shanghai Jiao Tong University, Shanghai, China https://global.sjtu.edu.cn/en/news/view/1520
Institut Polytechnique de Paris, Palaiseau Cedex, France No guidelines were found on the website https://www.ip-paris.fr/en as of August 30, 2024.
The Hong Kong University of Science and Technology, Hong Kong, China https://cei.hkust.edu.hk/en-hk/education-innovation/generative-ai-education
Zhejiang University, Hangzhou, China No guidelines were found on the website https://www.zju.edu.cn/english/ as of August 30, 2024.
Delft University of Technology, Delft, Netherlands https://hri-wiki.tudelft.nl/llm/rules-guidelines
Kyoto University, Kyoto, Japan No guidelines were found on the website https://www.kyoto-u.ac.jp/en as of August 30, 2024.

References

  1. Farrelly, T.; Baker, N. Generative artificial intelligence: Implications and considerations for higher education practice. Educ. Sci. 2023, 13(11), 1109. [CrossRef]
  2. Alier, M.; García-Peñalvo, F.J.; Camba, J.D. Generative artificial intelligence in education: From deceptive to disruptive. Int. J. Interact. Multimed. Artif. Intell. 2024, 8(5), 5-14. [CrossRef]
  3. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [CrossRef]
  4. Aljamaan, F.; Temsah, M.H.; Altamimi, I.; Al-Eyadhy, A.; Jamal, A.; Alhasan, K.; et al. Reference hallucination score for medical artificial intelligence chatbots: Development and usability study. JMIR Med. Inform. 2024, 12, e54345. [CrossRef]
  5. AlAli, R.; Wardat, Y. Opportunities and challenges of integrating generative artificial intelligence in education. Int. J. Relig. 2024, 5(7), 784-793. [CrossRef]
  6. Dogan, I.D.; Medvidović, L. Generative AI in education: Strategic forecasting and recommendations. In proceedings of 11th Higher Education Institutions Conference, Zagreb, Croatia, 21 – 22 September 2023; pp. 22-31. Available online: https://www.heic.hr/wp-content/uploads/2024/04/heic_proceedings_2023.pdf (accessed on 12 August 2024).
  7. Ghimire, A.; Edwards, J. From guidelines to governance: A study of AI policies in education. In Proceedings of the International Conference on Artificial Intelligence in Education, Recife, Brazil, 8–12 July 2024. [CrossRef]
  8. Driessens, O.; Pischetola, M. Danish university policies on generative AI: Problems, assumptions and sustainability blind spots. MedieKultur: J. Media Commun. Res. 2024, 40(76), 31-52. [CrossRef]
  9. Yusuf, A.; Pervin, N.; Román-González, M. Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 2024 21, 21. [CrossRef]
  10. Liang, E.S.; Bai, S. Generative AI and the future of connectivist learning in higher education. J. Asian Public Policy. 2024, 1-23. [CrossRef]
  11. Miao, F.; Holmes, W. Guidance for generative AI in education and research. Paris: UNESCO Publishing; 2023. p. 44. [CrossRef]
  12. Duah, J.E.; McGivern, P. How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. Int. J. Inf. Learn. Technol. 2024; 41(2): 180-93. [CrossRef]
  13. Dabis, A.; Csáki, C. AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI. Humanit. Soc. Sci. Commun. 2024, 11, 1006. [CrossRef]
  14. Elsevier. Generative AI policies for journals: Elsevier; 2024. Available online: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals (accessed on 15 November 2024).
  15. Sage Publishing. Artificial intelligence policy: Sage; 2024. Available online: https://us.sagepub.com/en-us/nam/artificial-intelligence-policy (accessed on 15 November 2024).
  16. Springer International Publishing. Artificial Intelligence (AI): SpringerLink; 2024. Available online: https://www.springer.com/gp/editorial-policies/artificial-intelligence--ai-/25428500 (accessed on 15 November 2024).
  17. Russell Group. Russell Group principles on the use of generative AI tools in education. Cambridge: The Russell Group of Universities; 2023. Available online: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf (accessed on 15 June 2024).
  18. Moorhouse, B.L.; Yeo, M.A.; Wan, Y. Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Comput. Educ. Open. 2023, 5, 100151. [CrossRef]
  19. McDonald, N.; Johri, A.; Ali, A.; Hingle, A. Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers and Society: Artificial Intelligence. 2024. arXiv preprint arXiv:2402.01659. [CrossRef]
  20. Cacho, R.M. Integrating generative AI in university teaching and learning: A model for balanced guidelines. Online Learn. J. 2024, 28(3), 55-81. [CrossRef]
  21. Smith, S.; Tate, M.; Freeman, K.; Walsh, A.; Ballsun-Stanton, B.; Hooper, M.; Lane, M. A university framework for the responsible use of generative AI in research. Computers and Society: Artificial Intelligence. 2024. [CrossRef]
  22. Quacquarelli Symonds. QS World University ranking 2024. London: QS Quacquarelli Symonds Limited; 2024. Available online: https://www.topuniversities.com/qs-world-university-ranking (accessed on 10 June 2024).
  23. Ganjavi, C.; Eppler, M.; O’Brien, D.; Ramacciotti, L.S.; Ghauri, M.S.; Anderson, I.; et al. ChatGPT and large language models (LLMs) awareness and use. A prospective cross-sectional survey of U.S. medical students. PLOS Digit. Health. 2024; 3(9), e0000596. [CrossRef]
  24. Benke, E.; Szőke, A. Academic integrity in the time of artificial intelligence: Exploring student attitudes. Ital. J. Sociol. Educ. 2024; 16(2), 91-108. [CrossRef]
  25. The University of Edinburgh. Guidance for working with Generative AI (“GenAI”) in your studies. Edinburgh: The University of Edinburgh; 2024. Available online: https://information-services.ed.ac.uk/computing/comms-and-collab/elm/guidance-for-working-with-generative-ai (accessed on 12 November 2024).
  26. The University of Edinburgh. Generative AI guidance for staff. Edinburgh: The University of Edinburgh; 2024. Available online: https://information-services.ed.ac.uk/computing/comms-and-collab/elm/generative-ai-guidance-for-staff (accessed on 12 November 2024).
  27. The University of Edinburgh. ELM - (Edinburgh (access to) language models) Edinburgh: The University of Edinburgh; 2024. Available online: https://information-services.ed.ac.uk/computing/comms-and-collab/elm (accessed on 12 November 2024).
  28. Lee, D.; Arnold, M.; Srivastava, A.; Plastow, K.; Strelan, P.; Ploeckl, F.; et al. The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Comput. Educ.: Artif. Intell. 2024; 6, 100221. [CrossRef]
  29. Ardito, C.G. Generative AI detection in higher education assessments. New Dir. Teach. Learn. 2024; early view. [CrossRef]
  30. Gehrman, E. How generative AI is transforming medical education: Harvard Medical School is building artificial intelligence into the curriculum to train the next generation of doctors. Harvard Medicine. 2024. Available online: https://magazine.hms.harvard.edu/articles/how-generative-ai-transforming-medical-education (accessed on 12 November 2024).
  31. Park, S.H.; Suh, C.H., Lee, J.H.; Kahn, C.E.; Moy, L. Minimum reporting items for clear evaluation of accuracy reports of large language models in healthcare (MI-CLEAR-LLM). Korean J. Radiol. 2024; 25(10), 865-8. [CrossRef]
  32. Kirova, V.D.; Ku, C.; Laracy, J.; Marlowe, T. The ethics of artificial intelligence in the era of generative AI. J. Syst. Cybern. Informatics. 2023; 21(4), 42-50. [CrossRef]
  33. Edwards, B. Why AI detectors think the US Constitution was written by AI.: Ars Technica; 2023, July 14. Available online: https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/ (accessed on 12 November 2024).
  34. Fowler, G.A. We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. 2023, April 14. Available online: https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/ (accessed on 12 November 2024).
  35. Dathathri, S.; See, A.; Ghaisas, S.; Huang, P.S.; McAdam, R.; Welbl, J.; et al. Scalable watermarking for identifying large language model outputs. Nature 2024; 634, 818-823. [CrossRef]
  36. Silva, G.S.; Khera, R.; Schwamm, L.H. Reviewer experience detecting and judging human versus artificial intelligence content: The Stroke journal essay contest. Stroke 2024; 55(10), 2573-8. [CrossRef]
  37. McCoy, L.G.; Ng, F.Y.C.; Sauer, C.M.; Legaspi, K.E.Y.; Jain, B.; Gallifant, J.; et al. Understanding and training for the impact of large language models and artificial intelligence in healthcare practice: A narrative review. BMC Med. Educ. 2024; 24(1), 1096. [CrossRef]
  38. Caballar, R.D. AI Copilots are changing how coding is taught. IEEE Spectrum. 2024. Available online: https://spectrum.ieee.org/ai-coding (accessed on 12 November 2024).
Figure 1. Flowchart illustrating the methodology employed in this study.
Figure 2. Screenshot showing excerpts from the University of Edinburgh online guidelines entitled “Guidance for working with Generative AI (“GenAI”) in your studies — GenAI guidance for students” [25]. It is worth noting that the University of Edinburgh has a separate set of GenAI guidelines for its staff [26]. The complete set of guidance for students and staff can be conveniently accessed from a dedicated hub [27].
Table 1. Summary of the main milestones of the literature review.
Author(s) Year Focus/Objectives
Russell Group (UK) 2023 Developed a set of principles to guide the use of GenAI tools in higher education institutions.
Miao and Holmes 2023 Presented UNESCO’s global guidance on the use of GenAI tools in education and research.
Chan 2023 Surveyed teachers, students, and support staff across universities in Hong Kong to propose an “AI Ecological Education Policy Framework”.
Moorhouse, Yeo and Wan 2023 Reviewed the guidelines regarding the use of GenAI tools in assessments from the top 50 universities around the world.
Farrelly and Baker 2023 Explored the various ways in which Generative AI influences academic work, with a particular focus on its effects on international students.
Dogan and Medvidović 2024 Reviewed the literature on GenAI to provide recommendations for developing a strategy to incorporate AI into education.
Ghimire and Edwards 2024 Conducted a survey of heads of high schools and higher education institutions in the USA to examine the status of policies related to the use of GenAI in education.
McDonald, Johri, Ali and Hingle 2024 Reviewed GenAI-related guidelines established by 116 universities in the United States.
Dabis and Csáki 2024 Analysed policy documents and guidelines from 30 leading universities around the globe.
Driessens and Pischetola 2024 Evaluated GenAI-related policies of eight universities in Denmark.
Cacho 2024 Suggested a framework for integrating GenAI into teaching and learning at higher education institutions.
Yusuf, Pervin and Román-González 2024 Conducted a survey involving 1,217 students and teachers from 76 countries to explore the use, benefits, and concerns associated with GenAI in higher education.
Smith et al. 2024 Proposed a framework to encourage and facilitate the responsible use of GenAI in research.
Table 2. Presence (yes/no) of general information items in the retrieved guidelines (n=41).
Rank Item Yes No
1 Introduction to GenAI tools 37 (90%) 4 (10%)
2 Guidelines issuing authority 36 (88%) 5 (12%)
3 Examples of GenAI tools 34 (83%) 7 (17%)
4 Objectives and scope of the guidelines 31 (76%) 10 (24%)
5 Contact information for guidance 24 (59%) 17 (41%)
6 Release/update date 22 (54%) 19 (46%)
7 How does an AI algorithm work? 8 (20%) 33 (80%)
Table 3. Presence (yes/no) of items covering instructions on the use of GenAI tools in the retrieved guidelines (n=41).
Rank Item Yes No
1 GenAI tools usage permitted 41 (100%) 0 (0%)
2 Instances unsuitable for GenAI tools usage (limitations) 35 (85%) 6 (15%)
3 Instructor approval for GenAI utilization 31 (76%) 10 (24%)
4-5 Domains for GenAI tools utilization 29 (71%) 12 (29%)
4-5 Strategies for use in classrooms and assessments 29 (71%) 12 (29%)
6 Details of GenAI tool employed (description/name/version/date) 26 (63%) 15 (37%)
7 Utilization and adaptation of GenAI output 23 (56%) 18 (44%)
8 Purpose of utilizing GenAI tools 15 (37%) 26 (63%)
9-10 Details of the provided prompts to the GenAI tool 13 (32%) 28 (68%)
9-10 Documentation of GenAI tool outputs 13 (32%) 28 (68%)
Table 4. Presence (yes/no) of items covering legal and ethical issues in the retrieved guidelines (n=41).
Rank Item Yes No
1 Evaluation and verification of GenAI outputs 38 (93%) 3 (7%)
2 Academic integrity and misconduct 36 (88%) 5 (12%)
3 Data privacy and security 35 (85%) 6 (15%)
4 Legal compliance 24 (59%) 17 (41%)
5 Referencing and citing of GenAI outputs 23 (56%) 18 (44%)
6 Use of AI detection tools 17 (41%) 24 (59%)
7 Reporting mechanisms 14 (34%) 27 (66%)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.