
Developing ALiF: A Systematic Protocol for Building and Validating an Artificial Intelligence Literacy Framework in Higher Education

This version is not peer-reviewed. Submitted and posted: 29 November 2024.
Abstract
Introduction: The rapid adoption of artificial intelligence (AI) in higher education demands systematic frameworks to develop AI literacy across diverse institutional stakeholders. Existing approaches often lack clarity, inclusivity, and progression pathways, limiting their effectiveness. This protocol outlines a methodology for creating the AI Literacy Framework (ALiF), a comprehensive model designed to address the needs of students, faculty, and staff in higher education. Methods: ALiF will be developed using a three-phase mixed-methods approach. Phase 1 involves a systematic literature review and expert consultation to design the initial framework. Phase 2 refines the framework through Delphi studies and stakeholder focus groups. Phase 3 validates the framework via expert evaluation and pilot implementation across three higher education institutions. Data collection includes 30 expert interviews, 24–30 focus groups, and quantitative and qualitative analysis of pilot outcomes. Expected Outcomes: The study will deliver (1) a validated AI literacy framework, (2) detailed progression pathways for diverse stakeholder groups, (3) implementation strategies tailored to higher education contexts, and (4) tools for assessing institutional readiness and literacy levels. Supporting outputs will include assessment instruments and toolkits for implementation. Significance: This protocol provides a foundation for institutions to systematically develop, implement, and assess AI literacy programs. Its multi-stakeholder focus and robust methodology contribute significantly to addressing gaps in existing research and practice. Potential limitations include resource-intensive implementation and the need for longitudinal validation.
Keywords: 
Subject: Social Sciences - Education

1. Introduction

Background
Artificial Intelligence (AI) significantly transforms higher education by reshaping teaching, learning, research, and administrative practices worldwide (Zawacki-Richter et al., 2019). The emergence of generative AI tools has introduced both challenges and opportunities, prompting institutions to adapt rapidly. For instance, 89% of educators have reported that AI profoundly impacts their teaching practices (EDUCAUSE Horizon Report, 2023). Since 2020, the integration of AI in higher education has increased by 300% (Holmes et al., 2022), highlighting the urgent need for systematic approaches to developing AI literacy.
Despite the widespread adoption of AI, structured programs to enhance AI literacy are still quite limited. Although 76% of higher education institutions have embraced AI technologies, only 24% offer formal initiatives to equip stakeholders with essential skills (García-Peñalvo et al., 2022). This gap hinders effective AI integration and worsens challenges related to governance, teaching methods, and institutional readiness (UNESCO, 2022).
Problem Statement
AI literacy is increasingly recognized as essential; however, current development efforts in higher education are often fragmented and reactive. A comprehensive review by Long and Magerko (2020) found that 82% of institutions address AI literacy through isolated initiatives or ad-hoc policies. This approach leads to several issues, including:
Inconsistent learning outcomes
Significant gaps in critical evaluation skills
Challenges in monitoring progress
Poor preparation for emerging AI applications
The consequences of these limitations are evident. A meta-analysis by Chen et al. (2023), which included 145 institutions across 28 countries, revealed a 47% efficiency gap in AI literacy programs due to the lack of structured progression pathways.

Research Gaps

Recent reviews by Zawacki-Richter et al. (2023) and Roll and Wylie (2022) identify three critical gaps in the development of AI literacy:
Stakeholder Exclusivity: Many existing frameworks focus on specific groups, such as students or faculty, without considering the interconnected needs of all institutional roles (Eaton & Turner, 2023).
Progression Pathways: Only 12% of current frameworks provide clear pathways for developing skills from foundational literacy to advanced competencies, which limits their overall effectiveness (Prinsloo & Slade, 2023).
Validated Methodologies: There is a scarcity of empirically validated strategies, with only 8% of frameworks being based on robust methodologies (Hwang & Chen, 2023).
These gaps highlight the need for a systematic, multi-stakeholder framework that addresses existing limitations and the complexities of AI literacy in higher education.

Study Objectives

This study aims to design and validate the AI Literacy Framework (ALiF), a comprehensive model tailored to meet the diverse needs of students, faculty, and staff. By addressing identified research gaps, the study seeks to establish a validated methodology that higher education institutions can adopt to effectively enhance AI literacy.

2. Literature Review

Current State of AI Literacy Frameworks

The evolution of AI literacy frameworks in higher education showcases both innovation and ongoing challenges. Zawacki-Richter et al. (2023) identify three key phases: early tool-focused approaches (2015–2019), competency-based models (2020–2021), and integrated frameworks (2022–present). Despite these advancements, significant gaps persist. Long and Magerko (2020) point out the absence of comprehensive frameworks that address the interconnected needs of diverse institutional stakeholders.
Recent reviews emphasize notable limitations in current frameworks. Chen et al. (2023) observe that most frameworks prioritize technical skills while overlooking ethical considerations and critical evaluation competencies. For example, Hernández-Leo et al. (2023) report that 67% of frameworks focus on tool proficiency, while only 23% adequately address ethical and evaluative dimensions. Furthermore, the lack of clear progression metrics undermines the consistency and measurability of skill development (Prinsloo & Slade, 2023). Trust and Whalen (2023) also criticize the siloed nature of framework development, which limits institutional effectiveness in adopting AI technologies.

Stakeholder-Specific Needs

Effective AI literacy frameworks must address the unique yet connected needs of students, faculty, and administrative staff. These groups need specific competencies for successful AI integration in their roles.
Students
Students must develop foundational competencies to effectively integrate AI into their academic tasks and professional development. Tsai et al. (2023) emphasize the significance of academic integration capabilities, which include critical evaluation skills for research and writing. Holmes et al. (2022) highlight the importance of ethical application literacy as a means of maintaining academic integrity in a learning environment enhanced by AI. Furthermore, career readiness necessitates equipping students with AI-enhanced skills to successfully navigate a technology-driven workforce (Guan et al., 2023).
Faculty
Faculty members encounter distinct challenges in incorporating AI into their teaching, assessment, and research practices. Holmes et al. (2022) highlight the necessity for skills in pedagogical integration, particularly when designing learning experiences that utilize AI. As assessment practices evolve, faculty must be proficient in adjusting traditional evaluation methods to fit AI-supported environments (García-Peñalvo et al., 2023). Additionally, Luckin et al. (2022) stress the importance of research application skills, including data analysis and the adaptation of methodologies, as essential for faculty to achieve AI literacy. Faculty also need training to assist students in understanding the implications of AI in education (Hwang & Chen, 2023).
Administrative Staff
Administrative staff, often overlooked, play a crucial role in the integration of AI within institutions. Guan et al. (2023) emphasize the importance of operational efficiency in AI-enhanced processes, such as data management and workflow optimization. Additionally, staff members need to develop competencies that support both the ethical and technical aspects of AI implementation (Jobin et al., 2023). As institutions establish comprehensive AI governance frameworks, it is essential for staff to gain expertise in policy implementation and risk management, particularly regarding data protection and compliance (UNESCO, 2023; Trust & Whalen, 2023).

Implementation Challenges

Implementing AI literacy frameworks in higher education faces several challenges. Resource limitations often restrict institutions' ability to balance investments in technology with the necessary training for staff (Jobin et al., 2023). Institutional policies that prioritize traditional teaching methods can obstruct the swift adaptation needed for effective AI integration (García-Peñalvo et al., 2023). Additionally, there is a cultural divide among stakeholders; while early adopters are generally eager to embrace AI, those with traditional views often express skepticism (Luckin et al., 2022). Discrepancies in technical infrastructure further impact the scalability and sustainability of implementation efforts, particularly in settings with limited resources (Hwang & Chen, 2023).

Emerging Trends and Future Directions

AI literacy frameworks are increasingly focusing on integration and adaptability. Integrated approaches that align competencies across different stakeholder groups are becoming more prominent, as highlighted by UNESCO (2023). Multi-dimensional assessments, which combine traditional evaluation methods with AI-enhanced techniques, are now essential for tracking long-term skill development (Chen et al., 2023). The incorporation of ethical AI principles is also a growing priority, as it addresses concerns about responsible implementation (Jobin et al., 2023). Additionally, adaptive learning pathways are creating personalized experiences for students, faculty, and staff, which enhances engagement and effectiveness (Tsai et al., 2023). This study builds upon these trends by providing a systematic and validated methodology tailored to various institutional contexts.

International Perspectives on AI Literacy Development

Global efforts to develop AI literacy frameworks show regional differences in approach and focus. In Asia, countries like Singapore and China have integrated national policies with institutional frameworks, placing a strong emphasis on ethical AI alongside technical skills (Tan & Hew, 2022; Zhang et al., 2023). In Africa, particularly in South Africa and Nigeria, there has been innovation within resource constraints, leading to scalable and sustainable frameworks (Mkhize & Msweli, 2023; Olaleye & Sanusi, 2023). Emerging markets such as India and Brazil have also developed scalable frameworks that balance technical skills with societal objectives (Kumar & Sharma, 2023; Santos & Lima, 2023).
Despite rapid technological advancements, there is a significant gap between research and practice in the MENA region. This region faces challenges due to limited documentation of systematic frameworks and a tendency to prioritize rapid implementation over theoretical development (Olaleye & Sanusi, 2023). This study aims to address this gap by providing a validated methodology tailored to the unique educational and cultural context of the MENA region.

3. Methodology

3.1. Research Design

This study utilizes a sequential mixed-methods approach to develop and validate the AI Literacy Framework (ALiF). The methodology combines qualitative and quantitative techniques across three phases: developing the framework foundation, refining it, and validating the final product. This design provides a comprehensive understanding of stakeholder needs and results in a robust, adaptable framework that is suitable for various institutional contexts (Chan et al., 2023).

3.2. Phase 1: Framework Foundation Development

The initial phase focuses on establishing the theoretical and practical foundations of the framework using two primary approaches:
Systematic Literature Review: We will conduct a comprehensive review of existing AI literacy frameworks, competency models, and implementation strategies, following the methodology outlined by Zawacki-Richter et al. (2023). This review will cover academic databases such as Web of Science, Scopus, and ERIC, focusing on publications from 2019 to 2024.
Expert Consultation: In line with Kumar and Sharma's (2023) approach to framework development, we will engage an international panel of experts through semi-structured interviews. The expert panel will consist of educators, AI specialists, educational technologists, and policymakers, ensuring a diverse range of perspectives in the framework's development.

3.3. Phase 2: Framework Refinement

The refinement phase utilizes iterative development processes through various channels:
Delphi Study: Following the methodology established by Park and Kim (2022), we will conduct a three-round Delphi study involving 30 international experts. The selection criteria will include a minimum of five years of experience in AI education, published research in the field, and current involvement in AI literacy initiatives.
Stakeholder Focus Groups: In line with the effective approach demonstrated by Mkhize and Msweli (2023), we will organize focus groups with key stakeholders, including students, faculty, and staff. Each group will consist of 8-10 participants from diverse disciplinary backgrounds, ensuring comprehensive feedback on the framework components.

3.4. Phase 3: Framework Validation

The validation phase involves several assessment methods:
Pilot Implementation: Following the implementation model proposed by Tan and Hew (2022), the framework will be piloted at three partner institutions, each representing a different institutional context. The pilot will last for one academic semester, with systematic data collection conducted throughout this period.
Mixed-Methods Assessment: Based on the evaluation approach suggested by Santos and Lima (2023), we will utilize both quantitative and qualitative assessment methods:
Quantitative Assessment: Pre- and post-implementation surveys will measure competency development, implementation effectiveness, and stakeholder satisfaction.
Qualitative Evaluation: We will conduct semi-structured interviews, utilize observation protocols, and carry out document analysis to evaluate the framework's effectiveness and identify any implementation challenges.

3.5. Data Collection

Data collection follows established protocols for each strand of the study, as detailed below.
Systematic Literature Review Data Collection
The systematic literature review follows a structured protocol adapted from Zawacki-Richter et al. (2023). The search strategy covers six major academic databases: Web of Science, Scopus, ERIC, MEDLINE, IEEE Xplore, and ACM Digital Library. The search terms combine "artificial intelligence" or "AI" with concepts related to literacy, competency, and higher education, as validated by Kumar and Sharma (2023).
The review covers publications from 2019 to 2024, focusing on empirical studies, framework developments, and implementation reports. Following the methodology of Chan et al. (2023), the inclusion criteria require that publications be peer-reviewed and in English, although significant non-English publications with English abstracts will also be considered.
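To illustrate how this search strategy might be operationalized, the minimal sketch below assembles the Boolean query from the stated term groups and applies the inclusion criteria to a candidate record. The query syntax, record fields, and sample record are simplified assumptions for demonstration; each database (Web of Science, Scopus, and so on) uses its own field codes and export format.

```python
# Illustrative sketch: assembling the review's Boolean search string and
# applying the stated inclusion criteria to candidate records.
# The record fields below are hypothetical; real database exports differ.

AI_TERMS = ['"artificial intelligence"', '"AI"']
LITERACY_TERMS = ['"literacy"', '"competenc*"']      # competency/competencies
CONTEXT_TERMS = ['"higher education"', '"universit*"']

def boolean_query() -> str:
    """Combine term groups with OR within groups and AND across groups."""
    groups = [AI_TERMS, LITERACY_TERMS, CONTEXT_TERMS]
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

def meets_inclusion_criteria(record: dict) -> bool:
    """Apply the protocol's screening rules to one bibliographic record."""
    in_window = 2019 <= record["year"] <= 2024
    peer_reviewed = record["peer_reviewed"]
    # Non-English work is retained if an English abstract is available.
    language_ok = record["language"] == "en" or record["has_english_abstract"]
    return in_window and peer_reviewed and language_ok

if __name__ == "__main__":
    print(boolean_query())
    sample = {"year": 2021, "peer_reviewed": True,
              "language": "de", "has_english_abstract": True}
    print(meets_inclusion_criteria(sample))  # True: retained for screening
```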
Expert Consultation Data Collection
Expert consultation utilizes a structured interview protocol based on the framework developed by Park and Kim (2022) for educational technology research. A sample of thirty experts is purposively selected against explicit criteria.
According to the methodology outlined by Tan and Hew (2022), participants must meet the following qualifications: at least five years of experience in AI education, active research or practice in AI literacy development, and current involvement in institutional AI implementation.
Semi-structured interviews, each lasting 60 to 90 minutes, are conducted via video conferencing platforms (Teams/Zoom). The interview protocol covers components of the framework, implementation challenges, and strategies for validation. Following the recommendations of Mkhize and Msweli (2023), the interviews are recorded, transcribed, and validated by the participants.
Delphi Study Data Collection
The Delphi study is structured into three rounds, as outlined by Zhang et al. (2023). In the first round, open-ended questions are used to gather input on the framework components and implementation strategies. The second round presents the consolidated findings for participants to rate and provide comments on. The third round aims to build consensus regarding the elements of the framework.
Data collection takes place through secure online platforms, adhering to standardized response timeframes established by previous research (Olaleye & Sanusi, 2023). After each round, the responses are analyzed using systematic coding procedures, with a focus on developing consensus and addressing differing viewpoints.
Focus Group Data Collection
Focus group sessions adhere to protocols developed by Santos and Lima (2023) specifically for educational technology research. Separate sessions are held for each stakeholder group: students, faculty, and staff. Each session includes 8 to 10 participants and lasts approximately 90 minutes.
The session protocols involve guided discussions on various framework components, implementation challenges, and assessment methods. Following the methodology outlined by Janse van Rensburg and Goede (2022), the sessions are recorded, transcribed, and analyzed using standardized coding procedures.
Pilot Implementation Data Collection
The pilot implementation data collection follows a comprehensive protocol developed by Nwana et al. (2023). The process for collecting quantitative data includes:
Pre-implementation Baseline Assessments: These assess current AI literacy levels using validated survey instruments adapted from Kumar and Sharma (2023).
Regular Progress Monitoring: This involves using standardized assessment tools to track the development of competencies across framework components, as outlined by Chan et al. (2023).
Post-implementation Evaluation: This evaluates the framework's effectiveness, stakeholder satisfaction, and the overall success of the implementation, following the protocols established by Faruqe et al. (2021).
For qualitative data collection during the pilot implementation, the following methods are used:
Implementation Journals: Participating institutions maintain journals that document challenges, adaptations, and successes, following templates developed by Park and Kim (2022).
Regular Observation Sessions: Conducted by trained researchers, these sessions use structured observation protocols adapted from Tan and Hew's (2022) implementation studies.
Periodic Stakeholder Interviews: These interviews explore the experiences of stakeholders during the implementation process and are conducted at key points throughout the pilot phase, following the interview protocols established by Mkhize and Msweli (2023).

3.6. Data Analysis

Systematic Literature Review Analysis
Following the methodology outlined by Zawacki-Richter et al. (2023), the analysis of the literature review data utilizes a structured coding approach:
Initial Screening: The analysis begins with descriptive coding of the identified literature using NVivo software. Papers are categorized based on research type, methodology, geographical context, and framework components according to the classification system proposed by Kumar and Sharma (2023).
Content Analysis: Thematic analysis is conducted using the two-cycle coding process developed by Chan et al. (2023). In the first cycle, key themes and components of AI literacy frameworks are identified. The second cycle synthesizes the relationships between these components and their implementation approaches.
Framework Synthesis: Utilizing the constant comparative method described by Park and Kim (2022), the findings are synthesized into preliminary framework structures, with a specific focus on progression pathways and differentiation among stakeholders.
Expert Consultation Analysis
The analysis of interview data adheres to a rigorous qualitative methodology inspired by Tan and Hew's (2022) approach:
Transcription and Verification: Interviews are professionally transcribed, and participants verify the transcripts for accuracy. NVivo software is used to organize the data and facilitate initial coding.
Thematic Analysis: Following the protocol established by Mkhize and Msweli (2023), the analysis is conducted in three stages:
Open Coding: Identification of key concepts and recommendations.
Axial Coding: Establishing relationships among the identified concepts.
Selective Coding: Developing core theoretical categories.
Inter-Rater Reliability: Two independent researchers code 25% of the data to establish reliability. Any disagreements in coding are resolved through consensus discussions.
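As a concrete illustration of this reliability check, the sketch below computes Cohen's kappa on a hypothetical double-coded subset. The choice of kappa is an assumption, since the protocol does not name a specific agreement statistic; percent agreement or Krippendorff's alpha would slot into the same step, and the codes and excerpts shown are invented.

```python
# Illustrative inter-rater reliability check on the double-coded subset.
# Cohen's kappa is an assumed choice; the protocol names no specific statistic.
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two researchers to the same 12 excerpts.
coder_a = ["ethics", "tools", "ethics", "pedagogy", "tools", "ethics",
           "pedagogy", "tools", "ethics", "tools", "pedagogy", "ethics"]
coder_b = ["ethics", "tools", "tools", "pedagogy", "tools", "ethics",
           "pedagogy", "tools", "ethics", "ethics", "pedagogy", "ethics"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
# Values around 0.80 or above are commonly treated as strong agreement;
# lower values would trigger the consensus discussions described above.
```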

Delphi Study Analysis

The analysis of Delphi data combines both quantitative and qualitative approaches, as outlined by Zhang et al. (2023):
Round One: Conduct a qualitative analysis of open-ended responses using thematic analysis techniques. Codes are developed inductively and validated by a team of researchers.
Round Two: Perform a statistical analysis of the ratings (illustrated in the sketch after this list), which includes:
- Measures of central tendency
- Consensus calculations using the interquartile range
- Stability analysis between rounds
Round Three: Complete the final analysis by incorporating both statistical consensus measures and qualitative feedback, following the integration protocol established by Olaleye and Sanusi (2023).
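To make the consensus and stability calculations concrete, here is a minimal sketch assuming 5-point Likert ratings and an IQR of at most 1 as the consensus rule. The protocol specifies IQR-based consensus but not an exact threshold, so that cutoff, the stability margin, and the ratings below are illustrative assumptions.

```python
# Illustrative Delphi consensus check, assuming items are rated on a
# 5-point Likert scale. The IQR <= 1 consensus rule is an assumption;
# the protocol specifies IQR-based consensus but not the exact threshold.
import statistics

def summarize_item(ratings: list[int]) -> dict:
    """Central tendency and IQR-based consensus for one Delphi item."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)   # quartile cut points
    iqr = q3 - q1
    return {
        "median": statistics.median(ratings),
        "iqr": round(iqr, 2),
        "consensus": iqr <= 1.0,                     # assumed cutoff
    }

# Hypothetical round-two and round-three ratings for one framework element.
round2 = [4, 5, 4, 3, 4, 5, 4, 4, 3, 4]
round3 = [4, 4, 4, 4, 4, 5, 4, 4, 4, 4]
s2, s3 = summarize_item(round2), summarize_item(round3)
print(s2, s3)
# Stability between rounds: a small shift in medians suggests the
# panel's judgement has settled (0.5 is an assumed margin).
print("stable:", abs(s3["median"] - s2["median"]) <= 0.5)
```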
Focus Group Analysis
Focus group data analysis follows the systematic approach outlined by Santos and Lima (2023):
  • Content Analysis: Transcripts are analyzed in detail using NVivo software. Coding schemes are developed inductively from the data and validated against established theoretical frameworks.
  • Cross-group Analysis: Comparative analyses of different stakeholder groups identify common themes as well as concerns specific to each group.
  • Validation: Member checking and peer debriefing are conducted to ensure the validity of the analysis.

Pilot Implementation Analysis

Quantitative Analysis

The statistical analysis of implementation data will be conducted using SPSS software, following the comprehensive approach outlined by Nwana et al. (2023). The analysis will include the following components:
Descriptive Statistics: competency development metrics, implementation progress indicators, and stakeholder satisfaction measures
Inferential Statistics: paired t-tests comparing pre- and post-implementation scores, ANOVA comparing stakeholder and institutional groups, and regression analysis to identify predictor variables
Reliability Analysis: Cronbach's alpha for internal consistency, test-retest reliability where applicable, and inter-rater reliability for observational data
This structured approach will ensure a thorough examination of the implementation data.
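For illustration, the sketch below reproduces two of these analyses with open-source tools: a paired t-test on hypothetical pre/post competency scores and Cronbach's alpha computed from its standard formula. The protocol itself specifies SPSS; scipy/numpy stand in here purely for demonstration, and all scores and survey responses are toy values.

```python
# Illustrative pre/post analysis for the pilot data. The protocol names
# SPSS; scipy/numpy are used here only for demonstration, and all data
# below are invented.
import numpy as np
from scipy import stats

# Hypothetical AI-literacy scores for 8 participants, before and after.
pre = np.array([52, 61, 48, 70, 55, 63, 59, 66], dtype=float)
post = np.array([60, 68, 55, 74, 63, 70, 61, 72], dtype=float)

# Paired t-test comparing pre- and post-implementation scores.
t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; rows = respondents, columns = survey items.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 6-respondent x 4-item survey block.
survey = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                   [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]], float)
print(f"Cronbach's alpha = {cronbach_alpha(survey):.2f}")
```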

Qualitative Analysis

Implementation journal analysis uses the framework developed by Janse van Rensburg and Goede (2022). This includes three types of analysis: Chronological Analysis, which tracks implementation challenges and solutions over time; Thematic Analysis, which identifies patterns in experiences and adaptations related to implementation; and Cross-case Analysis, which compares experiences across various institutional contexts.

Integration of Analyses

Following the mixed-methods integration approach outlined by O'Cathain et al. (2010):
  • Data Triangulation: This involves integrating findings from various data sources to validate the components of the framework and the strategies for implementation.
  • Sequential Integration: In this process, each phase of analysis informs the data collection and analysis procedures that follow.
  • Synthesis: This final step involves the complete integration of all analytical findings, which is used to refine and validate the framework.

4. Discussion

4.1. Framework Development Considerations

The AI Literacy Framework (ALiF) protocol has been created to fill significant gaps in current educational research and practice. As highlighted by Zawacki-Richter et al. (2023), the rapid integration of AI technologies in higher education requires systematic approaches to developing literacy. This protocol study provides a structured methodology to address this need, while also acknowledging the complex relationship between technical skills, critical evaluation, and ethical considerations.
The proposed multi-stakeholder approach addresses the limitations identified by Chan et al. (2023) in existing frameworks. By incorporating the perspectives of students, faculty, and staff from the outset, this protocol enables the creation of a more comprehensive and practical framework. This inclusive approach is in line with the findings of Kumar and Sharma (2023), which emphasize the importance of stakeholder engagement when implementing educational technology.

4.2. Methodological Implications

The mixed-methods design outlined in this protocol aims to address the methodological limitations identified in previous attempts to develop frameworks. Park and Kim (2022) emphasize the importance of combining rigorous qualitative research with quantitative validation, a strategy this protocol adopts through its three-phase design. By incorporating expert consultations and stakeholder feedback, this protocol aligns with Tan and Hew's (2022) call for a more balanced approach to framework development.
The iterative development process includes multiple validation stages, which enhance the protocol's reliability. This approach follows Santos and Lima's (2023) recommendations for developing educational frameworks, especially in rapidly changing technological environments. Furthermore, the emphasis on continuous refinement throughout the development process addresses concerns Mkhize and Msweli (2023) raised regarding frameworks' adaptability and sustainability.

4.3. Implementation Challenges

Several potential challenges should be considered when implementing the protocol. According to Janse van Rensburg and Goede (2022), the resource requirements for developing a comprehensive framework may present obstacles for some institutions. To address this issue, the protocol provides scalable implementation options and clear guidelines for resource allocation.
Institutional resistance to systematic change is another common barrier to educational innovation (Zhang et al., 2023) and must be anticipated. The protocol emphasizes stakeholder engagement and suggests a phased implementation approach to help overcome this challenge. Ultimately, the success of these strategies will depend largely on the institution's specific context and the support of its leadership.

4.4. Cultural and Regional Considerations

The application of the protocol in different cultural contexts requires careful consideration. Research from the MENA region indicates that technological implementation frameworks must consider local educational traditions and values. The protocol's flexible design allows for cultural adaptation while maintaining essential methodological rigor.
Regional differences in technological infrastructure and resource availability—highlighted by Olaleye and Sanusi (2023)—necessitate adaptable implementation strategies. The protocol’s tiered approach to framework development enables institutions to align their implementations with their specific contexts and capabilities.

4.5. Future Directions

This protocol study opens several opportunities for future research and development. As suggested by Nwana et al. (2023), the evolution of the framework will require ongoing investigation into emerging AI technologies and their applications in education. Long-term studies assessing the framework's effectiveness and impact will be crucial for continuous improvement and adaptation.
Future research should explore the framework's applicability across various higher education institutions, disciplines, and cultural contexts. Additionally, it is essential to investigate the framework's impact on student learning outcomes, faculty development, and institutional effectiveness to validate and refine the approach.

4.6. Practical Implications

The protocol's practical implications extend beyond framework development to institutional policy, professional development, and educational practice. According to Raman and Satish (2022), systematic approaches to AI literacy can significantly improve an institution's preparedness for technological change. This protocol offers institutions a foundation for building comprehensive AI literacy strategies tailored to their own needs and capabilities.

Acknowledgments

I thank my colleagues at the Institute of Learning for their collegial discussion and feedback on developing an AI Literacy framework.

References

  1. Chan, T. W., Looi, C. K., Chen, W., Wong, L. H., Chang, B., Liao, C. C., ... & Ogata, H. (2023). AI in education in Asia: A comparative framework analysis. Research and Practice in Technology Enhanced Learning, 18(1), 1-24.
  2. Chen, X., Zou, D., Xie, H., & Wang, F. L. (2023). Artificial intelligence in higher education: A systematic review of research trends, applications, and challenges. Education and Information Technologies, 28(1), 1-35.
  3. Du, X., Yang, J., & Shelton, B. E. (2023). A systematic review of research on AI literacy development in higher education. Review of Educational Research, 93(5), 637-683.
  4. EDUCAUSE Horizon Report. (2023). Teaching and Learning Edition.
  5. Faruqe, F., Watkins, R., & Medsker, L. (2021). Competency Model Approach to AI Literacy: Research-based Path from Initial Framework to Model. arXiv preprint arXiv:2108.05809. [CrossRef]
  6. García-Peñalvo, F. J., Corell, A., Abella-García, V., & Grande-de-Prado, M. (2022). Artificial Intelligence in Higher Education: A Systematic Mapping. Sustainability, 14(3), 1493.
  7. Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., ... & Koedinger, K. R. (2022). Ethics of AI in Education: Towards a Community-Wide Framework. International Journal of Artificial Intelligence in Education, 32(2), 357-383. [CrossRef]
  8. Hwang, G. J., & Chen, P. Y. (2023). Artificial intelligence in education: A review of methodological approaches. Educational Research Review, 38, 100474.
  9. International AI in Education Consortium. (2023). Strategic Priorities Report.
  10. Janse van Rensburg, J. T., & Goede, R. (2022). AI literacy for African contexts: A South African perspective. International Journal of Advanced Computer Science and Applications, 13(6), 742-751.
  11. Kumar, V., & Sharma, R. (2023). AI literacy in Indian higher education: A comprehensive framework. Education and Information Technologies, 28(4), 12789-12805.
  12. Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16).
  13. Mkhize, P., & Msweli, P. (2023). Developing AI competencies in resource-constrained environments: Lessons from South African universities. African Journal of Science, Technology, Innovation and Development, 15(3), 1-12.
  14. Nwana, H. S., Oladipo, F. O., & Keaikitse, S. M. (2023). Artificial intelligence education in Africa: A comparative analysis of approaches and frameworks. African Journal of Information Systems, 15(2), 124-142.
  15. O'Cathain, A., Murphy, E., & Nicholl, J. (2010). Three techniques for integrating data in mixed methods studies. BMJ, 341, c4587. [CrossRef]
  16. Olaleye, S. A., & Sanusi, I. T. (2023). AI literacy framework development in Nigerian higher education. International Journal of Education and Development using Information and Communication Technology, 19(1), 6-22.
  17. Park, Y., & Kim, Y. (2022). AI literacy education in Korean universities: A national framework approach. Asian Journal of Education, 23(2), 89-104.
  18. Santos, C. A. M., & Lima, M. R. (2023). AI literacy development in Brazilian universities: An integrated approach. Revista Brasileira de Educação, 28, e280054.
  19. Sato, M., Kuno, Y., & Nakamura, Y. (2023). Development and implementation of AI literacy frameworks in Japanese universities. Journal of Technology and Education in Japan, 46(2), 178-193.
  20. Tan, C. Y., & Hew, K. F. (2022). AI literacy in Singapore's education system: Policy, practice and implications. Asia Pacific Journal of Education, 42(3), 456-471.
  21. UNESCO. (2022). AI and Education: Guidance for Policy-makers.
  22. UNESCO. (2023). AI Competency Framework for Higher Education.
  23. Zhang, D., Peng, Z., Yao, X., & Liu, L. (2023). Artificial intelligence literacy education in Chinese universities: A systematic review. Higher Education Research & Development in Asia, 42(2), 291-306.
  24. Zawacki-Richter, O., Marín, V. I., & Bond, M. (2023). Systematic review of research on artificial intelligence applications in higher education – Update and extension. International Journal of Educational Technology in Higher Education, 20(1), 1-42. [CrossRef]