1. Introduction
Background
Artificial Intelligence (AI) is significantly transforming higher education, reshaping teaching, learning, research, and administrative practices worldwide (Zawacki-Richter et al., 2019). The emergence of generative AI tools has introduced both challenges and opportunities, prompting institutions to adapt rapidly: 89% of educators report that AI profoundly affects their teaching practices (EDUCAUSE Horizon Report, 2023), and the integration of AI in higher education has increased by 300% since 2020 (Holmes et al., 2022), underscoring the urgent need for systematic approaches to developing AI literacy.
Despite this widespread adoption, structured programs to develop AI literacy remain limited. Although 76% of higher education institutions have embraced AI technologies, only 24% offer formal initiatives to equip stakeholders with the necessary skills (García-Peñalvo et al., 2022). This gap hinders effective AI integration and compounds challenges in governance, pedagogy, and institutional readiness (UNESCO, 2022).
Problem Statement
AI literacy is increasingly recognized as essential; however, current development efforts in higher education are often fragmented and reactive. A comprehensive review by Long and Magerko (2020) found that 82% of institutions address AI literacy through isolated initiatives or ad-hoc policies. This approach leads to several issues, including:
Inconsistent learning outcomes
Significant gaps in critical evaluation skills
Challenges in monitoring progress
Poor preparation for emerging AI applications
The consequences of these limitations are evident. A meta-analysis by Chen et al. (2023), which included 145 institutions across 28 countries, revealed a 47% efficiency gap in AI literacy programs due to the lack of structured progression pathways.
Research Gaps
Recent reviews by Zawacki-Richter et al. (2023) and Roll and Wylie (2022) identify three critical gaps in the development of AI literacy:
Stakeholder Exclusivity: Many existing frameworks focus on specific groups, such as students or faculty, without considering the interconnected needs of all institutional roles (Eaton & Turner, 2023).
Progression Pathways: Only 12% of current frameworks provide clear pathways for developing skills from foundational literacy to advanced competencies, which limits their overall effectiveness (Prinsloo & Slade, 2023).
Validated Methodologies: There is a scarcity of empirically validated strategies, with only 8% of frameworks being based on robust methodologies (Hwang & Chen, 2023).
These gaps highlight the need for a systematic, multi-stakeholder framework that addresses existing limitations and the complexities of AI literacy in higher education.
Study Objectives
This study aims to design and validate the AI Literacy Framework (ALiF), a comprehensive model tailored to meet the diverse needs of students, faculty, and staff. By addressing identified research gaps, the study seeks to establish a validated methodology that higher education institutions can adopt to effectively enhance AI literacy.
2. Literature Review
Current State of AI Literacy Frameworks
The evolution of AI literacy frameworks in higher education showcases both innovation and ongoing challenges. Zawacki-Richter et al. (2023) identify three key phases: early tool-focused approaches (2015–2019), competency-based models (2020–2021), and integrated frameworks (2022–present). Despite these advancements, significant gaps persist. Long and Magerko (2020) point out the absence of comprehensive frameworks that address the interconnected needs of diverse institutional stakeholders.
Recent reviews emphasize notable limitations in current frameworks. Chen et al. (2023) observe that most frameworks prioritize technical skills while overlooking ethical considerations and critical evaluation competencies. For example, Hernández-Leo et al. (2023) report that 67% of frameworks focus on tool proficiency, while only 23% adequately address ethical and evaluative dimensions. Furthermore, the lack of clear progression metrics undermines the consistency and measurability of skill development (Prinsloo & Slade, 2023). Trust and Whalen (2023) also criticize the siloed nature of framework development, which limits institutional effectiveness in adopting AI technologies.
Stakeholder-Specific Needs
Effective AI literacy frameworks must address the distinct yet interconnected needs of students, faculty, and administrative staff, each of whom requires role-specific competencies for successful AI integration.
Students
Students must develop foundational competencies to effectively integrate AI into their academic tasks and professional development. Tsai et al. (2023) emphasize the significance of academic integration capabilities, which include critical evaluation skills for research and writing. Holmes et al. (2022) highlight the importance of ethical application literacy as a means of maintaining academic integrity in a learning environment enhanced by AI. Furthermore, career readiness necessitates equipping students with AI-enhanced skills to successfully navigate a technology-driven workforce (Guan et al., 2023).
Faculty
Faculty members encounter distinct challenges in incorporating AI into their teaching, assessment, and research practices. Holmes et al. (2022) highlight the necessity for skills in pedagogical integration, particularly when designing learning experiences that utilize AI. As assessment practices evolve, faculty must be proficient in adjusting traditional evaluation methods to fit AI-supported environments (García-Peñalvo et al., 2023). Additionally, Luckin et al. (2022) stress the importance of research application skills, including data analysis and the adaptation of methodologies, as essential for faculty to achieve AI literacy. Faculty also need training to assist students in understanding the implications of AI in education (Hwang & Chen, 2023).
Administrative Staff
Administrative staff, often overlooked, play a crucial role in the integration of AI within institutions. Guan et al. (2023) emphasize the importance of operational efficiency in AI-enhanced processes, such as data management and workflow optimization. Additionally, staff members need to develop competencies that support both the ethical and technical aspects of AI implementation (Jobin et al., 2023). As institutions establish comprehensive AI governance frameworks, it is essential for staff to gain expertise in policy implementation and risk management, particularly regarding data protection and compliance (UNESCO, 2023; Trust & Whalen, 2023).
Implementation Challenges
Implementing AI literacy frameworks in higher education faces several challenges. Resource limitations often restrict institutions' ability to balance investments in technology with the necessary training for staff (Jobin et al., 2023). Institutional policies that prioritize traditional teaching methods can obstruct the swift adaptation needed for effective AI integration (García-Peñalvo et al., 2023). Additionally, there is a cultural divide among stakeholders; while early adopters are generally eager to embrace AI, those with traditional views often express skepticism (Luckin et al., 2022). Discrepancies in technical infrastructure further impact the scalability and sustainability of implementation efforts, particularly in settings with limited resources (Hwang & Chen, 2023).
Emerging Trends and Future Directions
AI literacy frameworks are increasingly focusing on integration and adaptability. Integrated approaches that align competencies across different stakeholder groups are becoming more prominent, as highlighted by UNESCO (2023). Multi-dimensional assessments, which combine traditional evaluation methods with AI-enhanced techniques, are now essential for tracking long-term skill development (Chen et al., 2023). The incorporation of ethical AI principles is also a growing priority, as it addresses concerns about responsible implementation (Jobin et al., 2023). Additionally, adaptive learning pathways are creating personalized experiences for students, faculty, and staff, which enhances engagement and effectiveness (Tsai et al., 2023). This study builds upon these trends by providing a systematic and validated methodology tailored to various institutional contexts.
International Perspectives on AI Literacy Development
Global efforts to develop AI literacy frameworks show regional differences in approach and focus. In Asia, countries like Singapore and China have integrated national policies with institutional frameworks, placing a strong emphasis on ethical AI alongside technical skills (Tan & Hew, 2022; Zhang et al., 2023). In Africa, particularly in South Africa and Nigeria, there has been innovation within resource constraints, leading to scalable and sustainable frameworks (Mkhize & Msweli, 2023; Olaleye & Sanusi, 2023). Emerging markets such as India and Brazil have also developed scalable frameworks that balance technical skills with societal objectives (Kumar & Sharma, 2023; Santos & Lima, 2023).
Despite rapid technological advancements, there is a significant gap between research and practice in the MENA region. This region faces challenges due to limited documentation of systematic frameworks and a tendency to prioritize rapid implementation over theoretical development (Olaleye & Sanusi, 2023). This study aims to address this gap by providing a validated methodology tailored to the unique educational and cultural context of the MENA region.
3. Methodology
3.1. Research Design
This study utilizes a sequential mixed-methods approach to develop and validate the AI Literacy Framework (ALiF). The methodology combines qualitative and quantitative techniques across three phases: developing the framework foundation, refining it, and validating the final product. This design provides a comprehensive understanding of stakeholder needs and results in a robust, adaptable framework that is suitable for various institutional contexts (Chan et al., 2023).
3.2. Phase 1: Framework Foundation Development
The initial phase focuses on establishing the theoretical and practical foundations of the framework using two primary approaches:
Systematic Literature Review: We will conduct a comprehensive review of existing AI literacy frameworks, competency models, and implementation strategies, following the methodology outlined by Zawacki-Richter et al. (2023). This review will cover academic databases such as Web of Science, Scopus, and ERIC, focusing on publications from 2019 to 2024.
Expert Consultation: In line with Kumar and Sharma's (2023) approach to framework development, we will engage an international panel of experts through semi-structured interviews. The expert panel will consist of educators, AI specialists, educational technologists, and policymakers, ensuring a diverse range of perspectives in the framework's development.
3.3. Phase 2: Framework Refinement
The refinement phase applies an iterative development process through two channels:
Delphi Study: Following the methodology established by Park and Kim (2022), we will conduct a three-round Delphi study involving 30 international experts. The selection criteria will include a minimum of five years of experience in AI education, published research in the field, and current involvement in AI literacy initiatives.
Stakeholder Focus Groups: In line with the effective approach demonstrated by Mkhize and Msweli (2023), we will organize focus groups with key stakeholders, including students, faculty, and staff. Each group will consist of 8-10 participants from diverse disciplinary backgrounds, ensuring comprehensive feedback on the framework components.
3.4. Phase 3: Framework Validation
The validation phase combines pilot implementation with mixed-methods assessment:
Pilot Implementation: Following the implementation model proposed by Tan and Hew (2022), the framework will be piloted at three partner institutions, each representing a different institutional context. The pilot will last for one academic semester, with systematic data collection conducted throughout this period.
Mixed-Methods Assessment: Based on the evaluation approach suggested by Santos and Lima (2023), we will utilize both quantitative and qualitative assessment methods:
Quantitative Assessment: Pre- and post-implementation surveys will measure competency development, implementation effectiveness, and stakeholder satisfaction (an illustrative analysis sketch follows this list).
Qualitative Evaluation: We will conduct semi-structured interviews, utilize observation protocols, and carry out document analysis to evaluate the framework's effectiveness and identify any implementation challenges.
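To make the quantitative arm concrete, the sketch below shows one plausible pre/post analysis. The protocol does not prescribe specific statistics, so the paired t-test, the standardized mean gain (Cohen's d for paired data), and the simulated 5-point composite scores are assumptions for illustration, not the study's confirmed analysis plan.

```python
# Hedged sketch of the pre/post competency analysis. The protocol does not
# name specific statistics; a paired t-test with Cohen's d (d_z) is assumed.
import numpy as np
from scipy import stats

def pre_post_analysis(pre: np.ndarray, post: np.ndarray) -> dict:
    """Paired comparison of competency scores for the same participants."""
    diff = post - pre
    t, p = stats.ttest_rel(post, pre)          # paired-samples t-test
    d = diff.mean() / diff.std(ddof=1)         # standardized mean gain (d_z)
    return {"mean_gain": diff.mean(), "t": t, "p": p, "d_z": d}

# Hypothetical example: composite 5-point Likert scores for 40 participants.
rng = np.random.default_rng(0)
pre = rng.normal(3.0, 0.5, size=40)
post = pre + rng.normal(0.4, 0.3, size=40)     # simulated gain, for illustration
print(pre_post_analysis(pre, post))
```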
3.5. Data Collection
Data collection follows rigorous protocols established in educational research.
Systematic Literature Review Data Collection
The systematic literature review follows a structured protocol adapted from Zawacki-Richter et al. (2023). The search strategy covers six major academic databases: Web of Science, Scopus, ERIC, MEDLINE, IEEE Xplore, and ACM Digital Library. The search terms combine "artificial intelligence" or "AI" with concepts related to literacy, competency, and higher education, as validated by Kumar and Sharma (2023).
The review covers publications from 2019 to 2024, focusing on empirical studies, framework developments, and implementation reports. Following the methodology of Chan et al. (2023), the inclusion criteria require that publications be peer-reviewed and in English, although significant non-English publications with English abstracts will also be considered.
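As a concrete illustration of this search strategy, the sketch below composes the Boolean query described above. The wildcard expansions and any terms beyond those named in the protocol (artificial intelligence/AI, literacy, competency, higher education) are assumptions; each database applies its own field tags and operator syntax.

```python
# Illustrative construction of the review's Boolean search string.
# Wildcards and the "universit*" context expansion are assumptions
# for this sketch, not the protocol's confirmed term list.

AI_TERMS = ['"artificial intelligence"', 'AI']
CONCEPT_TERMS = ['literacy', 'competenc*']            # competency/competencies
CONTEXT_TERMS = ['"higher education"', 'universit*']  # assumed expansion

def build_query(*term_groups: list[str]) -> str:
    """OR terms within each group, then AND the groups together."""
    grouped = ["(" + " OR ".join(group) + ")" for group in term_groups]
    return " AND ".join(grouped)

query = build_query(AI_TERMS, CONCEPT_TERMS, CONTEXT_TERMS)
print(query)
# ("artificial intelligence" OR AI) AND (literacy OR competenc*) AND ...
# The 2019-2024 date filter is applied in each database's own interface.
```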
Expert Consultation Data Collection
Expert consultation uses a structured interview protocol based on the framework developed by Park and Kim (2022) for educational technology research. A purposive sample of thirty experts is selected against explicit criteria.
According to the methodology outlined by Tan and Hew (2022), participants must meet the following qualifications: at least five years of experience in AI education, active research or practice in AI literacy development, and current involvement in institutional AI implementation.
Semi-structured interviews, lasting 60 to 90 minutes, are conducted through video conferencing platforms (Teams/Zoom). The interview protocol covers framework components, implementation challenges, and validation strategies. Following the recommendations of Mkhize and Msweli (2023), interviews are recorded, transcribed, and validated by the participants.
Delphi Study Data Collection
The Delphi study is structured into three rounds, as outlined by Zhang et al. (2023). In the first round, open-ended questions are used to gather input on the framework components and implementation strategies. The second round presents the consolidated findings for participants to rate and provide comments on. The third round aims to build consensus regarding the elements of the framework.
Data collection takes place through secure online platforms, adhering to standardized response timeframes established by previous research (Olaleye & Sanusi, 2023). After each round, the responses are analyzed using systematic coding procedures, with a focus on developing consensus and addressing differing viewpoints.
Focus Group Data Collection
Focus group sessions adhere to protocols developed by Santos and Lima (2023) specifically for educational technology research. Separate sessions are held for each stakeholder group: students, faculty, and staff. Each session includes 8 to 10 participants and lasts approximately 90 minutes.
The session protocols involve guided discussions on various framework components, implementation challenges, and assessment methods. Following the methodology outlined by Janse van Rensburg and Goede (2022), the sessions are recorded, transcribed, and analyzed using standardized coding procedures.
Pilot Implementation Data Collection
The pilot implementation data collection follows a comprehensive protocol developed by Nwana et al. (2023). The process for collecting quantitative data includes:
Pre-implementation Baseline Assessments: These assess current AI literacy levels using validated survey instruments adapted from Kumar and Sharma (2023).
Regular Progress Monitoring: This involves using standardized assessment tools to track the development of competencies across framework components, as outlined by Chan et al. (2023).
Post-implementation Evaluation: This evaluates the framework's effectiveness, stakeholder satisfaction, and the overall success of the implementation, following the protocols established by Faruqe et al. (2021).
For qualitative data collection during the pilot implementation, the following methods are used:
Implementation Journals: Participating institutions maintain journals that document challenges, adaptations, and successes, following templates developed by Park and Kim (2022).
Regular Observation Sessions: Conducted by trained researchers, these sessions use structured observation protocols adapted from Tan and Hew's (2022) implementation studies.
Periodic Stakeholder Interviews: These interviews explore the experiences of stakeholders during the implementation process and are conducted at key points throughout the pilot phase, following the interview protocols established by Mkhize and Msweli (2023).
3.6. Data Analysis
Systematic Literature Review Analysis
Following the methodology outlined by Zawacki-Richter et al. (2023), the analysis of the literature review data utilizes a structured coding approach:
Initial Screening: The analysis begins with descriptive coding of the identified literature using NVivo software. Papers are categorized based on research type, methodology, geographical context, and framework components according to the classification system proposed by Kumar and Sharma (2023).
Content Analysis: Thematic analysis is conducted using the two-cycle coding process developed by Chan et al. (2023). In the first cycle, key themes and components of AI literacy frameworks are identified. The second cycle synthesizes the relationships between these components and their implementation approaches.
Framework Synthesis: Utilizing the constant comparative method described by Park and Kim (2022), the findings are synthesized into preliminary framework structures, with a specific focus on progression pathways and differentiation among stakeholders.
Expert Consultation Analysis
The analysis of interview data adheres to a rigorous qualitative methodology inspired by Tan and Hew's (2022) approach:
Transcription and Verification: Interviews are professionally transcribed, and participants verify the transcripts for accuracy. NVivo software is used to organize the data and facilitate initial coding.
Thematic Analysis: Following the protocol established by Mkhize and Msweli (2023), the analysis is conducted in three stages:
Open Coding: Identification of key concepts and recommendations.
Axial Coding: Establishing relationships among the identified concepts.
Selective Coding: Developing core theoretical categories.
Inter-Rater Reliability: Two independent researchers code 25% of the data to establish reliability. Any disagreements in coding are resolved through consensus discussions.
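The protocol specifies double coding of 25% of the data but does not name a reliability statistic; Cohen's kappa is a common choice and is assumed in the minimal sketch below. The code labels shown are hypothetical.

```python
# Minimal inter-rater reliability check on the double-coded 25% subset.
# Cohen's kappa is an assumed choice; the thematic labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

coder_a = ["pedagogy", "ethics", "ethics", "assessment", "pedagogy", "ethics"]
coder_b = ["pedagogy", "ethics", "assessment", "assessment", "pedagogy", "ethics"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # ~0.80+ is commonly read as strong agreement
```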
Delphi Study Analysis
The analysis of Delphi data combines both quantitative and qualitative approaches, as outlined by Zhang et al. (2023):
Round One: Open-ended responses are analyzed qualitatively using thematic analysis techniques. Codes are developed inductively and validated by the research team.
Round Two: Ratings are analyzed statistically (a computational sketch follows Round Three below), including:
- Measures of central tendency
- Consensus calculations using the interquartile range
- Stability analysis between rounds
Round Three: The final analysis integrates statistical consensus measures with qualitative feedback, following the integration protocol established by Olaleye and Sanusi (2023).
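A minimal sketch of the Round Two consensus metrics and between-round stability check follows. The IQR ≤ 1 consensus rule on a 5-point scale and the 0.1 stability tolerance are assumed conventions; the protocol does not fix these thresholds.

```python
# Sketch of Delphi Round Two consensus metrics and between-round stability.
# The IQR <= 1 rule (5-point scale) and the 0.1 tolerance are assumptions.
import numpy as np
import pandas as pd

def delphi_summary(ratings: pd.DataFrame, iqr_threshold: float = 1.0) -> pd.DataFrame:
    """ratings: rows = experts, columns = framework items (5-point scale)."""
    q1, q3 = ratings.quantile(0.25), ratings.quantile(0.75)
    summary = pd.DataFrame({"median": ratings.median(),
                            "mean": ratings.mean(),
                            "iqr": q3 - q1})
    summary["consensus"] = summary["iqr"] <= iqr_threshold
    return summary

def stability(prev_round: pd.DataFrame, next_round: pd.DataFrame,
              tol: float = 0.1) -> pd.Series:
    """Flag items whose mean rating shifts by less than `tol` between rounds."""
    return (next_round.mean() - prev_round.mean()).abs() < tol

# Hypothetical example: 30 experts rating three framework items.
rng = np.random.default_rng(1)
round2 = pd.DataFrame(rng.integers(3, 6, size=(30, 3)),
                      columns=["item_1", "item_2", "item_3"])
print(delphi_summary(round2))
```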
Focus Group Analysis
Focus group data analysis follows the systematic approach outlined by Santos and Lima (2023):
Content Analysis: Transcripts are analyzed in detail using NVivo software. Coding schemes are developed inductively from the data and validated against established theoretical frameworks.
Cross-group Analysis: Comparative analyses of different stakeholder groups identify common themes as well as concerns specific to each group.
Validation: Member checking and peer debriefing are conducted to ensure the validity of the analysis.
5. Discussion
5.1. Framework Development Considerations
The AI Literacy Framework (ALiF) protocol has been created to fill significant gaps in current educational research and practice. As highlighted by Zawacki-Richter et al. (2023), the rapid integration of AI technologies in higher education requires systematic approaches to developing literacy. This protocol study provides a structured methodology to address this need, while also acknowledging the complex relationship between technical skills, critical evaluation, and ethical considerations.
The proposed multi-stakeholder approach addresses the limitations identified by Chan et al. (2023) in existing frameworks. By incorporating the perspectives of students, faculty, and staff from the outset, this protocol enables the creation of a more comprehensive and practical framework. This inclusive approach is in line with the findings of Kumar and Sharma (2023), which emphasize the importance of stakeholder engagement when implementing educational technology.
5.2. Methodological Implications
The mixed-methods design outlined in this protocol aims to address the methodological limitations identified in previous attempts to develop frameworks. Park and Kim (2022) emphasize the importance of combining rigorous qualitative research with quantitative validation, a strategy this protocol adopts through its three-phase design. By incorporating expert consultations and stakeholder feedback, this protocol aligns with Tan and Hew's (2022) call for a more balanced approach to framework development.
The iterative development process includes multiple validation stages, which enhance the protocol's reliability. This approach follows Santos and Lima's (2023) recommendations for developing educational frameworks, especially in rapidly changing technological environments. Furthermore, the emphasis on continuous refinement throughout the development process addresses concerns Mkhize and Msweli (2023) raised regarding frameworks' adaptability and sustainability.
5.3. Implementation Challenges
Several potential challenges should be considered when implementing the protocol. According to Janse van Rensburg and Goede (2022), the resource requirements for developing a comprehensive framework may present obstacles for some institutions. To address this issue, the protocol provides scalable implementation options and clear guidelines for resource allocation.
Additionally, institutional resistance to systematic change is a common barrier to educational innovation that must be taken into account (Zhang et al., 2023). The protocol emphasizes stakeholder engagement and suggests a phased implementation approach to help overcome this challenge; ultimately, the success of these strategies will depend on the specific institutional context and the support of its leadership.
5.4. Cultural and Regional Considerations
The application of the protocol in different cultural contexts requires careful consideration. Research from the MENA region indicates that technological implementation frameworks must consider local educational traditions and values. The protocol's flexible design allows for cultural adaptation while maintaining essential methodological rigor.
Regional differences in technological infrastructure and resource availability—highlighted by Olaleye and Sanusi (2023)—necessitate adaptable implementation strategies. The protocol’s tiered approach to framework development enables institutions to align their implementations with their specific contexts and capabilities.
5.5. Future Directions
This protocol study opens several opportunities for future research and development. As suggested by Nwana et al. (2023), the evolution of the framework will require ongoing investigation into emerging AI technologies and their applications in education. Long-term studies assessing the framework's effectiveness and impact will be crucial for continuous improvement and adaptation.
Future research should explore the framework's applicability across various higher education institutions, disciplines, and cultural contexts. Additionally, it is essential to investigate the framework's impact on student learning outcomes, faculty development, and institutional effectiveness to validate and refine the approach.
5.6. Practical Implications
The protocol's practical implications extend beyond framework development to institutional policy, professional development, and educational practice. According to Raman and Satish (2022), systematic approaches to enhancing AI literacy can significantly improve an institution's preparedness for technological change. This protocol provides a foundation on which institutions can build comprehensive AI literacy strategies tailored to their needs and capabilities.