1. Introduction and Literature Review
The recent dissemination of Artificial Intelligence (AI) to the general public has prompted studies on its application in everyday life. The growing impact of AI on humanity is unavoidable, and it is therefore extremely important to understand what it is and what it can do. The set of skills involved in using, applying and interacting with AI is currently called “AI literacy”.
The importance of this topic arises, from the outset, in the field of education (AI in education, AIED), where this technology is having a huge impact on the educational community and on the education system itself. Research on AIED searches for solutions that can add value to the teaching-learning process, supporting teachers and students and highlighting the human factor: thinking skills, teamwork and flexibility, knowledge management, ethics and responsibility [1].
The present study seeks to assess the level of AI literacy of teachers at Portalegre Polytechnic University (PPU), aiming to identify gaps and the main opportunities for innovation and development, so that the education system can adopt AIED as an ally in promoting higher-quality education that is better prepared for the challenges of the future. As specific objectives, we seek to assess the degree of relationship between the dimensions of AI literacy and to identify its predictor factors.
The scientific term, denoting the science of intelligent machines, dates back to 1956 according to [2,3]. This was followed, during the 1980s, by a great development of intellectual skills in machines, as well as the first attempts to replicate the teaching process using AI [1]. [3] stated that the entire education system should be reviewed, not only to make it more practical, but also to open it to the world of work and to anticipate transformations in knowledge. AIED began as a field of recreation and research for computer scientists, with a great impact on education [4], and today fuels the controversy referred to by [5] regarding the use of AIED and the fear that the machine will replace the teacher [6,7].
The acquisition and development of digital skills are seen as essential tools to facilitate lifelong learning, and are therefore one of the main economic concerns in most developed countries. “Literacy”, the ability to read and write and to perceive and interpret what is read, gained an important nuance when it became clear that, despite being able to read and write, some people are unable to understand the meaning of what they read. In the field of information and communication technologies (ICT), digital literacy has been studied in depth, but there is still no consensual definition, because the ability to use a computer is now an insufficient measure of digital literacy [8].
When AI is introduced into the concept of digital literacy, the scenario becomes even more complex. According to [9], AI literacy is more than knowing how to use AI-driven tools, as it involves lower- and higher-level thinking skills to understand the knowledge and capabilities behind AI technologies and to make work easier. For this author, it will not be possible to adequately understand this technology as long as we insist on considering it only as knowledge and skills, since the development of AI literacy and its responsible use also involve attitudes and moral decision-making. According to [10,11], AI literacy is composed of different competencies that enable individuals to critically evaluate the use of these technologies, to communicate and collaborate with AI, and to use it in different contexts; its objective is to describe the skills necessary for a basic understanding of AI.
In turn, [12] states that definitions of AI literacy differ in the exact number and configuration of skills they entail and, referring to [13], indicates that conceptualizations of AI literacy in education can be organized into four concepts: (1) knowing and understanding AI, (2) using and applying AI, (3) evaluating and creating AI, and (4) AI ethics. For that author, the vast majority of conceptualizations of AI literacy parallel Bloom’s taxonomy in their general configuration of skills. Since this taxonomy underlies countless formulations of competences in schools and universities, this correspondence is of enormous importance for AIED.
It is still difficult to measure AI literacy. Four published scales are currently used for this purpose, three of which are not school-focused but can be used for more general measurement. Because they are not based on established theoretical models of competences, the interpretation of the latent factors of these scales seems arbitrary [12]. In fact, these authors developed a new measuring instrument based on the existing literature on AI literacy, which is modular, meets psychometric requirements and includes other psychological skills in addition to the classical facets of AI literacy.
Although it is not objectively clear how the development of AI can be applied to education systems, enthusiasm is growing, with excessive optimism regarding its potential to transform current education systems [14]. [4] sought to identify potential aspects of threat, excitement and promise in AIED and highlighted the importance of traditional pedagogical values, such as skepticism, arguing that the ultimate goal of education should be to promote responsible citizens and healthy, educated minds. The adoption of ethical frameworks for the use and development of AIED is therefore extremely important, ensuring that they are continually discussed and updated in light of the rapid development of AI techniques and their potential for widespread application [15].
At the same time, a set of questions must be carefully considered and comprehensively addressed as soon as possible: “What will be the future role of the teacher, and other school personnel, in education with AI systems? And how does this align with our beliefs or pedagogical theories? Do educational leaders and teachers have enough knowledge in the field of AI to distinguish a poorly developed system from a good one? Or how to apply them appropriately in the education context? Furthermore, how can we protect student and teacher data when the skills and knowledge to develop AIED systems are in the hands of for-profit organizations and not in the education sector?” In particular, the issue of aligning AI with pedagogical theory must remain on the table, as any new technology integrated into education must be designed to fill a pedagogical need [4].
The widely reported and recognized need for AI regulation is leading to new steps in this direction. On October 26, 2023, the Secretary-General of the United Nations (UN), António Guterres, launched a high-level multisectoral advisory body on AI to identify risks, challenges and main opportunities, while more recently the Spanish presidency of the EU Council announced that the EU co-legislators, the Council and the European Parliament, had reached a provisional agreement on the world’s first rules for AI, advancing the preparation of a regulation aiming to ensure that AI in use in the EU is safe and respects European rights and values [16]. In Portugal, the most recent document with official recommendations for the use of AI is a guide for ethical, transparent and responsible Artificial Intelligence in Public Administration, published in July 2022 by the Agency for Administrative Modernization [17]. This document advances a structuring conceptualization of ethical, responsible and transparent AI, identifies barriers, challenges and dangers, and presents recommendations and a risk assessment tool. Despite its very complete content, its level of dissemination and its effective contribution to AI literacy in Portuguese public administration remain unknown.
Although research into AIED has at its heart the desire to support student learning, experience from other areas of AI suggests that this ethical intention is not, in itself, sufficient [18,19,20,21]. There is a need to consider issues such as equity, responsibility, transparency, bias, autonomy and inclusion, and also to distinguish between “doing ethical things” and “doing things ethically”, in order to make pedagogical choices that are ethical and take into account the ever-present possibility of unintended consequences [22,23,24]. In this context, [25] recognizes that most AIED researchers do not have the training to deal with emerging ethical issues.
Indeed, [26] suggests some principles for ethical and reliable AIED that should be considered, namely:
i) Governance and management principle: AIED governance and management must take into account interdisciplinary and multi-stakeholder perspectives, as well as all ethical considerations from relevant domains, including, among others, data ethics, learning analytics ethics, computational ethics, human rights and inclusion;
ii) Principle of transparency of data and algorithms: the process of collecting, analyzing and communicating data must be transparent, with informed consent and clarity about data ownership, accessibility and the objectives of its use;
iii) Accountability principle: AIED regulation must explicitly address recognition and responsibility for the actions of each stakeholder involved in the design and use of systems, including the possibility of auditing, the minimization and communication of negative side effects, trade-offs and compensation;
iv) Principle of sustainability and proportionality: AIED must be designed, developed and used in a way that does not disrupt the environment, the global economy and society, namely the labor market, culture and politics;
v) Privacy principle: AIED must guarantee the user’s informed consent and maintain the confidentiality of user information, both when they provide information and when the system collects information about them;
vi) Security principle: AIED must be designed and implemented to ensure that the solution is robust enough to effectively safeguard and protect data against cybercrime, data breaches and corruption threats, ensuring the privacy and security of sensitive information;
vii) Safety principle: AIED systems must be designed, developed and implemented according to a risk management approach, in order to protect users from unintentional and unexpected harm and to minimize serious incidents;
viii) Principle of inclusion and accessibility: the design, development and implementation of AIED must take into account infrastructure, equipment, skills and social acceptance, allowing equitable access to and use of AIED;
ix) Human-Centered AIED Principle: The aim of AIED should be to complement and enhance human cognitive, social and cultural capabilities, while preserving meaningful opportunities for freedom of choice and ensuring human control over AI-based work processes.
The remainder of the paper is organized as follows: Section 2 presents the materials and methods, in particular the questionnaire; Section 3 presents the results; Section 4 discusses the results and concludes the analysis.
2. Materials and Methods
Despite the large number of studies on AI literacy produced to date, its measurement is still complex. The difficulties of conceptualization, together with the fact that many articles on the subject originate in an educational context, limit the development of measurement scales and therefore their adoption in different contexts.
[12] developed a measuring instrument that builds on the existing literature on AI literacy. It is modular (including distinct facets that can be used independently), easily applicable to professional life, meets psychometric requirements, includes other psychological skills besides the classic facets of AI literacy, and has been tested for its factorial structure. Therefore, the questionnaire by [12] was applied in this study, adapted to the Portuguese language. It consists of 29 questions, covering three dimensions (AI Literacy, AI Self-Efficacy and AI Self-Management), measured on a 5-point Likert scale (from 1 = “totally disagree” to 5 = “completely agree”).
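As an illustration of the kind of internal-consistency check implied by “meets psychometric requirements”, the sketch below computes Cronbach’s alpha for the items of a single factor. This is a generic reliability statistic, not necessarily the one used by [12]; the function name and data layout are assumptions made for the example.

```python
# Minimal sketch: Cronbach's alpha for a block of Likert items belonging to one
# factor. Illustrative only; not the original authors' analysis code.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items), with values 1-5."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the factor
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```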
In the first dimension (AI Literacy), using and applying AI means, according to [13], applying knowledge, concepts and applications of AI in different scenarios, and implies understanding AI applications and how they can affect one’s life. In turn, the knowing and understanding AI factor means knowing the basic functions of AI and knowing how to use its applications, covering the acquisition of fundamental concepts, skills, knowledge and attitudes that require no prior knowledge, as well as understanding the techniques and basic concepts underlying AI in different products and services. The AI Ethics factor, advanced by the same author, refers to human-centered considerations (for example, equity, responsibility, transparency, ethics and security), thus incorporating knowledge of ethical issues relating to AI technologies. Still within this dimension, the Detecting AI factor, according to [10,27], means distinguishing between technological devices that do and do not use AI.
The second dimension (AI Self-Efficacy) integrates the Problem Resolution factor which, according to [28], means voluntary behavior aimed at solving problems, based on belief in the advantages of behavioral success, external approval and the level of control over internal and external factors. The Learning factor, according to [29], means understanding how AI learns and can be affected by data, that is, having a basic understanding of how AI and machine learning work, as well as knowledge of the implications of data quality, feedback and one’s own interaction data. Still on this factor, [30] includes skills that allow the development of adaptive knowledge to take advantage of self-learning and technological evolution, while [31] adds the level of readiness for AI.
The third and final dimension (AI Self-Management) integrates the AI Persuasion Literacy factor which, according to [29], means understanding how the human-like characteristics of AI systems can unconsciously manipulate users’ perceptions and behaviors, and thus being able to counter such attempts at influence. According to the same author, the Emotion Regulation factor means the constructive management of negative emotions (such as frustration and anxiety) when interacting with AI systems.
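To make the structure of the instrument concrete, the sketch below shows one way of aggregating item responses into the factor and dimension scores discussed in the Results section. The item-to-factor allocation shown is purely illustrative (the real allocation is given in Table A1), and the column names are assumptions made for the example.

```python
# Minimal scoring sketch for the 29-item, three-dimension instrument; the item
# groupings below are illustrative placeholders, not the allocation in Table A1.
import pandas as pd

FACTORS = {
    "Using and applying AI":        [f"Q{i}" for i in range(1, 5)],
    "Knowing and understanding AI": [f"Q{i}" for i in range(5, 9)],
    "Detecting AI":                 [f"Q{i}" for i in range(9, 12)],
    "AI Ethics":                    [f"Q{i}" for i in range(12, 16)],
    "Problem Resolution":           [f"Q{i}" for i in range(16, 20)],
    "Learning":                     [f"Q{i}" for i in range(20, 24)],
    "AI Persuasion Literacy":       [f"Q{i}" for i in range(24, 27)],
    "Emotion Regulation":           [f"Q{i}" for i in range(27, 30)],
}
DIMENSIONS = {
    "AI Literacy": ["Using and applying AI", "Knowing and understanding AI",
                    "Detecting AI", "AI Ethics"],
    "AI Self-Efficacy": ["Problem Resolution", "Learning"],
    "AI Self-Management": ["AI Persuasion Literacy", "Emotion Regulation"],
}

def score(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per respondent, columns Q1..Q29 with values 1-5."""
    factor_scores = pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in FACTORS.items()}
    )
    dimension_scores = pd.DataFrame(
        {name: factor_scores[facs].mean(axis=1) for name, facs in DIMENSIONS.items()}
    )
    dimension_scores["Overall AI literacy"] = dimension_scores.mean(axis=1)
    return factor_scores.join(dimension_scores)
```

Averaging items into factors and factors into dimensions is only one common convention; the paper does not state whether items or factors are weighted differently.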
The present sample was made up of 75 teachers from the various schools of PPU, out of a total of 275 teachers, corresponding to just over a quarter of the population. Respondents are aged from 25 to over 50, with 70.7% being 45 or over. Forty-one participants were female (54.7%), 33 were male (44.0%) and 1 chose not to specify (1.3%). The largest proportion of participants came from the School of Technology, Management and Design (40.0%), followed by the School of Health (29.3%), the School of Education and Sciences (22.7%) and the School of Biosciences of Elvas (8.0%). It is noteworthy that most participants teach in more than one study cycle: 25.3% teach on higher technical courses and bachelor’s degrees, 22.7% on bachelor’s and master’s degrees, 21.3% only on bachelor’s degrees, and 20.0% in all three study cycles (higher technical courses, bachelor’s and master’s degrees). The main areas of basic training of participants are health (29.3%), social and behavioral sciences (13.3%) and business sciences (12.0%).
The instrument used in the present study, designed by [12], covers three dimensions grounded in the existing literature (AI Literacy, AI Self-Efficacy and AI Self-Management), each containing more than one descriptor factor. The full questionnaire can be consulted in Table A1.
3. Results
The results of applying the questionnaire, presented in Table 1, reveal an average level of AI literacy (3.28), with 62.4% of responses at levels 3 and 4. The AI Literacy dimension recorded the highest average response level (3.56); within it, the Using and applying AI factor had the highest average response level (3.85), followed by the AI Ethics factor with 3.73, Knowing and understanding AI with 3.46 and Detecting AI with 3.20.
In turn, the AI Self-Efficacy dimension obtained the lowest average response level (2.86), with Learning being the factor with the lowest average response level (2.49); within this factor, 65.4% of respondents answered at levels 2 and 3.
Within the AI Self-Management dimension, which obtained an average response level of 3.41, the AI Persuasion Literacy factor obtained an average response of 3.27 and Emotion Regulation an average of 3.55.
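For reference, descriptive figures of the kind reported above (mean response level per dimension and factor, and the share of answers at given scale points) can be obtained with a few lines of standard data analysis code. The sketch below assumes a hypothetical tidy table of responses; it is not the authors’ analysis script.

```python
# Minimal sketch: descriptive statistics of the kind reported in Table 1, assuming
# `responses` is a tidy DataFrame with one row per respondent-item answer and the
# columns "dimension", "factor" and "value" (1-5). Illustrative only.
import pandas as pd

def describe(responses: pd.DataFrame) -> None:
    dimension_means = responses.groupby("dimension")["value"].mean().round(2)
    factor_means = responses.groupby(["dimension", "factor"])["value"].mean().round(2)
    share_3_4 = responses["value"].isin([3, 4]).mean() * 100  # % of answers at levels 3-4
    print("Mean response level by dimension:\n", dimension_means)
    print("Mean response level by factor:\n", factor_means)
    print(f"Share of responses at levels 3 and 4: {share_3_4:.1f}%")
```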
A more detailed analysis of the mean and standard deviation of each question is presented in Appendix B (Table A2), with mean values ranging from 2.40 (“Despite the rapid changes in the field of artificial intelligence, I can always keep up to date”) to 4.25 (“I can operate AI applications in everyday life”).
4. Discussion and Conclusions
The objective of this study was to assess the AI literacy of PPU teachers, in order to identify gaps and the main opportunities for innovation and development, specifically seeking to assess the degree of relationship between the dimensions of AI literacy and to identify possible predictive variables.
The results of the questionnaire revealed an overall average level of AI literacy in the sample, indicating that it is desirable to implement strategies to develop faculty skills in AI, given the growing impact of AIED on the education community and on the education system itself. A higher level of AI literacy will make it possible to find and implement better solutions that add value to the teaching-learning process through AI technologies while simultaneously supporting teachers and students.
The correlations between the three dimensions studied allow us to conclude that the AI Literacy dimension, which integrates the factors of use and application, knowledge and understanding, detection and ethics, is the strongest predictor of the overall level of AI literacy in the sample; targeted development of these skills is therefore suggested to raise participants’ overall literacy. However, the below-average result for the Learning factor, which covers understanding how AI technologies work, the development of adaptive knowledge and the level of readiness for AI, points to a pressing need to develop these skills through awareness-raising policies and targeted training actions.
From the study of correlations, it was concluded that no factor characterizing the sample (age, gender, study cycle taught or area of training) is a predictor. Therefore, these factors do not explain the sample’s level of literacy, nor do they determine or limit it.
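The paper does not detail the exact statistical procedure behind these conclusions, so the sketch below is only one plausible way to reproduce the two analyses described: correlations among the dimension scores and the overall score, and a check of whether the demographic variables predict overall literacy. The column names (`overall`, `age_group`, `gender`, `study_cycle`, `training_area`, and the dimension columns) are assumptions made for the example.

```python
# Illustrative sketch, not the authors' analysis code: (i) Pearson correlations
# between dimension scores and the overall AI literacy score, (ii) an OLS
# regression testing whether sample characteristics predict the overall score.
import pandas as pd
import statsmodels.formula.api as smf

def analyse(df: pd.DataFrame) -> None:
    dims = ["ai_literacy", "ai_self_efficacy", "ai_self_management", "overall"]
    print(df[dims].corr(method="pearson").round(2))  # (i) correlation matrix

    # (ii) categorical demographic variables as candidate predictors of the overall score
    model = smf.ols(
        "overall ~ C(age_group) + C(gender) + C(study_cycle) + C(training_area)",
        data=df,
    ).fit()
    print(model.summary())  # non-significant coefficients suggest no predictive effect
```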
Also noteworthy are teachers’ higher levels in applying AI knowledge, concepts and applications in different scenarios, and their awareness of ethical issues relating to AI technologies (such as equity, responsibility, transparency, ethics and safety), which is consistent with the recent emergence of public and widespread use of AI applications within the education community.
The main limitations identified are the sample size, which prevents generalization, and the absence of other applications of the same measuring instrument with which to compare results. Future work is therefore suggested: applying the instrument to the student body at PPU and extending similar studies to Portuguese polytechnic higher education, looking for possible predictors in a broader educational community and identifying intervention priorities to increase AI literacy in academia in Portugal.