Literacy in Artificial Intelligence as a Challenge for Teaching in Higher Education: A Case Study at Portalegre Polytechnic University

A peer-reviewed article of this preprint also exists.

Submitted: 12 March 2024; Posted: 13 March 2024

Abstract
The growing impact of Artificial Intelligence (AI) on humanity is unavoidable, which makes “AI literacy” extremely important. In the field of education, AI in education (AIED) is having a huge impact on the educational community and on the education system itself. The present study assesses the level of AI literacy of teachers at Portalegre Polytechnic University (PPU), aiming to identify gaps, find the main opportunities for innovation and development, determine the degree of relationship between the dimensions of AI literacy, and identify the predictive variables in this matter. As a measuring instrument, a validated questionnaire based on three dimensions (AI Literacy, AI Self-Efficacy and AI Self-Management) was applied to a sample of 75 teachers across the schools of PPU. The results reveal an average level of AI literacy (3.28 on a 5-point Likert scale), with 62.4% of responses at levels 3 and 4. They also show that the first dimension (AI Literacy) is the one most strongly associated with the overall score, that no variable characterizing the sample is a predictor, and that the below-average result in the Learning factor indicates a pressing need to focus on developing these skills.
Keywords: 
Subject: Computer Science and Mathematics  -   Artificial Intelligence and Machine Learning

1. Introduction and Literature Review

The recent dissemination of Artificial Intelligence (AI) to the general public has promoted studies on its application in everyday life. The growing impact of AI on humanity is unavoidable, and it is therefore extremely important to understand what it is and what it can do. The set of skills that includes using, applying and interacting with AI is currently called “AI literacy”.
The importance of this topic arises, from the outset, in the field of education, AI in education (AIED), where this technology is having a huge impact on the educational community and on the education system itself. Studying the use of AIED is a search for solutions that can add value to the teaching-learning process, supporting teachers and students and highlighting the human factor: thinking skills, teamwork and flexibility, knowledge management, ethics and responsibility [1].
The present study seeks to assess the level of AI literacy of teachers at Portalegre Polytechnic University (PPU), aiming to identify gaps and find the main opportunities for innovation and development so that the education system can adopt AIED as an ally in promoting higher quality education better prepared for the challenges of the future. As specific objectives, we seek to assess the degree of relationship between the dimensions of AI literacy and identify the predictor factors.
The scientific term, referring to the science of intelligent machines, dates back to 1956 according to [2,3]. This was followed during the 1980s by a great development of intellectual skills in machines, as well as the first attempts to replicate the teaching process using AI [1]. [3] stated that the entire education system should be reviewed, not only to make it more practical, but also to make it more open to the world of work and able to anticipate transformations in knowledge. AIED began as a field of recreation and research for computer scientists, with a great impact on education [4], and today fuels the controversy referred to by [5] regarding the use of AIED and the fear that the machine will replace the teacher [6,7].
The acquisition and development of digital skills are seen as essential tools to facilitate lifelong learning, and are therefore one of the main economic concerns in most developed countries. “Literacy”, the ability to read and write and to perceive and interpret what is read, undergoes an important development when it becomes clear that, despite having the ability to write and read, some people are unable to understand the meaning of what they read. In terms of information and communication technologies (ICT), digital literacy has been studied in depth, but there is still no consensual definition, because the ability to use a computer is currently an insufficient measure to define digital literacy [8].
When AI is introduced into the concept of digital literacy, the scenario becomes even more complex. According to [9], AI literacy is more than knowing how to use AI-driven tools, as it involves lower and higher-level thinking skills to understand the knowledge and capabilities behind AI technologies and make work easier. For this author, it will not be possible to adequately understand this technology as long as we insist on considering it only as knowledge and skills, since AI literacy also involves attitudes and moral decision-making for its development and responsible use. According to [10,11], AI literacy is composed of different competencies that enable individuals to critically evaluate the use of these technologies, to communicate and collaborate with AI, and to use it in different contexts, its objective being to describe the skills necessary for a basic understanding of AI.
In turn, [12] states that definitions of AI literacy differ in terms of the exact number and configuration of skills they entail and, referring to [13], indicates that conceptualizations of AI literacy in education can be organized into four concepts: (1) knowing and understanding AI, (2) using and applying AI, (3) evaluating and creating AI, and (4) AI ethics. For that author, the vast majority of conceptualizations of AI literacy parallel Bloom’s taxonomy in their general configuration of skills. Considering that this taxonomy constitutes the basis of countless formulations of competences in schools and universities, this parallel is of enormous importance and relevance for AIED.
Measuring AI literacy is still difficult. Four published scales are currently used for this measurement, three of which are not school-focused but can be used for more general purposes. Because they are not based on established theoretical models of competences, the interpretation of their latent factors can seem arbitrary [12]. In fact, these authors developed a new measuring instrument based on the existing literature on AI literacy, which is modular, meets psychometric requirements and includes other psychological skills in addition to the classic facets of AI literacy.
Although it is not objectively clear how the development of AI can be applied to education systems, enthusiasm is growing, with excessive optimism regarding its potential to transform current education systems [14]. [4] sought to identify potential aspects of threat, excitement and promise in AIED, highlighted the importance of traditional pedagogical values such as skepticism, and argued that the ultimate goal of education should be to promote responsible citizens and healthy, educated minds. Therefore, the adoption of ethical frameworks for the use and development of AIED is extremely important, ensuring that it will be continually discussed and updated in light of the rapid development of AI techniques and their potential for widespread application [15].
At the same time, a set of questions must be carefully considered and comprehensively addressed as soon as possible: “What will be the future role of the teacher, and other school personnel, in education with AI systems? And how does this align with our beliefs or pedagogical theories? Do educational leaders and teachers have enough knowledge in the field of AI to distinguish a poorly developed system from a good one? Or how to apply them appropriately in the education context? Furthermore, how can we protect student and teacher data when the skills and knowledge to develop AIED systems are in the hands of for-profit organizations and not in the education sector?” In particular, the issue of aligning AI with pedagogical theory must remain on the table, as any new technology integrated into education must be designed to fill a pedagogical need [4].
The widely reported and recognized need for AI regulation has led to new steps in this direction. On October 26, 2023, the Secretary-General of the United Nations (UN), António Guterres, launched a high-level multisectoral advisory body on AI to identify risks, challenges and main opportunities, while more recently the Spanish presidency of the EU Council announced that the EU co-legislators, the Council and the European Parliament, had reached a provisional agreement on the world’s first rules for AI, advancing the preparation of a regulation aiming to ensure that AI used in the EU is safe and respects European rights and values [16]. In Portugal, the most recent document with official recommendations for the use of AI is a guide for ethical, transparent and responsible Artificial Intelligence in Public Administration, published in July 2022 by the Agency for Administrative Modernization [17]. This document advances a structuring conceptualization of ethical, responsible and transparent AI, identifies barriers, challenges and dangers, and presents recommendations and a risk-assessment tool. Despite its very complete content, its level of dissemination and its effective contribution to AI literacy in the Portuguese public administration are unknown.
Although research into AIED has at its heart the desire to support student learning, experience from other areas of AI suggests that this ethical intention is not, in itself, sufficient [18,19,20,21]. There is a need to consider issues such as equity, responsibility, transparency, bias, autonomy and inclusion, and also to distinguish between “doing ethical things” and “doing things ethically”, in order to understand and make pedagogical choices that are ethical and that take into account the ever-present possibility of unintended consequences [22,23,24]. In this context, [25] recognize that most AIED researchers do not have the training to deal with emerging ethical issues.
Indeed, [26] suggests some principles for ethical and reliable AIED that should be considered, namely:
i) Governance and management principle: AIED governance and management must take into account interdisciplinary and multi-stakeholder perspectives, as well as all ethical considerations from relevant domains, including, among others, data ethics, learning analytics ethics, computational ethics, human rights and inclusion;
ii) Principle of transparency of data and algorithms: the process of collecting, analyzing and communicating data must be transparent, with informed consent and clarity about data ownership, accessibility and the objectives of its use;
iii) Accountability principle: AIED regulation must explicitly address recognition and responsibility for the actions of each stakeholder involved in the design and use of systems, including the possibility of auditing, the minimization and communication of negative side effects, trade-offs and compensation;
iv) Principle of sustainability and proportionality: AIED must be designed, developed and used in a way that does not disrupt the environment, the global economy and society, namely the labor market, culture and politics;
v) Privacy principle: AIED must guarantee the user’s informed consent and maintain the confidentiality of user information, both when they provide information and when the system collects information about them;
vi) Security principle: AIED must be designed and implemented to ensure that the solution is robust enough to effectively safeguard and protect data against cybercrime, data breaches and corruption threats, ensuring the privacy and security of sensitive information;
vii) Safety principle: AIED systems must be designed, developed and implemented according to a risk management approach, in order to protect users from unintentional and unexpected harm and reduce the number of serious situations;
viii) Principle of Inclusion in Accessibility: the design, development and implementation of AIED must take into account infrastructure, equipment, skills and social acceptance, allowing equitable access and use of AIED;
ix) Human-Centered AIED Principle: The aim of AIED should be to complement and enhance human cognitive, social and cultural capabilities, while preserving meaningful opportunities for freedom of choice and ensuring human control over AI-based work processes.
The remainder of the paper is organized as follows: Section 2 presents the materials and methods, in particular the questionnaire; Section 3 presents the results; Section 4 discusses the results and concludes the analysis.

2. Materials and Methods

Despite the high number of studies produced on AI literacy to date, its measurement is still complex. The difficulties of conceptualization and the fact that many articles on the subject originate in an educational context limit the development of measurement scales and therefore their adoption in different contexts.
[12] developed a measuring instrument that builds on the existing literature on AI literacy. It is modular (including distinct facets that can be used independently), easily applicable to professional life, meets psychometric requirements, and includes other psychological skills besides the classic facets of AI literacy, having been tested for its factorial structure. Therefore, the questionnaire by [12] was applied in this study, adapted to the Portuguese language. It consists of 29 questions, organized into three dimensions (AI Literacy, AI Self-Efficacy and AI Self-Management) and measured using a 5-point Likert scale (from 1 = “totally disagree” to 5 = “totally agree”).
In the first dimension (AI Literacy), the Use and apply AI factor, according to [13], means applying knowledge, concepts and applications of AI in different scenarios, and implies understanding AI applications and how they can affect one’s life. In turn, the Know and understand AI factor means knowing the basic functions of AI and how to use its applications, covering the acquisition of fundamental concepts, skills, knowledge and attitudes that require no prior knowledge, as well as understanding the techniques and basic concepts underlying AI in different products and services. The AI Ethics factor, advanced by the same author, refers to human-centered considerations (for example, equity, responsibility, transparency, ethics and security), thus incorporating knowledge of ethical issues relating to AI technologies. Still in this dimension, the Detect AI factor, according to [10,27], means distinguishing between technological equipment that does and does not use AI.
The second dimension (AI Self-Efficacy) integrates the Problem Solving factor. According to [28], this means voluntary behavior aimed at solving problems, based on belief in the advantages of behavioral success, external approval and the level of control of internal and external factors. The Learning factor, according to [29], means understanding how AI learns and can be affected by data, that is, having a basic understanding of how AI and machine learning work, as well as knowledge of the implications of data quality, feedback and one’s own interaction data. Still on this factor, [30] integrates skills that allow the development of adaptive knowledge to benefit from self-learning and technological evolution, while [31] includes the level of readiness for AI.
The third and final dimension (AI Self-Management) integrates the AI Persuasion Literacy factor which, according to [29], means understanding how the human-like characteristics of AI systems can unconsciously manipulate users’ perceptions and behaviors, and thus being able to resist such attempts at influence. According to the same author, the Emotion Regulation factor means the constructive management of negative emotions (such as frustration and anxiety) when interacting with AI systems.
The sample was made up of 75 teachers from the various schools of PPU, out of a total of 275 teachers, corresponding to just over a quarter of the population (27.3%). The respondents range in age from 25 to over 50, with 70.7% being 45 or over. Forty-one participants were female (54.7%), 33 were male (44.0%) and 1 chose not to specify (1.3%). The largest proportion of participants came from the School of Technology, Management and Design (40.0%), followed by the School of Health (29.3%), the School of Education and Sciences (22.7%) and the School of Biosciences of Elvas (8.0%). Most participants teach in more than one study cycle: 25.3% teach on higher technical courses and bachelor’s degrees, 22.7% on bachelor’s and master’s degrees, 21.3% only on bachelor’s degrees, and 20.0% across all three study cycles (higher technical courses, bachelor’s and master’s degrees). The main areas of basic training are health (29.3%), social and behavioral sciences (13.3%) and business sciences (12.0%).
The instrument used in the present study, designed by [12], covers three dimensions based on the existing literature (AI Literacy, AI Self-Efficacy and AI Self-Management), each containing more than one descriptor factor. The full questionnaire can be consulted in Appendix A (Table A1).
The results of applying the questionnaire, presented in Table 1, reveal an average level of AI literacy (3.28), highlighting that 62.4% of responses are at levels 3 and 4. The AI Literacy dimension recorded the highest average level of response (3.56), highlighting that the factor of Using and applying AI was the one with the highest average level of response (3.85), followed by the Ethics of AI factor with an average level of 3.73. Still in this dimension, the Knowing and understanding AI factor obtained an average response level of 3.46 and Detecting AI a level of 3.20.
In turn, the AI Self-Efficacy dimension obtained the lowest average response level (2.86), highlighting that Learning was the factor with the lowest average response level (2.49). Specifically in this factor, 65.4% of respondents responded with levels 2 and 3.
Within the AI Self-Management dimension, which obtained an average response level of 3.41, the AI Persuasion Literacy factor obtained an average response of 3.27 and Emotion Regulation an average of 3.55.
The means and standard deviations of each question are presented in Appendix B (Table A2), with mean values ranging from 2.40 (“Despite the rapid changes in the field of artificial intelligence, I can always keep up to date”) to 4.25 (“I can operate AI applications in everyday life”).
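As an illustration of how the descriptive results in Table 1 and Table A2 can be obtained from the raw questionnaire data, the following Python sketch aggregates a respondents-by-items matrix of Likert scores into factor and dimension means and response-level percentages. The file name, column names and item-to-factor mapping below are hypothetical placeholders; this is not the authors’ processing script.

```python
# Minimal sketch (not the authors' code): aggregating 5-point Likert responses
# into factor and dimension means, as reported in Table 1.
# "responses.csv" is assumed to hold one row per respondent and one column per
# item (values 1-5); all names here are hypothetical.
import pandas as pd

FACTORS = {
    "Use and apply AI":       [f"use_apply_{i}" for i in range(1, 7)],
    "Know and understand AI": [f"know_{i}" for i in range(1, 6)],
    "Detect AI":              [f"detect_{i}" for i in range(1, 4)],
    "AI Ethics":              [f"ethics_{i}" for i in range(1, 4)],
    "Problem Solving":        [f"problem_{i}" for i in range(1, 4)],
    "Learning":               [f"learning_{i}" for i in range(1, 4)],
    "AI Persuasion Literacy": [f"persuasion_{i}" for i in range(1, 4)],
    "Emotion Regulation":     [f"emotion_{i}" for i in range(1, 4)],
}
DIMENSIONS = {
    "AI Literacy": ["Use and apply AI", "Know and understand AI", "Detect AI", "AI Ethics"],
    "AI Self-Efficacy": ["Problem Solving", "Learning"],
    "AI Self-Management": ["AI Persuasion Literacy", "Emotion Regulation"],
}

df = pd.read_csv("responses.csv")  # 75 rows x 29 Likert items

for factor, items in FACTORS.items():
    answers = df[items].to_numpy().ravel()
    # Percentage of answers at each Likert level, as in Table 1.
    pct = {level: round((answers == level).mean() * 100, 1) for level in range(1, 6)}
    print(factor, round(answers.mean(), 2), pct)

for dim, factors in DIMENSIONS.items():
    items = [item for f in factors for item in FACTORS[f]]
    print(dim, round(df[items].to_numpy().mean(), 2))
```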

3. Results

We started the analysis by calculating the internal consistency of the instrument, obtaining a Cronbach’s alpha of 0.930. The correlation coefficients between each item and the total exceed the critical threshold of .20, even for the items with the lowest values, suggesting good internal validity; the vast majority of items show correlations greater than 0.35, with several exceeding 0.5 (see Table 2). It is noteworthy that, within the AI Self-Management dimension, two items related to AI persuasion literacy have correlations below 0.3. Even so, they were kept, since their elimination would not significantly improve the instrument’s results and, above all, the aim was to maintain the theoretical framework chosen for the objective of the study, that is, to assess the sample’s level of AI literacy.
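For readers who wish to reproduce this kind of reliability analysis, the following is a minimal Python sketch (an illustration under assumptions, not the authors’ script): Cronbach’s alpha computed from the item variances and the variance of the total score, plus the corrected item-total correlations of the kind reported in Table 2. The demo data at the end are random placeholders, not the study data.

```python
# Minimal sketch: Cronbach's alpha and corrected item-total correlations for a
# respondents x items matrix of Likert scores.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(1, 6, size=(75, 29))  # stand-in for the 75 x 29 responses
    print(f"alpha = {cronbach_alpha(demo):.3f}")
    print(np.round(corrected_item_total(demo), 3))
```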
To study construct validity, principal component factor analysis (PCFA) with varimax rotation was chosen. Following this procedure, a Kaiser-Meyer-Olkin (KMO) measure of .812 was obtained, which reflects a reasonable proportion of common variance among the items [32]. Bartlett’s test of sphericity is associated with a chi-square of 1783.012. Factor extraction followed the method advocated by [33], which consists of reading the scree plot.
Reading the scree plot suggested the existence of three factors. Retaining three factors, the results were relatively aligned with the reference instrument, with those factors explaining 58.6% of the variance (36.09% for the first factor, 16.20% for the second and 6.31% for the third).
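The factor-analysis workflow described above (KMO, Bartlett’s test of sphericity, scree plot, principal-component extraction with varimax rotation) can also be reproduced outside a statistical package. The sketch below is an illustration only; it assumes the open-source Python package factor_analyzer and a hypothetical responses.csv item matrix, and is not the authors’ actual analysis.

```python
# Minimal sketch: KMO, Bartlett's test, eigenvalues for a scree plot, and a
# 3-component principal-component solution with varimax rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

df = pd.read_csv("responses.csv")  # hypothetical 75 x 29 item matrix

chi_square, p_value = calculate_bartlett_sphericity(df)
_, kmo_model = calculate_kmo(df)
print(f"Bartlett chi-square = {chi_square:.3f} (p = {p_value:.4f}), KMO = {kmo_model:.3f}")

# Eigenvalues of the correlation matrix; plotting them gives the scree plot
# used to decide how many factors to retain.
fa_unrotated = FactorAnalyzer(rotation=None, method="principal")
fa_unrotated.fit(df)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["Component 1", "Component 2", "Component 3"])
variance, proportion, cumulative = fa.get_factor_variance()
print(loadings.round(3))
print("Proportion of variance explained:", proportion.round(3))
```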
Table 3. Component factor analysis with varimax rotation method.
Original dimension  Item  Component 1  Component 2  Component 3
AI Literacy Use and apply AI_1 0.407 0.807 -0.08
Use and apply AI_2 0.414 0.83 -0.055
Use and apply AI_3 0.509 0.758 0.082
Use and apply AI_4 0.471 0.802 0.03
Use and apply AI_5 0.431 0.787 0.001
Use and apply AI_6 0.339 0.425 0.131
Know and understand AI_1 0.721 -0.186 -0.052
Know and understand AI_2 0.787 -0.096 -0.015
Know and understand AI_3 0.733 -0.329 -0.168
Know and understand AI_4 0.725 0.031 0.013
Know and understand AI_5 0.684 0.037 0.012
Detect AI_1 0.693 -0.078 0.252
Detect AI_2 0.742 -0.222 0.236
Detect AI_3 0.484 -0.221 0.297
AI Ethics_1 0.523 -0.506 -0.186
AI Ethics_2 0.666 -0.157 -0.461
AI Ethics_3 0.676 -0.159 -0.407
AI Self-Efficacy Problem Solving_1 0.764 -0.098 -0.26
Problem Solving_2 0.667 -0.096 -0.402
Problem Solving_3 0.75 0.158 -0.067
Learning_1 0.814 -0.057 0.02
Learning_2 0.731 -0.258 0.034
Learning_3 0.774 -0.061 0.006
AI Self-Management AI Persuasion Literacy_1 0.318 -0.549 -0.07
AI Persuasion Literacy_2 0.288 -0.54 0.306
AI Persuasion Literacy_3 0.431 0.023 0.286
Emotion Regulation_1 0.332 -0.241 0.365
Emotion Regulation_2 0.388 -0.077 0.621
Emotion Regulation _3 0.47 0.043 0.457
In terms of correlations between dimensions, as presented in Table 4, a very high level of correlation resulted from the three-factor analysis, with values between 0.501 and 0.766 among the components and values from 0.636 to 0.949 between each component and the total. The results show that the first dimension (AI Literacy) is the one most strongly associated with the total score, with a correlation of 0.949.
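As an illustration of the correlation analysis summarized in Table 4, the sketch below computes Pearson correlations between the three component scores and an overall score. The file and column names are hypothetical, and the overall score is assumed here to be the simple mean of the three components, which may differ from the authors’ exact scoring.

```python
# Minimal sketch: pairwise Pearson correlations between component scores and a
# total score, in the spirit of Table 4.
from itertools import combinations

import pandas as pd
from scipy import stats

scores = pd.read_csv("component_scores.csv")  # hypothetical columns: comp1, comp2, comp3
scores["total"] = scores[["comp1", "comp2", "comp3"]].mean(axis=1)

for a, b in combinations(["comp1", "comp2", "comp3", "total"], 2):
    r, p = stats.pearsonr(scores[a], scores[b])
    print(f"{a} vs {b}: r = {r:.3f}, p = {p:.4f}")
```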

4. Discussion and Conclusions

The objective of this study was to assess the AI literacy of PPU teachers, in order to identify gaps and find the main opportunities for innovation and development, specifically seeking to assess the degree of relationship between the dimensions of AI literacy and identify what could be the predictive variables in this matter.
The results of the questionnaire revealed an overall average level of AI literacy in the sample, meaning it will be desirable to implement strategies to develop faculty skills in AI matters given the growing impact of AIED on the education community and the education system itself. A higher level of AI literacy will allow us to find and implement better solutions to add value to the teaching-learning process through AI technologies and simultaneously support teachers and students.
The correlation between the three dimensions studied allows us to conclude that the AI Literacy dimension is the biggest predictor of the level of AI literacy in the sample. Since this dimension integrates the factors of use and application, knowledge and understanding, detection and ethics, specific development of these skills is suggested in order to increase the participants’ overall level of literacy. However, the below-average result of the Learning factor, which incorporates understanding of how AI technologies work, the development of adaptive knowledge and the level of readiness for AI, indicates a pressing need to focus on developing these skills through awareness-raising policies and targeted training actions.
From the study of correlations, it was concluded that no factor characterizing the sample (age, gender, study cycle taught or area of training) is a predictor. Therefore, these factors do not explain the sample’s level of literacy, nor do they determine or limit it.
Also noteworthy is the teachers’ higher level in applying AI knowledge, concepts and applications in different scenarios, and in their awareness of ethical issues relating to AI technologies (such as equity, responsibility, transparency, ethics and safety), given the recent emergence of public and widespread use of AI applications by the education community.
The main limitations identified are the sample size, which prevents generalization, and the impossibility of comparing results with other applications of the same measuring instrument. This suggests, as future work, applying the instrument to the student body at PPU and extending similar studies to Portuguese polytechnic higher education, looking for possible predictors in a broader educational community and identifying intervention priorities to increase AI literacy in academia in Portugal.

Author Contributions

Conceptualization, E.L., C.G., and P.F.; methodology, E.L., C.G., and P.F.; validation, E.L., C.G., and P.F.; formal analysis, E.L., C.G., and P.F.; data curation, E.L., C.G., and P.F.; writing—original draft preparation, E.L., C.G., and P.F.; writing—review and editing, E.L., C.G., and P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundação para a Ciência e a Tecnologia (grant UIDB/05064/2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data will be supplied on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Questionnaire presentation.
Item Source Number Question
AI Literacy
Use and apply AI [13] 1 I can operate AI applications in everyday life.
2 I can use AI applications to make my everyday life easier.
3 I can use artificial intelligence meaningfully to achieve my everyday goals.
4 In everyday life, I can interact with AI in a way that makes my tasks easier.
5 In everyday life, I can work together gainfully with artificial intelligence.
6 I can communicate gainfully with artificial intelligence in everyday life.
Know and understand AI [13] 7 I know the most important concepts of the topic “artificial intelligence”.
8 I know definitions of artificial intelligence.
9 I can assess what the limitations and opportunities of using AI are.
10 I can think of new uses for AI.
11 I can imagine possible future uses of AI.
Detect AI [10,27] 12 I can tell if I am dealing with an application based on artificial intelligence.
13 I can distinguish devices that use AI from devices that do not.
14 I can distinguish if I interact with AI or a “real human”.
AI Ethics [13] 15 I can weigh up the consequences of using AI for society.
16 I can incorporate ethical considerations when deciding whether to use data provided by AI.
17 I can analyze AI-based applications for their ethical implications.
AI Self-Efficacy
Problem Solving [28] 18 I can rely on my skills in difficult situations when using AI.
19 I can handle most problems in dealing with artificial intelligence well on my own.
20 I can also usually solve strenuous and complicated tasks when working with artificial intelligence well.
Learning [29,30,31] 21 I can keep up with the latest innovations in AI applications.
22 Despite the rapid changes in the field of artificial intelligence, I can always keep up to date.
23 Although there are often new AI applications, I manage to always be “up-to date”.
AI Self-Management
AI Persuasion Literacy [29] 24 I don’t let AI influence me in my everyday decisions.
25 I can prevent AI from influencing me in my everyday decisions.
26 I realise it if artificial intelligence is influencing me in my everyday decisions.
Emotion Regulation [29] 27 I keep control over feelings like frustration and anxiety while doing everyday things with AI.
28 I can handle it when everyday interactions with AI frustrate or frighten me.
29 I can control my euphoria that arises when I use artificial intelligence for everyday purposes.

Appendix B

Table A2. Mean and standard deviation for each question.
Original dimension Item Mean Std. Dev.
AI Literacy Use and apply AI_1 4.25 0.931
Use and apply AI_2 4.21 0.920
Use and apply AI_3 3.80 1.090
Use and apply AI_4 4.05 0.985
Use and apply AI_5 3.88 0.999
Use and apply AI_6 2.92 1.171
Know and understand AI_1 3.29 1.037
Know and understand AI_2 3.44 1.030
Know and understand AI_3 3.44 1.017
Know and understand AI_4 3.41 1.079
Know and understand AI_5 3.69 0.986
Detect AI_1 3.27 1.031
Detect AI_2 3.04 1.045
Detect AI_3 3.28 0.966
AI Ethics_1 3.57 1.068
AI Ethics_2 3.85 1.062
AI Ethics_3 3.77 1.073
AI Self-Efficacy Problem Solving_1 3.48 1.070
Problem Solving_2 3.12 1.078
Problem Solving_3 3.09 0.989
Learning_1 2.64 1.048
Learning_2 2.40 1.053
Learning_3 2.43 1.055
AI Self-Management AI Persuasion Literacy_1 3.49 1.018
AI Persuasion Literacy_2 3.32 1.016
AI Persuasion Literacy_3 3.00 1.053
Emotion Regulation_1 3.39 1.126
Emotion Regulation_2 3.52 0.964
Emotion Regulation _3 3.73 0.991

References

  1. Bates, A.W. Educar na era digital: design, ensino e aprendizagem. in Tecnologia Educacional. Artesanato Educacional, 2017.
  2. Ergen, M. What is Artificial Intelligence? Technical Considerations and Future Perception. Anatol. J. Cardiol. 2019, 22, 5–7. [CrossRef]
  3. Ganascia, J.-G. A Inteligência Artificial. in Biblioteca Básica da Ciência e Cultura. Flammarion: Instituto Piaget, 1993.
  4. Humble, N.; Mozelius, P. The threat, hype, and promise of artificial intelligence in education. Discov. Artif. Intell. 2022, 2, 1–13. [CrossRef]
  5. Tavares, L.A.; Meira, M.C.; Amaral, S.F.D. Inteligência Artificial na Educação: Survey. Braz. J. Dev. 2020, 6, 48699–48714. [CrossRef]
  6. Oliveira, L.; Pinto, M. A Inteligência Artificial na Educação - Ameaças e oportunidades para o processo ensino-aprendizagem, 2023. [Online]. Available: http://hdl.handle.net/10400.22/22779.
  7. Ayed, I.A.H. Oman higher education institutions dealing with artificial intelligence. BUM - Teses de Doutoramento CIEd - Teses de Doutoramento em Educação / PhD Theses in Education. 2022. [Online]. Available: https://hdl.handle.net/1822/76188.
  8. Miranda, P.; Isaias, P.; Pifano, S. Digital Literacy in Higher Education: A Survey on Students’ Self-assessment, in Learning and Collaboration Technologies. Learning and Teaching, vol. 10925, P. Zaphiris and A. Ioannou, Eds., in Lecture Notes in Computer Science, vol. 10925. Cham: Springer International Publishing, 2018, pp. 71–87. [CrossRef]
  9. Kit Ng, D.T.; Wu, W.; Lok Leung, J.K.; Wah Chu, S.K. Artificial Intelligence (AI) Literacy Questionnaire with Confirmatory Factor Analysis. IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA: IEEE, 2023, 233–235. [CrossRef]
  10. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu HI USA: ACM, Apr. 2020, pp. 1–16. [CrossRef]
  11. Hornberger, M.; Bewersdorff, A.; Nerdel, C. What do university students know about Artificial Intelligence? Development and validation of an AI literacy test. Comput. Educ. Artif. Intell. 2023, 5, 100165. [CrossRef]
  12. Carolus, A.; Koch, M.J.; Straka, S.; Latoschik, M.E.; Wienrich, C. MAILS - Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Comput. Hum. Behav. Artif. Humans 2023, 1, 100014. [CrossRef]
  13. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [CrossRef]
  14. Holmes, W.; Persson, J.; Chounta, I.-A.; Wasson, B.; Dimitrova, V. Artificial intelligence and education: a critical view through the lens of human rights, democracy and the rule of law. Strasbourg: Council of Europe, 2022.
  15. Birks, D.; Clare, J. Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks. Int. J. Educ. Integr. 2023, 19, 20. [CrossRef]
  16. Council of Europe, Committee on Artificial Intelligence. Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and The Rule of Law. Council of Europe, 18 December 2023.
  17. AMA, ‘Guia para uma Inteligência Artificial ética, transparente e responsável na AP’. Accessed: Nov. 02, 2023. [Online]. Available: https://www.sgeconomia.gov.pt/destaques/amactic-guia-para-uma-inteligencia-artificial-etica-transparente-e-responsavel-na-ap.aspx.
  18. Boulay, B. The overlapping ethical imperatives of human teachers and their Artificially Intelligent assistants, in The Ethics of Artificial Intelligence in Education, 1st ed., New York: Routledge, 2022, pp. 240–254. [CrossRef]
  19. Boulay, B. Artificial Intelligence in Education and Ethics, in Handbook of Open, Distance and Digital Education, O. Zawacki-Richter and I. Jung, Eds., Singapore: Springer Nature Singapore, 2023, pp. 93–108. [CrossRef]
  20. Flores-Vivar, J.-M.; García-Peñalvo, F.-J. Reflections on the ethics, potential, and challenges of artificial intelligence in the framework of quality education (SDG4). Comunicar: Revista Científica de Comunicación y Educación 2023, 31, 37–47. [CrossRef]
  21. Yildiz, Y. Ethics in education and the ethical dimensions of the teaching profession. SR 2022, 4, 38–45. [CrossRef]
  22. Eaton, S.E. Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. Int. J. Educ. Integr. 2023, 19, 23. [CrossRef]
  23. Howley, I.; Mir, D.; Peck, E. Integrating AI ethics across the computing curriculum, in The Ethics of Artificial Intelligence in Education, 1st ed., New York: Routledge, 2022, pp. 255–270. [CrossRef]
  24. Remian, D. Augmenting Education: Ethical Considerations for Incorporating Artificial Intelligence in Education. Instructional Design Capstones Collection 2019, 52, Available: https://scholarworks.umb.edu/instruction_capstone/52.
  25. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Santos, O.C.; Rodrigo, M.T.; Cukurova, M.; Bittencourt, I.I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526. [CrossRef]
  26. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2023, 28, 4221–4241. [CrossRef]
  27. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [CrossRef]
  28. Ajzen, I. From Intentions to Actions: A Theory of Planned Behavior. Springer-Verlag, 1985.
  29. Carolus, A.; Augustin, Y.; Markus, A.; Wienrich, C. Digital interaction literacy model – Conceptualizing competencies for literate interactions with voice-based AI systems. Comput. Educ. Artif. Intell. 2023, 4, 100114. [CrossRef]
  30. Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI Literacy of Employees at Digital Workplaces. IEEE Trans. Eng. Manag. 2024, 71, 810–823. [CrossRef]
  31. Dai, Y.; Chai, C.-S.; Lin, P.-Y.; Jong, M.S.-Y.; Guo, Y.; Qin, J. Promoting Students’ Well-Being by Developing Their Readiness for the Artificial Intelligence Age. Sustainability 2020, 12, 6597. [CrossRef]
  32. Martinez, L.; Ferreira, A. Análise dos dados com SPSS. Primeiros passos. Lisboa, 2007.
  33. Cattell, R.B. The scree test for the number of factors. Multivariate Behavioral Research 1966, 1, 245–276.
Table 1. Results of the AI Literacy questionnaire.
Dimension  Factor  1 (totally disagree)  2 (somewhat disagree)  3 (neither disagree nor agree)  4 (somewhat agree)  5 (totally agree)  Factor mean  Dimension mean
AI Literacy Use and apply AI 3.6% 10.0% 18.4% 33.6% 34.4% 3.85 3.56
Know and understand AI 4.3% 14.1% 27.2% 40.5% 13.9% 3.46
Detect AI 5.3% 19.1% 34.7% 32.4% 8.4% 3.20
AI Ethics 4.0% 9.3% 21.8% 39.1% 25.8% 3.73
AI Self-Efficacy Problem Solving 6.7% 14.2% 40.9% 25.8% 12.4% 3.23 2.86
Learning 18.7% 33.8% 31.6% 12.0% 4.0% 2.49
AI Self-Management AI Persuasion Literacy 4.9% 16.0% 40.0% 25.3% 13.8% 3.27 3.41
Emotion Regulation 4.9% 6.2% 38.2% 30.7% 20.0% 3.55
Total 6.7% 15.6% 32.0% 30.4% 17.2% 3.28
Table 2. Item-Total Statistics.
Item Scale Mean if Item Deleted Scale Variance if Item Deleted Corrected Item-Total Correlation Cronbach’s Alpha if Item Deleted
Use and apply AI_1 94.55 289.278 0.395 0.929
Use and apply AI_2 94.59 289.111 0.406 0.929
Use and apply AI_3 95.00 283.405 0.492 0.928
Use and apply AI_4 94.75 286.273 0.462 0.928
Use and apply AI_5 94.92 287.507 0.417 0.929
Use and apply AI_6 95.88 288.539 0.320 0.931
Know and understand AI_1 95.51 278.632 0.663 0.926
Know and understand AI_2 95.36 276.396 0.736 0.925
Know and understand AI_3 95.36 278.828 0.671 0.926
Know and understand AI_4 95.39 277.267 0.674 0.925
Know and understand AI_5 95.11 280.772 0.633 0.926
Detect AI_1 95.53 278.793 0.662 0.926
Detect AI_2 95.76 277.023 0.705 0.925
Detect AI_3 95.52 287.415 0.436 0.929
AI Ethics_1 95.23 284.799 0.464 0.928
AI Ethics_2 94.95 279.889 0.609 0.926
AI Ethics_3 95.03 279.215 0.622 0.926
Problem Solving_1 95.32 275.626 0.728 0.925
Problem Solving_2 95.68 279.302 0.616 0.926
Problem Solving_3 95.71 277.994 0.718 0.925
Learning_1 96.16 274.812 0.770 0.924
Learning_2 96.4 277.865 0.675 0.925
Learning_3 96.37 276.156 0.724 0.925
AI Persuasion Literacy_1 95.31 292.405 0.264 0.931
AI Persuasion Literacy_2 95.48 292.55 0.261 0.931
AI Persuasion Literacy_3 95.8 287.189 0.402 0.929
Emotion Regulation_1 95.41 289.894 0.300 0.931
Emotion Regulation_2 95.28 289.502 0.373 0.929
Emotion Regulation _3 95.07 286.09 0.465 0.928
Table 4. Correlation between Factors.
Component 1 Component 2 Component 3 Total
Component 1 1 0.766** 0.425** 0.949**
Component 2 1 0.501** 0.889**
Component 3 1 0.636**
Total 1
**. Correlation is significant at the 0.01 level (2-tailed).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.