1. Introduction
In an era defined by rapid technological advancement and a growing awareness of sustainability, the intersection of innovation and responsibility has never been more crucial. Technology, with its ever-expanding capabilities, has woven itself into every aspect of modern life, revolutionizing industries, shaping economies, and altering social dynamics. At the forefront of this technological revolution stands artificial intelligence (AI), a field characterized by its ability to analyze vast amounts of data, derive insights, and even make autonomous decisions. Among the many manifestations of AI, Generative Pre-trained Transformer (GPT) models represent a pinnacle of natural language processing and a highly useful tool in educational contexts [1]. In today's landscape, therefore, fostering AI literacy among students is not only about preparing them for the digital future but also about equipping them with the tools to navigate complex ethical issues and contribute to sustainable development goals.
The authors of [2] conducted an exploratory review to establish a theoretical foundation for defining, teaching, and evaluating AI literacy. By analyzing 18 peer-reviewed articles, the study proposed four key aspects for fostering AI literacy: knowing and understanding AI, using AI, evaluating AI, and addressing ethical issues. The research highlighted the importance of incorporating AI literacy into education to equip students with the necessary skills and knowledge to navigate the increasingly AI-driven world. M. Kasinidou [3] emphasizes the importance of promoting AI literacy for all individuals, including children, educators, and adults, through participatory design approaches. That study aims to understand public perceptions of AI and develop effective educational activities to enhance AI literacy among diverse groups.
The research on the AI literacy questionnaire [4] utilized exploratory and confirmatory factor analyses to identify underlying dimensions of AI literacy among secondary students. The study highlights the importance of affective, behavioral, cognitive, and ethical learning dimensions in assessing students' AI literacy development. The findings suggest a comprehensive approach to measuring AI literacy skills and provide valuable insights for educators and researchers aiming to enhance students' proficiency in AI technologies. M. Laupichler et al. [5] developed the "Scale for the assessment of non-experts' AI literacy" and identified three key factors influencing non-experts' AI literacy: technical understanding, critical appraisal, and practical application.
The authors of [6] evaluated AI literacy courses for university students, demonstrating significant gains in conceptual understanding of machine learning and deep learning concepts. The courses successfully enhanced participants' self-perceived AI literacy and empowerment, emphasizing conceptual building blocks over mathematical and programming aspects. The paper "Explicating AI Literacy of Employees at Digital Workplaces" [7] explores the dimensions of AI literacy and identifies key capabilities essential for employees. It highlights technology-related, work-related, human-machine-related, and learning-related capabilities crucial for fostering AI literacy among non-technical professionals.
The study by I. Celik [8] highlights the significant impact of computational thinking on enhancing AI literacy among higher education students. By investigating the relationships between AI literacy, the digital divide, computational thinking, and cognitive absorption, the research emphasizes the importance of integrating computational thinking skills into educational programs to foster AI literacy among students. The study in [9] emphasizes the importance of integrating AI concepts into STEAM education through iterative design moves based on learning theories, highlighting the feasibility of AI literacy without programming skills and the role of scaffolding in supporting student learning. The authors of [10] demonstrate the successful integration of AI into maker education, enhancing students' understanding of machine learning concepts and promoting higher cognition levels in AI literacy development. Another study [11] reveals that AI literacy positively influences secondary school students' computational thinking efficacy in learning AI, mediated by their approaches to learning AI. The findings emphasize the importance of suitable learning approaches in enhancing students' understanding of AI.
The study by Wang et al. [12] defines the concept of AI literacy and develops a 12-item scale to measure user competence in using artificial intelligence. Their research highlights the positive correlation between AI literacy and users' attitudes, daily usage, and proficiency in AI technology. The work in [13] emphasizes the importance of a stakeholder-first approach in AI education, highlighting the need to prioritize understanding and reflection on the societal impact of AI. By focusing on contextualized knowledge and adapting learning strategies, that study aims to enhance AI literacy among diverse audiences. D. Long [14] provides a comprehensive exploration of AI literacy competencies and design considerations, offering valuable insights for educating non-technical learners about AI.
Compared to existing studies on AI literacy, the Theoretical-experiential binomial for Educating AI-Literate Students offers a novel approach that seamlessly integrates theoretical knowledge with hands-on experimentation, fostering a deeper understanding of AI concepts. By leveraging PSoC 6 microcontroller technology, this framework not only equips students with essential AI skills but also emphasizes sustainability by preparing them to address societal challenges through responsible AI applications. Unlike traditional programming-centric approaches, this methodology emphasizes real-world problem-solving, critical thinking, and creativity, empowering students to navigate the complexities of AI in sustainable and ethically responsible ways. Through experiential learning and practical application, this framework facilitates a holistic understanding of AI, ensuring that students are not only proficient in AI technologies but also conscious of their environmental and societal implications, thus contributing to the advancement of sustainable development goals.
2. Materials and Methods
Experiential learning holds significant importance in education due to its ability to actively engage students in the learning process, leading to enhanced understanding, retention, and application of knowledge [15]. By involving students in hands-on experiences, simulations, and real-world applications, experiential learning goes beyond traditional passive learning methods, such as lectures and readings. This approach allows students to connect theoretical concepts to practical situations, fostering critical thinking, problem-solving skills, and creativity. Additionally, experiential learning promotes a deeper level of understanding and long-term retention of information by immersing students in meaningful learning experiences. By catering to diverse learning styles and preferences, experiential learning creates a dynamic and interactive learning environment that motivates students to take ownership of their learning journey. Ultimately, experiential learning plays a vital role in preparing students for real-world challenges, equipping them with the skills and knowledge needed to succeed in their academic and professional endeavors. T.H. Morris [16] conducts a thorough examination of Kolb's experiential learning model [17], highlighting the need for empirical testing and potential revisions to enhance its applicability.
The current study employs a structured approach delineated into five main sections to elucidate the process of fostering AI literacy among students, encapsulated within the theoretical-experiential binomial framework. The process for this complete learning cycle is illustrated in Figure 1.
As the diagram shows, the educational framework proposed herein delineates a structured approach comprising five sequential stages to facilitate comprehensive learning and skill acquisition in Artificial Intelligence (AI):
Theoretical Fundamentals: This initial stage acquaints students with foundational concepts in Machine Learning (ML) and Deep Learning (DL). Through theoretical instruction, students develop a solid understanding of ML algorithms and DL neural networks, laying the groundwork for subsequent practical applications.
AI Experiments: Building upon the theoretical foundation established in Stage 1, students progress to engaging in a series of AI experiments (Experiment 1 to Experiment n). These experiments serve as experiential learning modules, enabling students to apply theoretical knowledge to real-world scenarios and tasks.
Critical Thinking: As students advance through the experimental phase, critical thinking skills are cultivated. They learn to analyze data, challenge assumptions, pose insightful questions, and reflect on their thought processes. This critical thinking capacity enhances their ability to discern patterns, evaluate outcomes, and make informed decisions in AI experimentation.
Experimentation and Introspective Analysis: In this stage, students delve deeper into experimentation, conducting iterative analyses and fostering introspection. Through hands-on exploration and reflection, they cultivate wisdom in navigating the intricacies of AI systems. This introspective analysis aids in refining experimental methodologies and optimizing AI solutions.
Creativity and Implementation: The final stage emphasizes the development of creativity and innovation in AI implementation. Equipped with a comprehensive understanding of theoretical principles, critical thinking skills, and practical experimentation experience, students are empowered to conceptualize novel solutions and think innovatively. They gain proficiency in designing and implementing experiments tailored to specific objectives, demonstrating versatility and adaptability in addressing diverse AI challenges.
This structured progression through five distinct stages ensures a holistic educational experience, equipping students with the requisite knowledge, skills, and mindset to navigate the complexities of AI applications effectively. By integrating theoretical instruction with hands-on experimentation and fostering critical thinking and creativity, this educational framework promotes AI literacy and empowers students to become adept practitioners in the field of Artificial Intelligence.
As described previously, in the second stage of the educational framework, several AI experiments were conducted, culminating in the development of various applications. Among these, two pivotal experiments are highlighted in this paper for their significance in AI literacy. The first experiment focuses on the detection and classification of musical notes, speech, and background noise, showcasing the capability of AI algorithms to discern auditory signals in real-time environments. The second experiment centers on human activity recognition, wherein AI models are trained to recognize and categorize diverse human movements and behaviors with high accuracy. These experiments exemplify the practical application of theoretical concepts in real-world scenarios, providing valuable insights into the capabilities and limitations of AI technology in addressing multifaceted challenges across different domains.
2.1. Detecting Musical Notes, Speech, and Background Noise
The technology for extracting musical information, still under development, is an important component of music technology, with methods of Artificial Intelligence increasingly integrated in this field. Note detection and recognition represent a branch of musical information extraction and constitute a significant research theme in the domain of audio signal analysis and processing [18]. While recurrent neural networks are typically preferred for time series, fully connected neural networks are often favored for Edge devices, such as the PSoC 6 (Programmable System on a Chip), due to their parallelism and their energy and computational efficiency [19].
In this context, the developed experiment proposes automatic real-time detection of musical notes, speech, and background noise using a deep learning model based on a fully connected neural network. The experiment utilized the SensiML plugin, which aids in collecting data from PSoC 6 through attached sensors, also providing methods for labeling the captured data. The experiment consists of the following steps:
audio data acquisition and annotation
applying signal pre-processing techniques to the acquired data
designing and training a classification algorithm
implementing an intelligent model optimized for the IoT device.
The development of real-world Edge AI applications requires high-quality annotated data. The SensiML data capture application facilitates the collection, annotation, and exploration of sensor time series data, proving to be valuable even for students [20].
2.1.1. Data Acquisition and Annotation
The data was acquired using the microphone on the CY8KIT-028-TFT shield. This shield contains a digital microphone with a single-bit Pulse-Density Modulation (PDM) output, allowing any acquired sound to be converted into a digital signal. The PSoC 6 device converts this digital signal into quantized 16-bit Pulse-Code Modulation (PCM) values. An interrupt is triggered when there is sufficient data to be processed, specifically at least 128 samples.
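The buffering scheme above can be sketched in a few lines of host-side Python. This is an illustrative model of the behavior, not the device firmware: 16-bit PCM samples accumulate in a buffer, and a complete frame is handed off for processing once 128 samples are available (the class and names here are hypothetical).

```python
FRAME_SIZE = 128  # samples per processing frame, matching the device interrupt condition

class PcmBuffer:
    """Accumulates 16-bit PCM samples and emits fixed-size frames, mimicking
    the PSoC 6 behavior of signaling processing once 128 samples are ready."""

    def __init__(self, frame_size=FRAME_SIZE):
        self.frame_size = frame_size
        self.samples = []   # pending samples, not yet a full frame
        self.frames = []    # completed frames ready for processing

    def push(self, sample):
        """Append one PCM sample; emit a frame every `frame_size` samples."""
        self.samples.append(sample)
        if len(self.samples) >= self.frame_size:
            self.frames.append(self.samples[:self.frame_size])
            self.samples = self.samples[self.frame_size:]

buf = PcmBuffer()
for n in range(300):            # simulate 300 incoming samples
    buf.push(n % 32768)         # keep values in 16-bit signed range
# 300 samples -> 2 complete frames of 128, with 44 samples still pending
```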
Furthermore, the students were instructed that to avoid bias, it is important for the acquired audio data to be as clean as possible and to ensure a wide diversity (musical notes from various instruments, different voices in speech, etc.).
After programming the data acquisition application on the PSoC 6 device, data was acquired at a sampling rate of 16,000 Hz. Numerous segments were saved for each of the following musical notes: D, E, F, G, A, B. Upon completion of data acquisition, students were instructed on how to label the data in the Data Capture Lab (Figure 2).
In addition to musical notes, multiple files containing speech and ambient noise audio data were acquired and labeled. Furthermore, after acquisition, the data is automatically uploaded to the Cloud. Through the Cloud portal, the data can be visualized, and all labels as well as their distribution can be analyzed. To ensure that students gain the necessary skills to build an appropriate dataset, a similar number of segments were created for each note, while for speech and noise, more segments were generated, considering their greater variety and more complex characteristics.
2.1.2. Machine Learning Model Design and Training
To build a deep learning classification model, a TensorFlow Lite for Microcontrollers pipeline was implemented.
The next step involves adding a filter and configuring the entire Pipeline. In this stage, the following elements were configured, with students assimilating the meaning and role of each parameter (Figure 3):
Windowing: Segmentator of size 400 - takes input from the sensor transformation/filter step.
Frequency domain feature generator - a collection of feature generators that process the data segment to extract meaningful information.
Data balancing: Undersample Majority Classes - creates a balanced dataset by undersampling the majority classes using random sampling without replacement.
Feature quantization: Min-max scaler - normalizes and scales the data to integer values between min_bound and max_bound.
Outlier filter: Z-score filter - filters feature vectors that have values outside of a limit threshold (threshold set at 3).
Classifier: TensorFlow Lite for Microcontrollers - takes a feature vector as input and returns a classification based on a predefined model.
Training algorithm: Fully connected neural network, which includes the following features:
Dense layers of sizes (number of neurons) 128, 64, 32, 16, 8
Learning rate of 0.01
Softmax activation for the final layer
Number of epochs: 4
Threshold of 0.8
Categorical cross-entropy loss function.
Validation: Stratified Shuffle Split - the validation scheme splits the dataset into training, validation, and testing sets with similar label distributions.
Validation parameters - accuracy, F1 score, sensitivity.
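The pre-classifier stages of the configuration above can be approximated in NumPy. The sketch below is only illustrative of what each stage does (the actual pipeline code is generated by SensiML, and the bounds and helper names here are assumptions): 400-sample windowing, frequency-domain feature generation via the FFT magnitude, min-max quantization to integers, and the z-score outlier filter with a threshold of 3.

```python
import numpy as np

WINDOW = 400            # windowing segment size, as configured above
MIN_B, MAX_B = 0, 255   # assumed integer bounds for min-max quantization
Z_THRESHOLD = 3         # z-score outlier limit from the pipeline configuration

def windows(signal, size=WINDOW):
    """Split the raw signal into non-overlapping segments of `size` samples."""
    n = len(signal) // size
    return np.reshape(signal[:n * size], (n, size))

def frequency_features(segment):
    """Frequency-domain feature generator: magnitude of the real FFT."""
    return np.abs(np.fft.rfft(segment))

def quantize(features, lo=MIN_B, hi=MAX_B):
    """Min-max scale a feature vector to integers in [lo, hi]."""
    f_min, f_max = features.min(), features.max()
    scaled = (features - f_min) / (f_max - f_min + 1e-12)
    return np.round(scaled * (hi - lo) + lo).astype(int)

def passes_zscore_filter(vector, threshold=Z_THRESHOLD):
    """Reject feature vectors containing any value with |z-score| above the threshold."""
    z = (vector - vector.mean()) / (vector.std() + 1e-12)
    return bool(np.all(np.abs(z) <= threshold))

# A 440 Hz test tone sampled at 16 kHz, 0.1 s long
signal = np.sin(2 * np.pi * 440 * np.arange(1600) / 16000)
segs = windows(signal)                                # 4 segments of 400 samples
feats = quantize(frequency_features(segs[0]))         # integer feature vector
```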
After optimization, a model is generated whose characteristics are graphically displayed (Figure 4). As demonstrated by Table 1, the model achieved very good performance indicators, in terms of accuracy, sensitivity, and F1 score, as well as the size of the classifier.
2.1.3. Model Testing and Validation
For students to better understand and analyze the cases where the classifier predicted correctly or incorrectly, confusion matrices were computed on both the training set and the validation set (Figure 5).
From Figure 4 and Figure 5, it can be inferred that the model achieved very good and similar performance both on the training dataset (data the model had previously seen, provided during the training phase) and on the validation dataset, which the network had not seen before. The only erroneous predictions, also observed by the students, concerned speech, and only in specific cases: when the sound intensity was too low and background noise was also present.
Multiple values were tested for the number of epochs, and graphical analysis showed that the optimal number, in terms of both training and prediction time as well as accuracy, is 4. Figure 6 displays the accuracy obtained at each training and validation epoch. This analysis revealed that after 4 epochs, neither accuracy nor loss improves significantly.
Considering that the model performs very well on the test and validation data, it can be run by students in real time to validate its functionality on the PSoC 6 device.
Before deploying the machine learning model on the IoT device, predictions can be visualized in real-time in the Data Capture Lab. The application will run the model on the data captured in real time by the microphone connected to the PSoC 6 device. The classification results generated by the model are added to the history and can be graphically visualized in real-time, along with the latest prediction made (Figure 7).
The application program on the PSoC 6 sends real-time predictions via UART, which students can visualize through PuTTY, through Open Gateway, or on the TFT screen connected to the PSoC 6 (Figure 8). Through Open Gateway, a serial connection can be established to the COM port allocated to the IoT device. Additionally, during model creation, a JSON file was generated to map the class numbers to their label names. After establishing the connection, in testing mode, the current prediction is displayed along with the history of previous predictions.
The performances in this stage have proven to be as good as in the validation stage. The developed model can independently extract and learn the characteristics of audio signals and has better generalization capabilities than other models proposed in previous studies. The experiment presented demonstrates that the deep learning-based system can significantly improve prediction accuracy and efficiently detect musical notes. Thus, the main goal of this experiment was to provide students with the opportunity to learn and directly experience Artificial Intelligence concepts sustainably and interactively. By implementing this experiment, the aim was to highlight the sustainable benefits of using practice in AI education for students.
2.2. Classification of Human Activities by Edge Techniques
The field of human activity recognition has become one of the current research topics due to the availability of sensors and accelerometers, low energy consumption, real-time data transmission, and advancements in Artificial Intelligence, Machine Learning, and IoT. Through this, various human activities can be recognized, such as walking, running, sleeping, standing, driving, abnormal activities, etc. Students can further develop the experiment for widespread use in medical diagnosis, monitoring the elderly, and creating a smart home. Additionally, driving activity can also be recognized, enhancing traffic safety. The experiment was programmed and run on the PSoC 6 microcontroller, developed by Infineon.
As a general rule, the process of developing a model for recognizing human activities consists of four main stages (Figure 9):
acceleration signal acquisition
data preprocessing
activity recognition (based on Deep Learning techniques)
user interface for transmitting and displaying the prediction.
This experiment accomplishes the classification of human activities based on motion sensor data (accelerometer and gyroscope). The model deployed on the IoT device was pre-trained on a computer using Keras and classifies a few common activities: stationary, walking, and running.
The operation of the application was described and explained to the students through a block diagram (Figure 10). In an infinite loop, the IoT device reads data from a motion sensor (BMX160) attached to the PSoC 6 to detect activities. The dataset consists of orientation data on 3 axes from both the accelerometer and the gyroscope. A timer is configured to interrupt at 128 Hz. The interrupt handler reads all 6 axes via SPI and signals a data-processing task once the internal buffer holds 128 new samples. This task applies an IIR filter and min-max normalization to the 128 samples at once; the processed data are then passed to the inference engine, which determines and returns the prediction confidence for each activity class. If the confidence exceeds an 80% threshold, the predicted activity is displayed on the UART terminal. The application utilizes FreeRTOS, within which a dedicated activity task was defined and executed to process the received data and forward it to the ML model.
2.2.1. Data Acquisition
The data for training the machine learning model were collected from multiple users during various activities, using the BMX160 sensor attached to the PSoC 6, then labeled according to activity and saved in a CSV (Comma Separated Value) file. Prior consent was obtained from the users through an IRB agreement, and the acquired and stored data comply with the General Data Protection Regulation (GDPR). When saving new data or gestures, a Python script can be used, which takes parameters such as the activity name and the person collecting the data.
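A minimal sketch of such a logging script is shown below. The column names and function signature are hypothetical (the study does not specify the file layout), but it captures the idea: labeled 6-axis rows written to CSV, tagged with the activity name and the person collecting the data.

```python
import csv
import io

def save_samples(activity, person, samples, out):
    """Write accelerometer + gyroscope rows to CSV, labeling each row with
    the activity and the name of the person who collected the data."""
    writer = csv.writer(out)
    writer.writerow(["ax", "ay", "az", "gx", "gy", "gz", "activity", "person"])
    for row in samples:
        writer.writerow(list(row) + [activity, person])

# Write one example row into an in-memory buffer (a real script would use a file)
buf = io.StringIO()
save_samples("walking", "student_1", [(0.1, 0.0, 9.8, 0.01, 0.0, 0.0)], buf)
```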
The collected data was graphically displayed for analysis and cleaning purposes (
Figure 11). To ensure the relevance of the data, students need to be aware that it should come from multiple individuals of diverse ages and anatomies.
2.2.2. AI Model Development
After data collection, a model was developed using that data - a step that includes both training and calibrating the model. For this problem, a neural network model was built in Python using the Keras library. Before the training stage, the entire process was presented to the students: data preprocessing, cleaning, and random splitting of the dataset into training, validation, and testing sets. Following this, the data was converted into a standardized format, activities that the model could classify were generated, and the final calibration of the model was performed.
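The random split into training, validation, and testing sets can be sketched as follows. The proportions and function name are assumptions (the study does not state the exact ratios); the mechanism, one shuffle of the indices followed by carving out the partitions, is the standard approach.

```python
import numpy as np

def split_dataset(X, y, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle sample indices once, then carve out test and validation
    partitions; the remainder becomes the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

# Toy dataset: 100 two-feature samples with 3 activity classes
X = np.arange(200).reshape(100, 2).astype(float)
y = np.arange(100) % 3
train, val, test = split_dataset(X, y)
```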
The model was then trained using the collected data, employing the following features:
The confusion matrix was displayed graphically so that students could visualize the classification performance of the model (Figure 12).
The weights and structure of the model were then saved in a file to be used by students for programming the IoT device, and its validation and calibration data were saved in a separate file. Performance indicators of the model are illustrated in Table 2, and Figure 13 presents the characteristics of the final model. The Convolutional Neural Network (CNN) model consists of two convolutional blocks and two fully connected layers.
Each convolutional block includes a convolutional operation, a Rectified Linear Unit (ReLU) activation, and a dropout layer, with a flatten layer added after the first block. The convolutional layers act as feature extractors, providing abstract representations of the input sensor data in the feature map and capturing short-term dependencies (spatial relationships) in the data. The extracted features are then used as inputs to a fully connected network, with Softmax activation for classification.
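The forward pass of such an architecture can be sketched with NumPy to make the layer shapes concrete. This is only an illustration of the structure named above (conv + ReLU blocks, flatten, dense layers, softmax); kernel sizes and layer widths are made up, weights are random rather than trained, and dropout is omitted since it is inactive at inference time. The real model was built and trained in Keras.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution over the time axis of a sensor window."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                         # one 128-sample sensor window

h = relu(conv1d(x, rng.standard_normal(5)))          # conv block 1 -> length 124
h = relu(conv1d(h, rng.standard_normal(5)))          # conv block 2 -> length 120
h = h.flatten()                                      # flatten layer
h = relu(rng.standard_normal((16, h.size)) @ h)      # fully connected layer
probs = softmax(rng.standard_normal((3, 16)) @ h)    # 3 activity classes
```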
Following this, the generated model was programmed by the students onto the device for testing. The ML Configurator also shows the resources consumed by the model, which are suitable for any microcontroller. The model was validated on a PC, yielding very good results for both 8x8 and floating-point quantization. Before being programmed onto the PSoC 6 device, the model must be checked to validate that it is optimized for the hardware available on that device. The validation performed in the ML Configurator yielded 100% accuracy and a 0.01 prediction error (Figure 14).
After programming the device, through the serial connection to the computer, the application displays the real-time prediction on the UART terminal, together with the confidence percentage for each class (Figure 15).
3. Results and Discussion
This section summarizes the specific findings of this study and suggests opportunities and recommendations for further research. The research was conducted with the assistance of the Center for Valorization and Skills Transfer (CVTC) at Transilvania University of Brașov, Romania, in partnership with the Faculty of Electrical Engineering and Computer Science.
Following the implementation of the structured experimental learning process in the field of artificial intelligence (AI) according to the theoretical-experiential binomial, significant results were obtained in promoting literacy among students. The well-defined stages of this process represent an important step in enhancing students' understanding, practical skills, and critical thinking in the field of AI.
By implementing the proposed educational framework, the structured process in five distinct stages ensures a holistic educational experience. In the first stage, learning the theoretical fundamentals of AI provided students with a solid knowledge base in the realm of machine learning and deep learning. This allowed them to progress to the next stage, engaging in practical experiments in AI.
As they progressed through experiments, students developed critical thinking skills, analyzing data, questioning assumptions, and evaluating results. Experimentation and introspective analysis allowed students to gain a deeper understanding of the complexities of AI systems and identify ways to optimize them.
The final stage emphasized the development of creativity and innovation in implementing AI algorithms. Students were encouraged to conceptualize new solutions and approach challenges in the field of AI innovatively. They gained competencies in designing and implementing experiments tailored to specific objectives.
The two practical experiments presented in this study highlighted the direct application of theoretical concepts in real-world scenarios.
The first experiment focused on detecting and classifying musical notes, speech, and background noise. Through the use of a deep learning model, students assimilated the ability of AI algorithms to discern auditory signals in diverse environments, in real-time. This approach demonstrated the technology's potential to identify and interpret sound information accurately and efficiently. In addition to the technical performance of the deep learning model, the experiment had a significant educational impact, with students gaining a profound understanding of:
Data acquisition and annotation process: The importance of a diverse collection of high-quality and correctly labeled data was highlighted through practical exercises.
Signal pre-processing techniques: Filtering, normalization, and other techniques were implemented and analyzed, strengthening the theoretical knowledge.
Neural network design and training: Network architecture, training parameters, and model optimization were studied and tuned, providing valuable practical experience.
Implementing models on IoT devices: Hardware limitations and resource optimization were considered, preparing students for real-world challenges.
The second experiment aimed at recognizing human activities using edge techniques. By utilizing a machine learning model trained on data from an accelerometer and a gyroscope, students succeeded in classifying common human activities such as standing, walking, and running. This approach holds potential for various domains, including medical diagnostics and smart home solution development. Beyond the model's 96.2% accuracy, the experiment facilitated learning:
Principles of activity recognition: Students grasped the steps of data acquisition, preprocessing, classification, and user interface.
Development of convolutional neural network models: Specific architecture, model optimization, and validation were studied in a practical context.
Utilization of sensors and IoT devices: Direct experimentation with motion sensors, chip programming, and data interpretation was conducted.
Ethics and responsibility in AI: Considerations regarding data collection, privacy, and bias were successfully integrated into the learning process.
The analysis of the AI models' performance on the PSoC 6 device demonstrated that these models can be implemented efficiently and provide reliable real-time results. The human activity classification model achieved very high accuracy, confirming the effectiveness of the proposed technologies in training students in the field of AI.
Despite the significant benefits brought by the experimental learning process in the field of Artificial Intelligence, certain limitations need to be addressed to enhance its efficiency and accuracy. One of the main limitations is the availability and quality of the datasets used in experiments. The quality of the data can significantly influence the model's performance and its ability to generalize to new scenarios. Moreover, access to adequate equipment and computational resources can be challenging for certain institutions or students.
Another important limitation is related to the complexity and abstraction of some theoretical concepts in the field of AI, which may be difficult for some students to grasp without proper educational support. Additionally, there may be difficulties in adapting theory to the practical requirements of different applications or areas of interest. To overcome these limitations, some improvements and adjustments to the experimental learning process in the field of AI are necessary. Firstly, it is important to pay increased attention to data collection and processing, ensuring that they are representative and of high quality. It is also essential to provide students with adequate resources and tools to conduct experiments efficiently.
Moreover, it is necessary to develop educational materials and teaching methods that facilitate the understanding of complex theoretical concepts in a more accessible and interactive manner. Integrating case studies and practical projects into the curriculum can provide students with additional opportunities to apply theoretical knowledge in relevant contexts and acquire practical skills in solving real-world problems in the field of AI. By adopting a balanced approach between theory and practice and providing adequate resources and support, the experimental learning process in the field of AI can be significantly improved, thereby preparing students for future challenges and opportunities in this continuously evolving domain.
Closer integration of collaboration and teamwork in the learning process could significantly enhance students' experience. Teamwork can facilitate the exchange of ideas and experiences among students, encouraging collaboration and creativity. This could better prepare students for the professional environment, where AI projects are often developed in multidisciplinary teams.
Thus, from a sustainability perspective, developing practical skills in the field of AI prepares students for successful careers in rapidly expanding fields. The use of IoT devices contributes to interactive learning, making the educational framework more flexible and adaptable to future needs.
To evaluate the proposed approach, a questionnaire consisting of 10 questions has been developed, addressing aspects such as the usefulness of educational materials, the relevance of practical experiments, student satisfaction, and the impact on the development of specific AI domain competencies. Data collection is currently underway, and the obtained results will be carefully analyzed to identify strengths and weaknesses, aiming for continuous improvement in the quality of the educational process. The questions and response options of this questionnaire are presented in
Table 3.
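Once responses are collected, per-question descriptive statistics can be computed to identify strengths and weaknesses. The sketch below assumes 5-point Likert-scale answers; the question identifiers and demo responses are hypothetical placeholders, not data from the actual questionnaire in Table 3.

```python
# Sketch of summarizing Likert-scale questionnaire responses (1..5 scale).
# Question IDs and response values below are hypothetical placeholders.
from statistics import mean, stdev

def summarize(responses):
    """responses: {question_id: [scores 1..5]} -> per-question statistics."""
    summary = {}
    for qid, scores in responses.items():
        summary[qid] = {
            "mean": round(mean(scores), 2),
            "stdev": round(stdev(scores), 2) if len(scores) > 1 else 0.0,
            # Share of respondents who agreed (scored 4 or 5).
            "agree_pct": round(100 * sum(s >= 4 for s in scores) / len(scores), 1),
        }
    return summary

demo = {
    "Q1_materials_useful": [5, 4, 4, 3, 5],
    "Q2_experiments_relevant": [4, 4, 5, 5, 2],
}
stats = summarize(demo)
```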
4. Conclusions
The theoretical-experimental binomial proves to be an efficient and sustainable approach for educating students in the field of artificial intelligence. The practical implementation of models on IoT devices reinforces students' understanding and allows them to experiment with theoretical concepts directly. The educational framework presented can be easily adapted to various application domains of artificial intelligence, contributing to preparing students for successful careers in rapidly expanding fields.
This approach has demonstrated a significant impact on student education, with notable effects in:
- Development of practical skills: Programming skills, data analysis, critical thinking, and problem-solving abilities have been cultivated through concrete experiments.
- Consolidation of theoretical understanding: Practical implementation has reinforced theoretical concepts, providing a holistic perspective on AI.
- Stimulating creativity and innovation: Practical experiments have allowed students to explore innovative solutions and personalize their projects.
- Preparation for careers in AI: The acquired skills are essential for a successful career in the field of Artificial Intelligence.
The proposed educational framework offers a range of sustainable benefits, from access to interactive and industry-relevant learning experiences to the development of transferable skills adaptable to various AI application domains. The experimental approach fosters creativity and innovation, while the flexibility of the framework allows adaptation to diverse needs and available resources. Implementing this educational framework will provide students with quality preparation, turning them into competent and innovative professionals capable of contributing to societal progress.
Although the theoretical-experimental binomial is an efficient approach, there are a few limitations to consider, such as dataset size, experiment complexity, and access to IoT devices.
Author Contributions
Conceptualization, H.A.M. and D.U.; methodology, C.S.; software, H.A.M.; validation, D.U. and C.S.; formal analysis, H.A.M. and D.U.; investigation, H.A.M.; resources, H.A.M., C.S. and D.U.; data curation, H.A.M.; writing—original draft preparation, H.A.M.; writing—review and editing, D.U. and C.S.; visualization, D.U. and C.S.; supervision, D.U.; project administration, C.S.; funding acquisition, D.U. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Acknowledgments
We would like to express our deep appreciation to the Cypress/Infineon company for providing us with free PSoC6 kits, facilitating this study.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Modran, H.A.; Chamunorwa, T.; Ursuțiu, D.; Samoilă, C. Integrating Artificial Intelligence and ChatGPT into Higher Engineering Education. In: Auer, M.E., Cukierman, U.R., Vendrell Vidal, E., Tovar Caro, E. (eds) Towards a Hybrid, Flexible and Socially Engaged Higher Education. ICL 2023. Lecture Notes in Networks and Systems, vol 899. Springer. 2024. [Google Scholar] [CrossRef]
- Ng, D.T.K.; Leung, J.K.L.; Chu, K.W.S.; Qiao, M.S. AI Literacy: Definition, Teaching, Evaluation and Ethical Issues. Proceedings of the Association for Information Science and Technology, 58: 504-509, 2021. [CrossRef]
- Kasinidou, M. AI Literacy for All: A Participatory Approach. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2 (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 2023. 607–608. [Google Scholar] [CrossRef]
- Ng, D.T.; Wu, W.; Leung, J.K.; Chu, S.K. Artificial Intelligence (AI) Literacy Questionnaire with Confirmatory Factor Analysis. 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), 233-235, 2023. [CrossRef]
- Laupichler, M.; Aster, A.; Haverkamp, N.; Raupach, T. Development of the “Scale for the assessment of non-experts’ AI literacy” – An exploratory factor analysis. Computers in Human Behavior Reports, Volume 12, 2023, 100338, ISSN 2451-9588. [CrossRef]
- Kong, S.; Cheung, W.; Zhang, G. Evaluating artificial intelligence literacy courses for fostering conceptual learning, literacy and empowerment in university students: Refocusing to conceptual building. Computers in Human Behavior Reports, Volume 7, 2022, 100223, ISSN 2451-9588. [CrossRef]
- Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI Literacy of Employees at Digital Workplaces. In IEEE Transactions on Engineering Management, vol. 71, pp. 810-823, 2024. [CrossRef]
- Celik, I. Exploring the Determinants of Artificial Intelligence (AI) Literacy: Digital Divide, Computational Thinking, Cognitive Absorption. Telematics and Informatics, Volume 83, 2023, 102026, ISSN 0736-5853. [CrossRef]
- Relmasira, S.C.; Lai, Y.C.; Donaldson, J.P. Fostering AI Literacy in Elementary Science, Technology, Engineering, Art, and Mathematics (STEAM) Education in the Age of Generative AI. Sustainability 2023, 15, 13595. [Google Scholar] [CrossRef]
- Ng, D.T.K.; Su, J.; Chu, S.K.W. Fostering Secondary School Students’ AI Literacy through Making AI-Driven Recycling Bins. Educ Inf Technol, Springer, 2023. [CrossRef]
- Lin, X.F.; Zhou, Y.; Shen, W. et al. Modeling the structural relationships among Chinese secondary school students’ computational thinking efficacy in learning AI, AI literacy, and approaches to learning AI. Educ Inf Technol, 2023. [CrossRef]
- Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 1324-1337, 2023. [CrossRef]
- Domínguez Figaredo, D.; Stoyanovich, J. Responsible AI literacy: A stakeholder-first approach. Big Data & Society, 10(2), 2023. [CrossRef]
- Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–16, 2020. [CrossRef]
- McCarthy, M. Experiential Learning Theory: From Theory To Practice. Journal of Business & Economics Research (JBER), 14(3), 91–100, 2016. [CrossRef]
- Morris, T.H. Experiential learning – a systematic review and revision of Kolb’s model. Interactive Learning Environments, 28:8, 1064-1077, 2020. [CrossRef]
- Kolb, D.A. Experiential learning: Experience as the source of learning and development, Second Edition. Upper Saddle River, NJ: Pearson, 2015, ISBN 978-0-13-389240-6.
- Yue, Y. Detection in Music Teaching Based on Intelligent Bidirectional Recurrent Neural Network. Security and Communication Networks, Volume 2022. [CrossRef]
- Brusa, E.; Delprete, C.; Di Maggio, L. Deep transfer learning for machine diagnosis: from sound and music recognition to bearing fault detection. Applied Sciences, vol. 11, no. 24, 2021. [CrossRef]
- SensiML Data Capture Lab Documentation. Available online: https://sensiml.com/documentation/data-capture-lab/index.html (accessed on 20 February 2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).