1. Introduction
Hearing plays a fundamental role in children’s language and speech development, learning, and communication throughout the life course [1]. Hearing loss can therefore affect a person’s quality of life in multiple areas, such as the social, educational, psychological, and occupational spheres, among others [2,3,4,5,6].
Data published in the World Report on Hearing indicate that 1.5 billion people currently live with some degree of hearing loss, and that by 2050 an estimated 2.5 billion people will have hearing loss [7]. Given these figures and the magnitude of the problem, public health interventions have been proposed for those suffering from hearing loss and ear diseases, to guarantee universal access to high-quality care in this area [7]. These interventions contemplate hearing checkups throughout life for higher-risk groups, notably older adults, newborns, infants, preschool- and school-age children, and people exposed to noise, medicines, and ototoxic chemicals.
Universal Health Coverage (UHC) establishes that, for these interventions to be carried out, health systems must be strengthened so that there is access to safe, high-quality diagnostic equipment, as well as personnel with the necessary training in the required discipline [8]. In the same vein, since the 1980s the World Health Organization (WHO) has proposed guiding universities and higher education institutions toward a new approach to training future health professionals, one more focused on the learning process and oriented toward the development of competencies that integrate knowledge, skills, and attitudes closely linked to social reality, rather than superficial rote learning [9].
In higher education, health sciences programs have sought innovations leading to a review of the teaching curriculum in their respective disciplines, so that future graduates take a comprehensive approach focused on people and health rather than on medicine and disease [10,11]. Clinical simulation is introduced as a teaching methodology into the academic training of health career students who, before having experiences with actual patients, obtain guided experiences that interactively emulate situations, environments, and problems similar to those of healthcare establishments, in a safe, controlled environment that ends with a reflection process [12].
One of the disciplines that performs hearing screening throughout life is speech therapy, the science in charge of the evaluation, diagnosis, rehabilitation, health promotion, and prevention of disorders of language, speech, swallowing, hearing, voice, and communication, as recorded in the proceedings of the XXV Congress of Speech Therapy, Phoniatrics and Audiology [13]. Likewise, speech therapy considers quality dimensions related to equity, access and opportunity, continuity, safety, technical quality, user satisfaction, efficacy, and efficiency [14]. For this reason, the importance of training health professionals with theoretical and procedural competencies also applies to this discipline.
Within speech and language therapy, audiology is the area in charge of promoting hearing health and of the prevention, evaluation, diagnosis, intervention, and monitoring of pathologies related to hearing. One of its relevant procedures is audiometry, a delicate and precise examination that must be applied with expertise by the professional. To achieve this expertise, and thereby contribute to service quality under the health guarantee model, undergraduate students today have clinical simulation tools that allow them, in the safe environment described above, to recreate clinical evaluations in a controlled and standardized setting. Seeking to facilitate the development of competencies in the audiometry procedure and hearing loss classification in undergraduate students, SAEF (Audiometry Simulator for Speech-Language Students) is available as part of a Speech-Language Pathology project [15].
This study addresses two research questions:
RQ1 [Acceptability of SAEF]: How do students and educators accept SAEF for developing procedural competencies and skills in the audiometric examination?
RQ2 [Functional Validity of SAEF]: To what extent does the functional validation of SAEF support the development of procedural competencies and skills in the audiometric examination?
This article summarizes the main features of SAEF, an open-source Java application, and seeks to validate user satisfaction and the quality of the experience of using SAEF. This work is organized as follows. The following section details the SAEF tool: its origin, goals, and evolution. Section 3 describes the applied methodology: student characteristics, population and sample, data collection, and analysis procedures. Section 4 details each applied survey together with its results. Section 5 summarizes the main positive impact of SAEF on students and the related university community, especially the autonomous learning its use enables, allowing audiology competencies to be developed without requiring an audiometer, a relevant virtue in pandemic times. Finally, Section 6 presents the main conclusions.
3. Methodology
To answer RQ1 and RQ2, this work structures its methodology as follows: first, it describes the methodological characteristics of the study carried out; second, it provides details of the research participants; third, it specifies the instruments used for data collection; and finally, it describes the techniques used for the analysis of the collected data.
3.1. Study characteristics
The present study was based on a quantitative methodology: the researchers collected data and evaluated hypotheses based on numerical measurement and statistical analysis. The design was cross-sectional and non-experimental, because the measured variables were not manipulated and the assessment was performed only once. The study employs a descriptive-comparative approach, first to demonstrate the usability of SAEF and, second, to determine the application’s validity according to audiology professors. Finally, the study compares the outcomes and the equipment employed.
3.2. Population and sample
This work used a non-probabilistic (directed) sample of homogeneous participants, i.e., the selected units share common or similar characteristics. In this case, students and professors of the speech-language pathology program at Santo Tomás University, Chile, interacted with SAEF as users; specifically, students taking and professors teaching the subject of Audiology, in its theoretical or practical form, at the aforementioned institution. There were no exclusion criteria. Participants were recruited through an invitation extended by the subject coordination, and the student and professor users participated voluntarily in the study. All participants signed an informed consent form.
The sample consisted of 43 users: 31 students and 12 professors from different Chilean cities (Iquique, Viña del Mar, Santiago, Talca, Concepción, Osorno, and Puerto Montt). The 31 students answered the survey based on the theoretical extension of the Technology Acceptance Model (TAM) [20], while the 12 professors answered the survey aimed at the technical validation of the audiometric procedure. Even in pandemic times, the direct use of TAM or its extensions represents a standard for measuring the acceptance of technology in contexts such as health [21], education [22], and government [23].
3.3. Data collection instruments
Through the Google Forms platform, the student users received the measurement and reliability scale based on the theoretical extension of the Technology Acceptance Model, as shown in Table 1. This scale includes 26 statements about the usability and usefulness of a technological resource in terms of social influence and cognitive instrumental processes, distributed across 9 categories: Intent of use, Perceived utility, Perceived ease of use, Subjective norm, Volunteering, User interface, Job relevance, Output quality, and Results demonstrability. Users indicated how much they agreed or disagreed with each statement on the following Likert scale: 1 = Totally disagree, 2 = Quite disagree, 3 = Disagree, 4 = Neither agree nor disagree, 5 = Agree, 6 = Somewhat agree, 7 = Totally agree. The scale was applied at three moments: immediately after the training on using SAEF, after one month of use, and three months after the planned period of use.
A survey was applied to the professor users to validate the audiometric procedure technique and the subject’s learning results (see Table 2). This survey was prepared by the authors following the ASHA recommendations [24] that guide the procedure for searching for hearing thresholds. Users indicated how much they agreed or disagreed with each statement regarding whether SAEF actually emulates each of the proposed steps, on the following Likert scale: 1 = Totally disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Totally agree. The surveyed statements fall into three categories: hearing threshold, student performance, and procedure carried out based on learning results. The final SAEF version considers the feedback SAEF provides on student performance and its usefulness for achieving the learning outcomes.
3.4. Analysis procedure
After administering the TAM scale and the technical validation survey, a database was created and analyzed using the SPSS statistical package, version 21.0. Descriptive statistics were used to present the results of the TAM scale responses. In addition, the Friedman test [25] was used to compare the results of each TAM scale question across the three application moments, and Cronbach’s alpha [26] was used to determine the level of agreement between the experts and the reliability of the scale. All analyses used a significance level of 0.05. The Friedman test determines whether statistically significant differences exist between three or more dependent samples [27].
4. Results
For the analysis of the measurement and reliability scale based on the theoretical extension of the Technology Acceptance Model, Table 3 presents the results of the responses in a descriptive manner. Favorable responses were those at or above the acceptance threshold (this article considers the last three values of the scale: 5 = Agree, 6 = Somewhat agree, 7 = Totally agree). Table 3 shows the acceptability of SAEF v.2 at each moment at which the survey was applied; for each moment, the favorable responses were divided by the total responses for all evaluations. Notably, the number of participants who answered the questions at the three evaluation moments was not constant; hence, the proportions show slight variations in the numerator and denominator.
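The favorable-proportion calculation described above reduces to a simple count; the responses below are illustrative, not the study’s data:

```python
# Illustrative answers on the 7-point TAM Likert scale for one evaluation moment
answers = [7, 5, 4, 6, 3, 5, 7, 2, 6, 5]

# Favorable = at or above the acceptance threshold (5 = Agree, 6, 7)
favorable = sum(1 for a in answers if a >= 5)

# Acceptability = proportion of favorable responses over all responses
acceptability = favorable / len(answers)
```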
Table 4 compares users’ acceptability of SAEF (questions of Table 1) using the TAM scale through the Friedman test [26,28]. These results cover the three moments of test application: immediately after the training on the use of SAEF (Acceptability T1), after one month of use (Acceptability T2), and three months after use (Acceptability T3). The table presents the changes that were statistically significant at the 0.05 level [29].
On the other hand, in the analysis of the technical validation of the audiometric procedure and the learning results of the subject, the level of agreement among the experts on the evaluated aspects, measured using Cronbach’s alpha, a technique widely used for this purpose [26,30], reached a value of 0.62. Although this value is considered low for a reliability analysis, the nature of the questions explains the result below the optimum for the test. The aim of analyzing the evaluators’ agreement was to check whether their answers diverged excessively, which was not the case.
It is important to remark on the evolution of users’ acceptability of SAEF across the surveyed times: 0.702 in Acceptability T1, 0.754 in Acceptability T2, and 0.800 in Acceptability T3. Like the research of Abbas et al. [31], Alamri et al. [32], and Lozano et al. [33], we can apply Student’s t-test to validate these results. In this case, we pose the following question: can students’ satisfaction increase with the use of SAEF over time? Hence, the null hypothesis H0 is that student satisfaction does not increase over time, whereas the alternative hypothesis H1 is that student satisfaction increases over time. With a confidence level of 95%, α = 0.05. Table 5 and Table 6 show the t-test results, which reject the null hypothesis and accept the alternative one.
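A paired one-sided t-test of this kind can be sketched with scipy. The per-question acceptability values below are illustrative, not the study’s data; `alternative='less'` encodes the hypothesis that acceptability at T1 is lower than at T3:

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative per-question favorable proportions at T1 and T3 (paired by question)
accept_t1 = np.array([0.65, 0.70, 0.68, 0.72, 0.66, 0.71, 0.69, 0.73])
accept_t3 = np.array([0.78, 0.82, 0.75, 0.85, 0.80, 0.79, 0.83, 0.81])

# Paired one-sided t-test: H0 = satisfaction does not increase over time,
# H1 = mean(T1) < mean(T3), i.e., satisfaction increases
stat, p_value = ttest_rel(accept_t1, accept_t3, alternative='less')
reject_h0 = p_value < 0.05  # alpha = 0.05 (95% confidence level)
```

Rejecting H0 here supports the alternative hypothesis that acceptability grows between the first and third application moments.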
Figure 3 illustrates the acceptability evolution for each of the nine categories.
Considering that the survey applied to the professionals uses a Likert-type scale, the results reflect the level of acceptance, that is, answers whose value is 4 or 5 on the scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree. Table 7 details the percentages of “Agree” and “Strongly agree” responses of the participating professors for the aspects related to (1) the hearing threshold search procedure, (2) student performance feedback, and (3) procedures carried out based on learning results. Figure 4 summarizes those results. We can appreciate a high acceptability percentage in each evaluated aspect: 100% for the hearing threshold search procedure, 84.72% for student performance feedback, and 97% for procedures carried out based on learning results.
Figure 5, Figure 6 and Figure 7 show some of the functional and interface improvements that are part of SAEF version 2 [34]. Specifically, Figure 5 shows a user interface refinement that adds an option to select the difficulty level, and Figure 6 shows the new option to select the case to work on, functionalities not present in SAEF v.1, which performed a random assignment of case studies without considering the level of difficulty. As Figure 7 shows, the interface of SAEF v.2 improved the visual aspects to be more attractive to users, along with optimizing the distribution of elements to more closely resemble the actual instrument. SAEF v.2 also offers a better display of the response to, and validity of, the last user action performed. Patient response latency times were also adjusted and parameterized to allow the student to decide on the next examination step. SAEF v.2 continues to be a Java application owing to Java’s wide diffusion, cross-platform execution, open-source ecosystem, and compatibility with multiple tools, including MySQL.
5. Discussion
The results show that student and professor users in the area accept SAEF. Specifically, SAEF demonstrated high acceptance in the constructs of intention to use, perceived usefulness, perceived ease of use, subjective norm, volunteering, image, job relevance, output quality, and results demonstrability. In addition, this article shows statistically significant differences in the aspects of ease of use and demonstrability of results, representing a high-satisfaction user experience with SAEF. These results allow RQ1 and RQ2 to be answered positively. They align with what was indicated by Venkatesh and Bala [35], who theorize that two factors determine an individual’s behavioral intention to use a system: perceived ease of use and perceived utility. These factors represent the belief that using the system will be effortless and that it will improve the user’s work and educational performance [36].
The COVID-19 pandemic accentuated the need for virtual tools to teach certain content and procedures in different disciplines [37,38,39]. From the same perspective, Information and Communication Technologies have transformed teaching-learning processes in higher education, incorporating technological resources as pedagogical and didactic elements for professors and students. However, evaluating the quality of these technologies is crucial, since their incorporation does not by itself guarantee the success of the teaching processes and the consequent development of the competencies defined in the study programs [40]. In this sense, the TAM model used in this research aims to predict the acceptance of a system and diagnose design problems; it represents a robust, solid, and detailed model for predicting users’ acceptance of information technology [41].
Finally, regarding applications similar to SAEF, as reported in Orellana et al. [15], there are no freely accessible digital tools for developing speech-language pathology skills, specifically procedural skills in the audiometric examination. In this way, SAEF represents a valuable academic contribution to Santo Tomás University in Chile and to other institutions that would like access to this application. The authors plan to develop web and mobile versions of SAEF. Those versions would expand the case studies and include a ranking of usage time and efficiency in solving the cases.