1. Introduction
1.1. Introduction
The rapid progression of technology has reshaped the landscape of language learning and given rise to innovative instructional methods and tools [1,2]. These advancements have opened the way for chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion as potential avenues for enhancing language acquisition and learner outcomes. Recent studies have begun to examine the effects of these technologies [3]; Al-Abdullatif et al. [4], for instance, reported a positive impact on motivation and learning strategies. However, their influence on learners' language learning proficiency and self-regulated learning skills remains a gap in the literature, and review findings highlight the need for further exploration of psychological factors in technology-enhanced language learning [3,5]. This study therefore investigates the impact of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion on learners' language learning proficiency and self-regulated learning skills, and explores the psychological factors that shape technology-enhanced language learning, building on previous findings [3,5]. By addressing these gaps, the study aims to contribute valuable insights to the field of technology-enhanced language learning and to provide practical implications for educators and language learning practitioners.
Chatbot-based language learning support offers personalized and interactive language practice [4], adaptive learning algorithms can customize instruction to the needs of individual learners [3,6], and virtual reality language immersion provides a simulated environment for authentic language experiences. Understanding the impact of these technologies on language learning proficiency and self-regulated learning skills is essential for optimizing language education in the digital era [7,8]. This study contributes to that understanding by providing empirical evidence on the efficacy of these technological interventions in English as a Foreign Language (EFL) learning, with the aim of advancing knowledge in language education and informing evidence-based practice.
Understanding the impact of these technologies is also essential for educators and policymakers who must make informed decisions about integrating technology into language education [5,7]. The landscape of language learning has been transformed in recent years by technological progress and the growing demand for personalized and effective learning experiences [8,9], and within this context the combination of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion has emerged as a promising strategy for strengthening the language learning proficiency and self-regulated learning skills of EFL learners [10,11,12]. This study analyzes the impact of these technologies on EFL learners' language acquisition, cognitive engagement, and autonomous learning behaviors [13], thereby contributing to the ongoing discourse on technology-enhanced language education and its potential to optimize learning outcomes for EFL learners.
The integration of cutting-edge technology has prompted a notable shift in instructional methodologies and learning paradigms, marking a pivotal juncture in the evolution of language education [14]. Scholarly inquiries into this shift have underscored the need for pedagogical approaches that cater to individual learner needs and foster meaningful engagement with the learning process [15].
The emergence of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion has garnered widespread attention as a set of promising strategies to enhance language learning proficiency and cultivate self-regulated learning skills among language learners [10,13]. These innovative interventions have been positioned as transformative tools capable of reshaping the educational landscape, offering tailored and immersive language learning experiences. The scholarly discourse surrounding these technological advancements has revealed compelling evidence of their potential to deliver personalized and interactive language practice, individualized instruction, and authentic simulated language environments, thereby reshaping the pedagogical approach to language education [16].
The theoretical framework of this study is primarily informed by cognitive load theory [17] and human-computer interaction (HCI) principles [18]. These theoretical perspectives offer a comprehensive lens through which to examine the impact of AI-enhanced learning tools, such as chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion, on EFL learners’ language acquisition, cognitive engagement, and autonomous learning behaviors. Cognitive load theory posits that learning is influenced by the cognitive load imposed on working memory during the learning process [17,18]. Through the integration of this theoretical framework, the study investigates how the design of AI-enhanced learning tools can optimize learning experiences by effectively managing cognitive load and promoting efficient information processing. Additionally, HCI principles are considered in the context of the design and usability of AI-enhanced learning tools. This perspective informs the examination of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion as instruments for providing engaging and user-friendly learning experiences that facilitate language acquisition and skills development. By anchoring the theoretical framework in cognitive load theory and HCI principles, the study aims to offer valuable insights into the efficacy of AI-enhanced learning tools in improving language learning outcomes for EFL learners.
1.2. Problem Statement and Research Questions
English as a Foreign Language instruction faces a pressing challenge: traditional teaching methods struggle to provide personalized and adaptive learning experiences [19]. These methods often fail to accommodate the diverse learning styles, paces, and preferences of EFL learners, resulting in a one-size-fits-all approach that may not effectively address individual needs [20]. Furthermore, the development of language proficiency and self-regulated learning skills among EFL learners is hindered by limited opportunities for immersive language practice and individualized support [21]. This limitation can impede learners’ ability to engage in the authentic language use, self-assessment, and metacognitive strategies essential for autonomous learning [22].
As a result, there is a critical need to explore innovative approaches that leverage emerging technologies to address these challenges and enhance the language learning experience for EFL learners [18]. Chatbot-based language learning support offers the potential for personalized, interactive, and on-demand language practice, feedback, and assistance [24]. Adaptive learning algorithms have the capacity to tailor learning materials, pacing, and assessments to the individual needs and preferences of EFL learners, promoting a more learner-centered approach [25]. Additionally, virtual reality language immersion presents an opportunity for learners to engage in authentic, context-rich language experiences, overcoming the limitations of traditional classroom settings [26]. The integration of these innovative approaches has the potential to transform EFL learning by creating a more dynamic, personalized, and immersive language learning environment that fosters language proficiency and self-regulated learning skills among EFL learners.
The following research questions guided this study:
RQ1: Do EFL learners engaged with online instructors who rely on chatbot-based language learning support and adaptive learning algorithms attain significantly greater improvements in language learning proficiency, self-regulated learning skills, and the overall language learning experience than students in the comparison group? Are there significant differences by the type of virtual reality language immersion implementation?
RQ2: What are the learners’ perceptions of the effectiveness of virtual reality language immersion, chatbot-based support, adaptive learning algorithms, and the potential synergies and interactions between them in enhancing the language learning experience for EFL learners?
2. Materials and Methods
2.1. Design, Participants, and Procedures
In this investigation, a pretest-posttest experimental design with random assignment was implemented to evaluate the impact of integrating chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion on the language learning proficiency and self-regulated learning skills of EFL learners. The participant pool consisted of 546 EFL students (278 female, 268 male) enrolled in programs related to teaching English as a foreign language. Participants ranged in age from 20 to 27 years, with a mean age of 24.33 years (SD = 3.7).
Outliers (n = 12) were excluded from the sample, and ethical guidelines pertaining to anonymity, informed consent, and confidentiality were strictly followed. The participants were randomly allocated to one of four groups:
experimental group 1 received 15 sessions of 90 minutes each with chatbot-based language learning support,
experimental group 2 received 15 sessions of 90 minutes each with adaptive learning algorithms for language acquisition,
experimental group 3 received 15 sessions of 90 minutes each with virtual reality language immersion experience, and
control group 4 underwent regular computer-assisted language learning activities without the incorporation of chatbot-based language learning support, adaptive learning algorithms, or virtual reality language immersion.
Each group received a specific treatment according to its experimental condition so that the differential effects of chatbot-based language learning support, adaptive learning algorithms for language acquisition, and virtual reality language immersion on participants’ language learning proficiency and self-regulated learning skills could be examined and compared, contributing to a comprehensive understanding of their respective impacts.
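For readers who wish to see the allocation mechanics, the sketch below illustrates one way the balanced random assignment described above could be carried out. The group sizes follow the Ns reported in Table 1, while the seed, labels, and variable names are illustrative assumptions rather than the authors' actual procedure.

```python
import numpy as np

# Minimal sketch of balanced random allocation to the four conditions.
rng = np.random.default_rng(seed=2024)  # seed is an illustrative assumption

n_participants = 546
group_sizes = [136, 136, 137, 137]  # Groups 1-4, as reported in Table 1
labels = ["Group 1: chatbot", "Group 2: adaptive", "Group 3: VR", "Group 4: control"]

shuffled_ids = rng.permutation(n_participants)       # shuffle participant indices
cut_points = np.cumsum(group_sizes)[:-1]             # [136, 272, 409]
allocations = np.split(shuffled_ids, cut_points)     # four arrays of participant ids

for label, ids in zip(labels, allocations):
    print(f"{label}: {len(ids)} participants")
```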
Participants (n = 136) in Group 1 were given access to a language learning chatbot with a range of interactive functionalities designed to strengthen their language proficiency. The chatbot facilitated simulated conversations across scenarios such as placing orders at dining establishments, asking for directions, and engaging in informal dialogue, thereby replicating real-life conversational environments, and it provided instantaneous translation and pronunciation feedback to support accuracy and fluency. Participants also had access to personalized language learning exercises tailored to their individual proficiency levels, including grammar assessments, vocabulary drills, and listening comprehension activities. The AI tools in this group further incorporated voice recognition technology for pronunciation practice, natural language processing to generate contextually relevant conversational scenarios, machine learning algorithms for precise feedback, and integration with language learning applications and platforms. In addition, the chatbot offered pronunciation analysis, language proficiency assessments, contextual language scenarios, adaptive learning pathways, interactive cultural insights, and gamified learning activities, providing a comprehensive and engaging learning experience tailored to each participant.
The 136 participants in Group 2 worked with an adaptive language learning platform that leveraged advanced AI tools to optimize the learning experience. The platform used natural language processing (NLP) to analyze speech patterns, dynamically adjust the complexity of reading and listening exercises, and deliver tailored grammar and vocabulary lessons. Its personalized approach included assessing pronunciation and fluency to customize speaking exercises, adapting reading materials to comprehension proficiency, and targeting grammar and vocabulary lessons at areas needing improvement. The platform also employed sentiment analysis to gauge participant engagement and customize content, machine learning models to predict individual learning patterns and tailor content delivery, and speech recognition technology for precise assessment of speaking proficiency. Together, these tools provided a personalized and optimized language learning journey for each participant in Group 2.
Participants (n=137) in Group 3 were immersed in a virtual reality language experience, engaging with advanced AI-driven tools to enhance their language skills. Within the virtual environment, participants interacted with AI-driven virtual characters, navigated real-life scenarios, and received immediate feedback on their language usage. They engaged in simulated conversations with virtual native speakers, practiced negotiating in a business setting, and immersed themselves in cultural activities to further develop their language proficiency. This immersive environment served as a safe and interactive space for participants to apply their language knowledge and receive instant feedback for improvement. The use of natural language processing facilitated real-time analysis of participants’ spoken language, while gesture recognition technology enabled interactive communication within the virtual environment. Additionally, AI-driven scenario generation based on participants’ proficiency levels and learning goals expanded the capabilities of the AI tools used in this group. Integration with speech synthesis further enhanced the experience by providing realistic virtual native speaker interactions. These advanced AI tools were strategically integrated into the virtual reality language immersion experience to provide a dynamic and immersive language learning environment for each participant within Group 3.
Finally, the control group (n = 137) completed the regular computer-assisted language learning course without any of the experimental interventions. This condition allowed us to observe the progression of language learning in the absence of the additional interventions and, by comparing the control group’s outcomes with those of the experimental groups, to assess the effectiveness of the AI-driven language learning interventions implemented in Groups 1, 2, and 3. The control group thus provided a baseline language learning trajectory and a reference point for evaluating the impact of the advanced AI tools used in the experimental groups.
2.2. Instruments
2.2.1. Self-Regulated Learning Scale
The self-regulated learning scale used in this study was adapted from Şahin Kızıl and Savran [27] and consisted of nine items designed to measure the extent to which participants engaged in self-regulated learning skills, focusing on aspects such as goal setting, time management, self-monitoring, and adaptive learning strategies. The questionnaire was administered to all participants at two key points in the study: prior to the commencement of the intervention sessions (pretest) and upon completion of the intervention sessions (posttest). The pretest provided a baseline assessment of the participants’ self-regulated learning skills, while the posttest captured any changes or improvements in these skills following exposure to the different experimental conditions. The reliability coefficient of the administered scale in this study was .87. This data collection process provided valuable insights into the impact of the interventions on the participants’ self-regulated learning skills and contributed to the overall assessment of the study’s objectives (see Appendix A).
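As a transparency aid, an internal-consistency estimate of this kind can be reproduced with the standard Cronbach's alpha formula. The minimal sketch below assumes item responses are stored as a participants-by-items array and uses hypothetical data, so it illustrates the computation rather than the authors' actual analysis.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of item scores."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                                   # number of items (9 for this scale)
    item_variances = scores.var(axis=0, ddof=1)           # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)       # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 546 participants x 9 items on a 7-point scale
rng = np.random.default_rng(0)
demo_responses = rng.integers(1, 8, size=(546, 9))
print(f"alpha = {cronbach_alpha(demo_responses):.2f}")
```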
2.2.2. Language Proficiency (Pretest-Posttest)
The language proficiency of the participants was assessed using a comprehensive language proficiency test at two key points in the study: prior to the commencement of the intervention sessions (pretest) and upon completion of the intervention sessions (posttest). The language proficiency test was designed to measure the participants’ proficiency in various language skills, including speaking, listening, reading, and writing. The pretest assessment provided a baseline measure of the participants’ language proficiency levels before exposure to the experimental conditions, while the posttest assessment aimed to capture any changes or improvements in their language proficiency following their participation in the intervention sessions. The language proficiency test utilized in this study consisted of a total of 100 items, with 25 items dedicated to each language skill section (speaking, listening, reading, and writing).
Each section of the test was scored separately, with a maximum score of 25 points for each language skill. The total score for the language proficiency test was calculated by summing the scores from all four sections, resulting in a maximum possible score of 100 points. The test items covered a range of linguistic competencies, including vocabulary knowledge, grammatical accuracy, comprehension of spoken and written language, and communicative fluency. The reliability and validity of the language proficiency test were established through rigorous piloting and validation processes, which yielded section reliabilities ranging from .72 to .78.
2.2.3. Participants’ Perceptions of the Administered Treatments
The study examined participants’ perceptions of the administered treatments through a structured feedback questionnaire. This questionnaire was designed to capture their experiences, satisfaction, and perceived effectiveness of the intervention sessions. It was administered to all participants following the intervention sessions (posttest) and aimed to gather both qualitative and quantitative data on their perceptions of the different treatment conditions. The questionnaire items covered topics such as the effectiveness of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion in facilitating language learning and self-regulated learning.
Participants were also asked to rate their satisfaction with the intervention content and delivery, the perceived relevance of the materials, and their overall experience with the intervention sessions on a 7-point Likert scale. Additionally, participants were encouraged to provide qualitative comments and suggestions for improvement, allowing for a comprehensive understanding of their experiences. The feedback questionnaire was designed to capture both objective and subjective aspects of the participants’ perceptions, providing valuable insights into the effectiveness and acceptability of the administered treatments. The data collected from the feedback questionnaire provided rich qualitative and quantitative information on the participants’ experiences and perceptions of the intervention sessions (see Appendix B). The reliability test revealed that the subscales had acceptable reliability coefficients ranging from .69 to .78.
2.2.4. Interviews
To facilitate a free flow of ideas from the participants, an open-ended interview was developed to elicit participants’ perceptions of the efficacy of the administered educational intervention (see Appendix C). Ten open-ended questions were used to interview the experimental groups about their experiences with the intervention sessions, language learning support, and immersion technologies. The interview questions were carefully crafted to explore topics such as the perceived impact of the interventions on language skill development, the effectiveness of specific learning support tools, and the overall satisfaction with the intervention content and delivery. Two raters were invited to code the interview data so that efficacy ratings of the educational intervention could be obtained and triangulated with the questionnaire data. Inter-rater reliability was very high, with kappa coefficients ranging from .84 to 1.0. This approach allowed for a comprehensive assessment of the participants’ perceptions and experiences, integrating both qualitative and quantitative data sources.
The qualitative data obtained from the interviews provided rich insights into the participants’ experiences with the interventions, offering a deeper understanding of their perspectives and allowing for a more nuanced analysis of their feedback. The open-ended nature of the interview questions allowed participants to express their thoughts and reflections in their own words, providing valuable qualitative data that complemented the quantitative data collected through surveys and questionnaires. The quantitative summary of the data elicited by the interview was generated by coding the answers into four response categories (1 = no, 2 = partially no, 3 = partially yes, 4 = yes), allowing for a structured analysis of the qualitative responses. This coding process facilitated the extraction of patterns from the interview data, contributing to a comprehensive understanding of the participants’ perceptions and experiences.
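A minimal sketch of how such inter-rater agreement can be quantified with Cohen's kappa is given below; the rater codes and the use of scikit-learn are illustrative assumptions, not the authors' coding data or software.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two raters to the same set of interview responses,
# using the study's categories (1 = no, 2 = partially no, 3 = partially yes, 4 = yes).
rater_1 = [4, 3, 4, 2, 4, 3, 1, 4, 3, 4]
rater_2 = [4, 3, 4, 2, 4, 3, 2, 4, 3, 4]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
```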
2.3. Data Analysis
To address the first research question, ANOVA [28] was employed to compare the language learning proficiency, self-regulated learning skills, and overall language learning experience of EFL learners engaged with online instructors using chatbot-based language learning support and adaptive learning algorithms with those of learners in the comparison group. These analyses provided a comprehensive examination of the impact of these instructional methods. Additionally, subgroup analyses using ANOVA were performed to discern any significant differences by the type of virtual reality language immersion implementation, allowing for a nuanced understanding of the specific effects of different immersion approaches.
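A minimal sketch of such a one-way ANOVA, assuming the posttest scores are held in a hypothetical long-format data frame, is shown below; it illustrates the procedure rather than reproducing the authors' analysis scripts.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format data: one row per participant, with a group label
# and a posttest score. Column names and values are illustrative only.
df = pd.DataFrame({
    "group": ["chatbot"] * 3 + ["adaptive"] * 3 + ["vr"] * 3 + ["control"] * 3,
    "posttest": [93, 94, 92, 71, 70, 72, 70, 71, 72, 48, 47, 49],
})

samples = [scores["posttest"].to_numpy() for _, scores in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)  # one-way ANOVA across the four groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```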
In addressing the second research question, a rigorous qualitative inquiry was conducted. Critical thematic analysis [29], underpinned by a constructivist paradigm, was utilized to uncover and interpret the multifaceted perspectives of EFL learners. This approach involved identifying recurring patterns within the qualitative data derived from interviews and open-ended survey responses. By delving into these qualitative insights, a deeper understanding was achieved of the learners’ perceptions regarding the effectiveness of virtual reality language immersion, chatbot-based support, adaptive learning algorithms, and their potential synergies and interactions in augmenting the language learning experience for EFL learners. The triangulation of quantitative and qualitative data provided further insights into the effectiveness of the different instructional methods and their impact on EFL learners’ language learning experiences.
3. Results
Table 1 presents the descriptive statistics for the language proficiency pretest and posttest and the self-regulated learning scale pretest and posttest across the four groups. The mean scores for the language proficiency pretest were 48.03 (Group 1), 47.81 (Group 2), 48.07 (Group 3), and 48.28 (Group 4). For the language proficiency posttest, the mean scores were 93.39 (Group 1), 70.74 (Group 2), 70.83 (Group 3), and 47.93 (Group 4). On the self-regulated learning scale pretest, the mean scores were 3.76 (Group 1), 3.77 (Group 2), 3.80 (Group 3), and 3.91 (Group 4); on the posttest, they were 6.89 (Group 1), 5.74 (Group 2), 6.29 (Group 3), and 3.74 (Group 4). The standard deviations indicate the dispersion of scores around each mean, with higher values signifying greater variability within a group. For the language proficiency pretest, the group standard deviations ranged from 5.03 to 5.17; for the posttest, they were 5.49 (Group 1), 4.57 (Group 2), 4.36 (Group 3), and 4.81 (Group 4).
Table 2 presents the outcomes of the analysis of variance conducted for the language proficiency pretest and posttest and the self-regulated learning scale pretest and posttest. For the language proficiency pretest, the between-groups variation accounted for 15.120 units of the total sum of squares with 3 degrees of freedom; the mean square was 5.040 and the F-statistic 0.195, a non-significant result (p = .899). In contrast, the language proficiency posttest showed a considerable between-groups variation of 141022.186 units of the total sum of squares with 3 degrees of freedom; the mean square was 47007.395 and the F-statistic 2018.801, a highly significant result (p < .001). Similarly, for the self-regulated learning scale pretest, the between-groups variation was 2.038 units with 3 degrees of freedom; the mean square was 0.679 and the F-statistic 0.435, a non-significant result (p = .728). Finally, the self-regulated learning scale posttest showed a substantial between-groups variation of 763.947 units with 3 degrees of freedom; the mean square was 254.649 and the F-statistic 431.692, a highly significant result (p < .001).
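These statistics are internally consistent: dividing each between-groups mean square by its within-groups mean square reproduces the reported F values, and the p-values follow from the F distribution, as the brief check below (using the posttest values from Table 2) illustrates.

```python
from scipy import stats

# Sums of squares and degrees of freedom as reported in Table 2
# for the language proficiency posttest.
ss_between, df_between = 141022.186, 3
ss_within, df_within = 12620.364, 542

ms_between = ss_between / df_between                 # 47007.395
ms_within = ss_within / df_within                    # ~23.285
f_stat = ms_between / ms_within                      # ~2018.8, matching the reported F
p_value = stats.f.sf(f_stat, df_between, df_within)  # upper tail of the F distribution

print(f"F({df_between}, {df_within}) = {f_stat:.1f}, p = {p_value:.3g}")  # effectively zero (p < .001)
```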
Table 3 displays the ANOVA effect sizes for the language proficiency pretest and posttest and the self-regulated learning scale pretest and posttest, presented as point estimates with 95% confidence intervals (lower and upper bounds). For the language proficiency pretest, eta-squared was .001 (95% CI [.000, .006]), epsilon-squared was -.004 (95% CI [-.006, .000]), omega-squared (fixed-effect) was -.004 (95% CI [-.006, .000]), and omega-squared (random-effect) was -.001 (95% CI [-.002, .000]). The effect sizes for the language proficiency posttest were substantially larger (e.g., eta-squared = .918, 95% CI [.907, .926]), and the corresponding estimates for the self-regulated learning scale pretest and posttest are reported in the same format.
These effect sizes provide insight into the proportion of variance in the dependent variable that can be attributed to the independent variable, with values closer to 1 indicating a larger effect. The 95% confidence intervals offer a range within which the true population effect size is likely to fall. It is important to note that Eta-squared and Epsilon-squared are estimated based on the fixed-effect model, and negative but less biased estimates are retained without being rounded to zero.
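For reference, the point estimates in Table 3 are consistent with the standard one-way ANOVA effect-size formulas shown below; substituting the posttest sums of squares from Table 2, for instance, gives eta-squared ≈ 141022.186 / 153642.549 ≈ .918, matching the reported value. (Table 3 additionally reports a random-effects variant of omega-squared.)

```latex
\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}, \qquad
\varepsilon^2 = \frac{SS_{\text{between}} - df_{\text{between}}\,MS_{\text{within}}}{SS_{\text{total}}}, \qquad
\omega^2_{\text{fixed}} = \frac{SS_{\text{between}} - df_{\text{between}}\,MS_{\text{within}}}{SS_{\text{total}} + MS_{\text{within}}}
```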
Table 4 presents the outcomes of the Tukey Honestly Significant Difference (HSD) test, which compares the language proficiency pretest and posttest scores across the groups. The results include mean differences, standard errors, significance levels, and 95% confidence intervals for each comparison. On the language proficiency pretest, Group 1 (chatbot-based language learning support) exhibited a mean difference of 0.221 compared with Group 2 (adaptive learning algorithms for language acquisition), with a standard error of 0.616 and a non-significant p-value of 0.984; the 95% confidence interval for this comparison ranged from -1.37 to 1.81. Similar statistics were obtained for the comparisons between Group 1 and Group 3 (virtual reality language immersion experience), Group 1 and Group 4 (control group), Group 2 and Group 3, Group 2 and Group 4, and Group 3 and Group 4. On the language proficiency posttest, Group 1 demonstrated a substantial mean difference of 22.654 compared with Group 2, with a standard error of 0.585 and a highly significant p-value (< .001), indicating a statistically significant difference between these two groups. Notably, Group 1 showed the largest posttest gains relative to the other groups, as evidenced by the substantial mean differences and significant p-values in the posttest comparisons.
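A hedged sketch of how such pairwise comparisons can be produced with statsmodels' Tukey HSD implementation is shown below; the data frame, group labels, and scores are illustrative assumptions rather than the authors' data.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format posttest data; column names and values are illustrative only.
df = pd.DataFrame({
    "group": ["chatbot"] * 4 + ["adaptive"] * 4 + ["vr"] * 4 + ["control"] * 4,
    "posttest": [93, 95, 92, 94, 70, 72, 71, 69, 71, 70, 72, 70, 48, 47, 49, 48],
})

tukey = pairwise_tukeyhsd(endog=df["posttest"], groups=df["group"], alpha=0.05)
print(tukey.summary())  # mean differences, adjusted p-values, and 95% CIs for each pair of groups
```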
Qualitative analysis of the results revealed that the participants expressed high levels of agreement regarding the effectiveness of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion in enhancing various aspects of language learning and self-regulated learning. The participants indicated that chatbot-based support facilitated personalized language practice and feedback, while adaptive learning algorithms adapted to their learning pace and provided targeted exercises. Furthermore, virtual reality language immersion was perceived to create immersive and interactive language learning environments, enrich cultural understanding, and provide realistic and engaging language learning scenarios. The qualitative data from the interviews provided additional context and depth to the quantitative findings, offering valuable insights into the participants’ experiences and perspectives regarding the use of these technologies in language education.
The triangulation of QUAN-QUAL data revealed that the participants’ positive perceptions of the effectiveness of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion were supported by the quantitative results. This convergence of findings is consistent with the theoretical framework of constructivist and socio-cultural learning theories, which underpin the design and implementation of the technological interventions.
Specifically, the mean scores for the language proficiency posttest were notably higher for Group 1 (Chatbot-Based Support) compared to the other groups, aligning with the qualitative findings. This outcome suggests that the personalized and interactive nature of chatbot-based support may have contributed to more significant improvements in language proficiency. Additionally, the ANOVA results for language proficiency posttest indicated a significant between-groups variation, with Group 1 demonstrating a substantial mean difference compared to Group 2, providing statistical support for the qualitative perceptions. In contrast, the ANOVA results for language proficiency pretest did not show significant between-groups variation, consistent with the qualitative findings that focused on the perceived impact of the technological interventions on language learning outcomes rather than initial proficiency levels.
This triangulation highlights the convergence of both qualitative and quantitative findings, reinforcing the positive influence of chatbot-based support on language learning outcomes. The theoretical framework of constructivist and socio-cultural learning theories provides a lens through which to understand and interpret these findings, emphasizing the role of interactive and personalized learning experiences in language acquisition. This synthesis underscores the importance of integrating theoretical perspectives with empirical evidence to gain a comprehensive understanding of the impact of technological interventions on language learning outcomes.
4. Discussion
The results of this study provide compelling evidence of the impact of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion on the language learning proficiency and self-regulated learning skills of EFL learners. Notably, the findings underscore the significant influence of chatbot-based language learning support. The substantial mean difference and highly significant p-value indicate that personalized and interactive language practice facilitated by chatbot-based support has a pronounced impact on language proficiency outcomes for EFL learners. These results align with the growing body of literature emphasizing the potential of technology-driven personalized learning experiences in language education [7,15,30].
In addition to the influence of chatbot-based support, the study’s findings offer valuable insights into the potential of adaptive learning algorithms and virtual reality language immersion [31]. Although further research is necessary to fully elucidate their impact, the preliminary results suggest promising avenues for leveraging these technological interventions to optimize language education in the digital era. The customization of instruction through adaptive learning algorithms and the simulated authentic language experiences provided by virtual reality immersion hold promise for enhancing language learning proficiency and fostering self-regulated learning skills among EFL learners [32]. The study’s contribution to the ongoing discourse on technology-enhanced language education is significant, as it addresses the evolving landscape of language learning and the increasing demand for efficacious and personalized learning experiences. By providing empirical evidence on the efficacy of these technological interventions, this research enriches existing literature and underscores the potential of technology-enhanced language education to optimize language learning outcomes for EFL learners [33]. The findings of this study have implications for educators, curriculum developers, and policymakers, highlighting the need to integrate innovative technological approaches into language education to meet the diverse needs of EFL learners in today’s digital age.
5. Implications, Limitations, and Future Steps
The findings of this study have several implications for language educators, curriculum developers, and policymakers. The significant influence of chatbot-based language learning support on language proficiency outcomes suggests the potential for integrating personalized and interactive language practice into EFL language learning programs. Educators can leverage chatbot-based support to provide tailored language learning experiences that cater to the diverse needs of EFL learners, promoting engagement and motivation in language acquisition [30,34,35].
Theoretically, the potential of adaptive learning algorithms and virtual reality language immersion in enhancing language learning proficiency and self-regulated learning skills highlights the need for further exploration and integration of these technologies into language education. Curriculum developers can consider incorporating adaptive learning algorithms and virtual reality immersion experiences to create immersive and adaptive language learning environments that foster autonomous learning behaviors among EFL learners [7,33].
Pedagogically, the insights from this study emphasize the transformative potential of technology-driven interventions in language education. Policymakers can use these implications to advocate for the integration of innovative technological approaches into language education policies and initiatives, aiming to enhance language learning outcomes and promote digital literacy among EFL learners. The study’s implications extend to the broader context of technology-enhanced education, emphasizing the potential of technology-driven interventions in transforming language education practices and addressing the evolving needs of learners in the digital age [31].
While the findings of this study provide valuable insights, it is essential to acknowledge certain limitations. The generalizability of the results may be limited by the study’s specific context and sample characteristics. The study was conducted in a specific educational setting with a particular group of EFL learners, which may restrict the broader applicability of the findings. Future research should aim to replicate the study in diverse settings, such as different educational institutions or regions, and with larger sample sizes to enhance the external validity of the findings. By doing so, researchers can better assess the robustness and transferability of the study’s conclusions to a wider population of EFL learners. Additionally, the study’s focus on specific technological interventions, such as chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion, may limit the exploration of other emerging technologies that could also contribute to language learning proficiency and self-regulated learning skills. While these interventions were central to the study’s objectives, future research could explore a broader range of technological interventions, including mobile applications, gamified learning platforms, and natural language processing tools, to provide a comprehensive understanding of their impact on EFL language learning outcomes. By broadening the scope of technological interventions under investigation, researchers can gain deeper insights into the potential benefits and limitations of various technology-enhanced learning tools in the context of EFL education.
Building on the findings and limitations of this study, future research endeavors could explore the long-term effects of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion on EFL learners’ language proficiency and self-regulated learning skills. Longitudinal studies could provide insights into the sustained impact of these technological interventions on language acquisition and autonomous learning behaviors over extended periods. Furthermore, comparative studies that examine the effectiveness of different combinations of technological interventions could contribute to a deeper understanding of their relative impact on language learning outcomes. Exploring the synergistic effects of integrating multiple technological interventions could inform the development of comprehensive and adaptive language education programs for EFL learners. Finally, research focusing on the development of guidelines and best practices for integrating technological interventions into language education could provide valuable resources for educators and curriculum developers. By establishing evidence-based guidelines, future research can support the effective implementation of technology-enhanced language education initiatives, ultimately benefiting EFL learners and educators alike.
6. Conclusions
In summary, this research has yielded valuable insights into the influence of chatbot-based language learning support, adaptive learning algorithms, and virtual reality language immersion on the language learning proficiency and self-regulated learning skills of EFL learners. The results indicate the potential of these technological interventions to improve language learning outcomes and encourage independent learning behaviors among EFL learners. The implications for language educators, curriculum developers, and policymakers underscore the transformative capacity of integrating innovative technological approaches into language education. While the study’s limitations underscore the necessity for broader exploration and replication in diverse contexts, future research efforts could further elucidate the long-term effects of these technological interventions and contribute to the development of evidence-based guidelines for their effective integration into language education. By continuing to explore and refine the implementation of technology-enhanced language education initiatives, educators and policymakers can strive to optimize language learning outcomes for EFL learners in the digital age. The findings of this study enrich the ongoing discourse on technology-enhanced language education and emphasize the need for sustained research and innovation in this field. Ultimately, the integration of innovative technological approaches has the potential to revolutionize language education practices and address the evolving needs of EFL learners, paving the way for enhanced language learning experiences in the future.
Author Contributions
All authors have approved the submitted version. Conceptualization, AB; Methodology, AB; Software, AB; Validation, AB, MS and HS; Formal Analysis, AB; Investigation, AB; Resources, AB; Data Curation, AB; Writing – Original Draft Preparation, AB; Writing – Review & Editing, MS and HS.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Self-regulated Learning Scale Items and Descriptions
| Item | Focus | Statement | M | SD |
|---|---|---|---|---|
| 1 | Goal Setting | I set clear and achievable learning goals for myself. | 3.76 | 1.244 |
| 2 | Time Management | I effectively manage my time when engaging in language learning activities. | 3.77 | 1.259 |
| 3 | Self-Monitoring | I regularly monitor my progress and adjust my learning strategies accordingly. | 3.80 | 1.267 |
| 4 | Adaptive Learning Strategies | I adapt my learning approach based on my understanding of the language material. | 6.89 | .314 |
| 5 | Goal Setting | I set specific targets for improving my language skills. | 5.74 | .750 |
| 6 | Time Management | I allocate my study time efficiently to cover different language learning tasks. | 6.29 | .472 |
| 7 | Self-Monitoring | I keep track of my language learning progress and make changes as needed. | 6.89 | .314 |
| 8 | Adaptive Learning Strategies | I modify my learning methods to better suit my language learning needs. | 5.74 | .750 |
| 9 | Goal Setting | I establish measurable objectives to enhance my language proficiency. | 6.29 | .472 |
Appendix B. Perceptions of language learning support and immersion technologies
| No. | Question/Item | Focus | M | SD |
|---|---|---|---|---|
| 1. | I believe that the chatbot-based language learning support facilitated my language skills by providing personalized language practice and feedback. | Chatbot-Based Support | 5.89 | .313 |
| 2. | I believe that the chatbot-based language learning support facilitated my self-regulated learning by offering interactive and real-time language support. | Chatbot-Based Support | 5.45 | .327 |
| 3. | I believe that the adaptive learning algorithms enhanced my language learning experience by adapting to my learning pace and providing targeted exercises. | Adaptive Learning Algorithms | 5.67 | .320 |
| 4. | I believe that the adaptive learning algorithms enhanced my self-regulated learning by tracking my progress and offering tailored learning resources. | Adaptive Learning Algorithms | 5.89 | .313 |
| 5. | I believe that the virtual reality language immersion improved my language skills by creating immersive and interactive language learning environments. | Virtual Reality Immersion | 5.45 | .327 |
| 6. | I believe that the virtual reality language immersion improved my self-regulated learning by providing realistic and engaging language learning scenarios. | Virtual Reality Immersion | 5.67 | .320 |
| 7. | I believe that the chatbot-based language learning support enhanced my vocabulary acquisition by providing targeted word practice and explanations. | Chatbot-Based Support | 5.89 | .313 |
| 8. | I believe that the adaptive learning algorithms improved my language comprehension by adjusting the difficulty of learning materials based on my performance. | Adaptive Learning Algorithms | 5.45 | .327 |
| 9. | I believe that the virtual reality language immersion enriched my cultural understanding by simulating authentic language and cultural experiences. | Virtual Reality Immersion | 5.67 | .320 |
| 10. | I believe that the chatbot-based language learning support increased my motivation to learn by offering engaging and interactive learning activities. | Chatbot-Based Support | 5.89 | .313 |
| 11. | I believe that the adaptive learning algorithms promoted my autonomy in learning by providing opportunities for self-assessment and self-directed learning. | Adaptive Learning Algorithms | 5.45 | .327 |
| 12. | I believe that the virtual reality language immersion enhanced my language fluency by creating opportunities for real-time language use and communication. | Virtual Reality Immersion | 5.67 | .320 |
Appendix C. Interview questions and focus
| No. | Question/Item | Focus | M | SD |
|---|---|---|---|---|
| 1. | In your experience, did the chatbot-based language learning support contribute to your language skills? | Chatbot-Based Support | 3.94 | .254 |
| 2. | Can you share your perspective on how the chatbot-based language learning support impacted your self-regulated learning? | Chatbot-Based Support | 3.65 | .288 |
| 3. | How would you rate the effectiveness of the adaptive learning algorithms in enhancing your language learning experience? | Adaptive Learning Algorithms | 3.94 | .254 |
| 4. | From your experience, how did the adaptive learning algorithms contribute to your self-regulated learning? | Adaptive Learning Algorithms | 3.65 | .288 |
| 5. | Can you assess the impact of virtual reality language immersion on improving your language skills? | Virtual Reality Immersion | 3.94 | .254 |
| 6. | How would you rate the effectiveness of virtual reality language immersion in enhancing your self-regulated learning? | Virtual Reality Immersion | 3.65 | .288 |
| 7. | Based on your experience, how effective was the chatbot-based language learning support in enhancing your vocabulary acquisition? | Chatbot-Based Support | 3.94 | .254 |
| 8. | How do you assess the impact of the adaptive learning algorithms on improving your language comprehension? | Adaptive Learning Algorithms | 3.65 | .288 |
| 9. | From your perspective, how did virtual reality language immersion contribute to enriching your cultural understanding? | Virtual Reality Immersion | 3.94 | .254 |
| 10. | Can you evaluate whether the chatbot-based language learning support increased your motivation to learn? | Chatbot-Based Support | 3.45 | .2898 |
References
- Kondurkar, I., Raj, A., & Lakshmi, D. (2024). Modern Applications With a Focus on Training ChatGPT and GPT Models: Exploring Generative AI and NLP. In A. Obaid, B. Bhushan, M. S., & S. Rajest (Eds.), Advanced Applications of Generative AI and Natural Language Processing Models (pp. 186-227). IGI Global. [CrossRef]
- Tyagi, A. K. (2024). Transformative Effects of ChatGPT on the Modern Era of Education and Society: From Society’s and Industry’s Perspectives. In P. Baby Maruthi, S. Prasad, & A. Tyagi (Eds.), Machine Learning Algorithms Using Scikit and TensorFlow Environments (pp. 374-387). IGI Global. [CrossRef]
- Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28(1), 973-1018. [CrossRef]
- Al-Abdullatif, A. M., Al-Dokhny, A. A., & Drwish, A. M. (2023). Implementing the Bashayer chatbot in Saudi higher education: measuring the influence on students’ motivation and learning strategies. Frontiers in Psychology, 14, 1129070. [CrossRef]
- Deng, X., & Yu, Z. (2023). A meta-analysis and systematic review of the effect of chatbot technology use in sustainable education. Sustainability, 15(4), 2940. [CrossRef]
- Osadcha, K., Osadchyi, V., Semerikov, S., Chemerys, H., & Chorna, A. V. (2020). The review of the adaptive learning systems for the formation of individual educational trajectory. CEUR Workshop Proceedings, 2732, 547-558. https://elibrary.kdpu.edu.ua/handle/123456789/4130.
- Yang, F. C. O., Lo, F. Y. R., Hsieh, J. C., & Wu, W. C. V. (2020). Facilitating communicative ability of EFL learners via high-immersion virtual reality. Journal of Educational Technology & Society, 23(1), 30-49. https://www.jstor.org/stable/26915405.
- Ciekanski, M., Kalyaniwala, C., Molle, N., & Privas-Bréauté, V. (2020). Real and perceived affordances of Immersive Virtual Environments in a language teacher-training context: effects on the design of learning tasks. Revista Docência e Cibercultura, 4(3), 83-111. [CrossRef]
- Grassini, S. (2023). Shaping the future of education: exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences, 13(7), 692. [CrossRef]
- Zhang, R., Zou, D., & Cheng, G. (2023). A review of chatbot-assisted learning: pedagogical approaches, implementations, factors leading to effectiveness, theories, and future directions. Interactive Learning Environments, 1-29. [CrossRef]
- Merelo, J.J. et al. (2022). Exploring the Role of Chatbots and Messaging Applications in Higher Education: A Teacher’s Perspective. In: Zaphiris, P., Ioannou, A. (eds) Learning and Collaboration Technologies. Novel Technological Environments. HCII 2022. Lecture Notes in Computer Science, vol 13329. Springer, Cham. [CrossRef]
- Ramandanis, D., & Xinogalos, S. (2023). Investigating the Support Provided by Chatbots to Educational Institutions and Their Students: A Systematic Literature Review. Multimodal Technologies and Interaction, 7(11), 103. [CrossRef]
- Yang, Y., Wen, Y., & Song, Y. (2023). A systematic review of technology-enhanced self-regulated language learning. Educational Technology & Society, 26(1), 31-44. https://www.jstor.org/stable/48707965.
- Zhao, Y., & Lai, C. (2023). Technology and second language learning: Promises and problems. In Y. Zhao & C. Lai (Eds.), Technology-mediated learning environments for young English learners (pp. 167-206). London: Routledge. [CrossRef]
- Wu, J. G., Zhang, D., & Lee, S. M. (2024). Into the Brave New Metaverse: Envisaging Future Language Teaching and Learning. IEEE Transactions on Learning Technologies, 17, 44-53. [CrossRef]
- Authors et al. (2023).
- Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive science, 12(2), 257-285. [CrossRef]
- Dix, A., Finlay, J., Abowd, G. & Beale, R. (2004). Human-computer interaction. Harlow: Pearson Education.
- Author (2023).
- Authors et al. (2022).
- Jia, C., & Hew, K. F. (2023). Meeting the challenges of decoding training in English as a foreign/second language listening education: current status and opportunities for technology-assisted decoding training. Computer assisted language learning, 36(5-6), 1116-1145. [CrossRef]
- Chen, J. C., & Kent, S. (2020). Task engagement, learner motivation and avatar identities of struggling English language learners in the 3D virtual world. System, 88, 102168. [CrossRef]
- Suzuki, Y. (Ed.). (2023). Practice and automatization in second language research: Perspectives from skill acquisition theory and cognitive psychology. New York: Routledge.
- Han, S., Liu, M., Pan, Z., Cai, Y., & Shao, P. (2023). Making FAQ chatbots more Inclusive: an examination of non-native English users’ interactions with new technology in massive open online courses. International Journal of Artificial Intelligence in Education, 33(3), 752-780. [CrossRef]
- Kerimbayev, N., Umirzakova, Z., Shadiev, R., & Jotsov, V. (2023). A student-centered approach using modern technologies in distance learning: a systematic review of the literature. Smart Learning Environments, 10(1), 61. [CrossRef]
- Song, C., Shin, S. Y., & Shin, K. S. (2023). Optimizing Foreign Language Learning in Virtual Reality: A Comprehensive Theoretical Framework Based on Constructivism and Cognitive Load Theory (VR-CCL). Applied Sciences, 13(23), 12557. [CrossRef]
- Şahin Kızıl, A., & Savran, Z. (2018). Assessing self-regulated learning: The case of vocabulary learning through information and communication technologies. Computer Assisted Language Learning, 31(5-6), 599-616. [CrossRef]
- Miller Jr, R. G. (1997). Beyond ANOVA: basics of applied statistics. Florida: CRC press.
- Lawless, B., & Chen, Y. W. (2019). Developing a method of critical thematic analysis for qualitative communication inquiry. Howard Journal of Communications, 30(1), 92-106. [CrossRef]
- Silitonga, L. M., Hawanti, S., Aziez, F., Furqon, M., Zain, D. S. M., Anjarani, S., & Wu, T. T. (2023, August). The Impact of AI Chatbot-Based Learning on Students’ Motivation in English Writing Classroom. In International Conference on Innovative Technologies and Learning (pp. 542-549). Cham: Springer Nature Switzerland. [CrossRef]
- Chiu, T. K., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2023). Teacher support and student motivation to learn with Artificial Intelligence (AI) based chatbot. Interactive Learning Environments, 1-17. [CrossRef]
- Ait Baha, T., El Hajji, M., Es-Saady, Y., & Fadili, H. (2023). The impact of educational chatbot on student learning experience. Education and Information Technologies, 1-24. [CrossRef]
- Shaikh, S., Yayilgan, S. Y., Klimova, B., & Pikhart, M. (2023). Assessing the usability of ChatGPT for formal English language learning. European Journal of Investigation in Health, Psychology and Education, 13(9), 1937-1960. [CrossRef]
- Huang, W., Hew, K. F., & Fryer, L. K. (2022). Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. Journal of Computer Assisted Learning, 38(1), 237-257. [CrossRef]
- Kohnke, L. (2023). A pedagogical chatbot: A supplemental language learning tool. RELC Journal, 54(3), 828-838. [CrossRef]
Table 1.
Descriptive Statistics of Language Proficiency and Self-Regulated Learning Measures Across Experimental Groups.
Descriptives |
|
N |
Mean |
Std. Deviation |
Std. Error |
95% Confidence Interval for Mean |
Minimum |
Maximum |
Lower Bound |
Upper Bound |
Language Proficiency Pretest |
Group 1 |
136 |
48.03 |
5.074 |
.435 |
47.17 |
48.89 |
39 |
58 |
Group 2 |
136 |
47.81 |
5.165 |
.443 |
46.93 |
48.68 |
39 |
58 |
Group 3 |
137 |
48.07 |
5.039 |
.431 |
47.22 |
48.92 |
39 |
58 |
Group 4 |
137 |
48.28 |
5.033 |
.430 |
47.43 |
49.13 |
39 |
58 |
Total |
546 |
48.05 |
5.067 |
.217 |
47.62 |
48.47 |
39 |
58 |
Language Proficiency Posttest |
Group 1 |
136 |
93.39 |
5.492 |
.471 |
92.46 |
94.32 |
76 |
100 |
Group 2 |
136 |
70.74 |
4.566 |
.392 |
69.96 |
71.51 |
66 |
89 |
Group 3 |
137 |
70.83 |
4.362 |
.373 |
70.10 |
71.57 |
66 |
89 |
Group 4 |
137 |
47.93 |
4.810 |
.411 |
47.12 |
48.75 |
39 |
58 |
Total |
546 |
70.68 |
16.790 |
.719 |
69.27 |
72.09 |
39 |
100 |
Self-regulated learning scale pretest |
Group 1 |
136 |
3.76 |
1.244 |
.107 |
3.55 |
3.97 |
1 |
6 |
Group 2 |
136 |
3.77 |
1.259 |
.108 |
3.56 |
3.99 |
1 |
6 |
Group 3 |
137 |
3.80 |
1.267 |
.108 |
3.58 |
4.01 |
1 |
6 |
Group 4 |
137 |
3.91 |
1.228 |
.105 |
3.71 |
4.12 |
1 |
6 |
Total |
546 |
3.81 |
1.248 |
.053 |
3.70 |
3.91 |
1 |
6 |
Self-regulated learning scale posttest |
Group 1 |
136 |
6.89 |
.314 |
.027 |
6.84 |
6.94 |
6 |
7 |
Group 2 |
136 |
5.74 |
.750 |
.064 |
5.62 |
5.87 |
4 |
7 |
Group 3 |
137 |
6.29 |
.472 |
.040 |
6.21 |
6.37 |
5 |
7 |
Group 4 |
137 |
3.74 |
1.213 |
.104 |
3.54 |
3.95 |
1 |
6 |
Total |
546 |
5.66 |
1.410 |
.060 |
5.55 |
5.78 |
1 |
7 |
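For illustration, descriptive statistics of the kind reported in Table 1 (per-group N, mean, standard deviation, standard error, and a t-based 95% confidence interval for the mean) can be reproduced from a long-format score file. The sketch below is minimal and assumes hypothetical names ("efl_scores.csv", "group", "lp_posttest"); these are placeholders, not the study's actual materials.

```python
# Minimal sketch (hypothetical file and column names): group-level
# descriptives in the style of Table 1.
import pandas as pd
from scipy import stats

df = pd.read_csv("efl_scores.csv")  # hypothetical: one row per learner

def describe_by_group(data: pd.DataFrame, value_col: str) -> pd.DataFrame:
    """Return N, mean, SD, SE, 95% CI bounds, min, and max per group."""
    desc = data.groupby("group")[value_col].agg(
        N="count", Mean="mean", SD="std", Min="min", Max="max"
    )
    desc["SE"] = desc["SD"] / desc["N"] ** 0.5
    # t-based 95% confidence interval for each group mean (df = N - 1)
    t_crit = stats.t.ppf(0.975, desc["N"] - 1)
    desc["CI95_Lower"] = desc["Mean"] - t_crit * desc["SE"]
    desc["CI95_Upper"] = desc["Mean"] + t_crit * desc["SE"]
    return desc

print(describe_by_group(df, "lp_posttest"))
```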
Table 2. Analysis of Variance Results for Language Proficiency and Self-Regulated Learning Measures in EFL Learners.

| Measure | Source | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|---|
| Language Proficiency Pretest | Between Groups | 15.120 | 3 | 5.040 | .195 | .899 |
| | Within Groups | 13975.642 | 542 | 25.785 | | |
| | Total | 13990.762 | 545 | | | |
| Language Proficiency Posttest | Between Groups | 141022.186 | 3 | 47007.395 | 2018.801 | .000 |
| | Within Groups | 12620.364 | 542 | 23.285 | | |
| | Total | 153642.549 | 545 | | | |
| Self-Regulated Learning Scale Pretest | Between Groups | 2.038 | 3 | .679 | .435 | .728 |
| | Within Groups | 846.153 | 542 | 1.561 | | |
| | Total | 848.190 | 545 | | | |
| Self-Regulated Learning Scale Posttest | Between Groups | 763.947 | 3 | 254.649 | 431.692 | .000 |
| | Within Groups | 319.718 | 542 | .590 | | |
| | Total | 1083.665 | 545 | | | |
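To make the quantities in Table 2 concrete, the sketch below computes a one-way ANOVA decomposition (between- and within-groups sums of squares, mean squares, F, and p) by hand and cross-checks it against SciPy. It is illustrative only; the file name and column labels ("efl_scores.csv", "group", "lp_posttest") are hypothetical placeholders.

```python
# Minimal sketch (hypothetical file and column names): one-way ANOVA
# decomposition of the kind summarized in Table 2.
import pandas as pd
from scipy import stats

df = pd.read_csv("efl_scores.csv")            # hypothetical long-format file
scores, groups = df["lp_posttest"], df["group"]

grand_mean = scores.mean()
group_means = scores.groupby(groups).mean()
group_sizes = scores.groupby(groups).size()

# Sums of squares for the between-groups and within-groups sources
ss_between = (group_sizes * (group_means - grand_mean) ** 2).sum()
ss_within = ((scores - groups.map(group_means)) ** 2).sum()

df_between = groups.nunique() - 1
df_within = len(scores) - groups.nunique()

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_stat = ms_between / ms_within
p_value = stats.f.sf(f_stat, df_between, df_within)  # right-tail p-value

print(f"SS_between = {ss_between:.3f}, SS_within = {ss_within:.3f}")
print(f"F({df_between}, {df_within}) = {f_stat:.3f}, p = {p_value:.3f}")

# Cross-check against SciPy's built-in one-way ANOVA
samples = [s for _, s in scores.groupby(groups)]
print(stats.f_oneway(*samples))
```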
Table 3. ANOVA Effect Sizes for Language Proficiency and Self-Regulated Learning Measures in EFL Learners.

| Measure | Effect Size | Point Estimate | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|
| Language Proficiency Pretest | Eta-squared | .001 | .000 | .006 |
| | Epsilon-squared | -.004 | -.006 | .000 |
| | Omega-squared (fixed-effect) | -.004 | -.006 | .000 |
| | Omega-squared (random-effect) | -.001 | -.002 | .000 |
| Language Proficiency Posttest | Eta-squared | .918 | .907 | .926 |
| | Epsilon-squared | .917 | .906 | .926 |
| | Omega-squared (fixed-effect) | .917 | .906 | .926 |
| | Omega-squared (random-effect) | .787 | .762 | .806 |
| Self-Regulated Learning Scale Pretest | Eta-squared | .002 | .000 | .011 |
| | Epsilon-squared | -.003 | -.006 | .006 |
| | Omega-squared (fixed-effect) | -.003 | -.006 | .006 |
| | Omega-squared (random-effect) | -.001 | -.002 | .002 |
| Self-Regulated Learning Scale Posttest | Eta-squared | .705 | .667 | .734 |
| | Epsilon-squared | .703 | .665 | .733 |
| | Omega-squared (fixed-effect) | .703 | .664 | .733 |
| | Omega-squared (random-effect) | .441 | .398 | .477 |

a. Eta-squared and epsilon-squared are estimated based on the fixed-effect model.
b. Negative but less biased estimates are retained, not rounded to zero.
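For reference, the eta-squared, epsilon-squared, and fixed-effect omega-squared estimates in Table 3 follow the standard one-way ANOVA definitions (the random-effects variant rests on a variance-components formulation and is not reproduced here):

\[
\eta^{2}=\frac{SS_{\mathrm{between}}}{SS_{\mathrm{total}}},\qquad
\varepsilon^{2}=\frac{SS_{\mathrm{between}}-df_{\mathrm{between}}\,MS_{\mathrm{within}}}{SS_{\mathrm{total}}},\qquad
\omega^{2}_{\mathrm{fixed}}=\frac{SS_{\mathrm{between}}-df_{\mathrm{between}}\,MS_{\mathrm{within}}}{SS_{\mathrm{total}}+MS_{\mathrm{within}}}.
\]

Substituting the Language Proficiency Posttest values from Table 2 (SS_between = 141022.186, SS_total = 153642.549, df_between = 3, MS_within = 23.285) gives eta-squared ≈ .918 and epsilon-squared ≈ omega-squared (fixed) ≈ .917, consistent with the point estimates reported in Table 3.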
Table 4. Multiple Comparisons of Language Proficiency and Self-Regulated Learning Measures in EFL Learners Using Tukey’s Honestly Significant Difference (HSD) Test.

| Dependent Variable | (I) Group | (J) Group | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound |
|---|---|---|---|---|---|---|---|
| Language Proficiency Pretest | Group 1 | Group 2 | .221 | .616 | .984 | -1.37 | 1.81 |
| | | Group 3 | -.044 | .615 | 1.000 | -1.63 | 1.54 |
| | | Group 4 | -.248 | .615 | .978 | -1.83 | 1.34 |
| | Group 2 | Group 1 | -.221 | .616 | .984 | -1.81 | 1.37 |
| | | Group 3 | -.264 | .615 | .973 | -1.85 | 1.32 |
| | | Group 4 | -.469 | .615 | .871 | -2.05 | 1.12 |
| | Group 3 | Group 1 | .044 | .615 | 1.000 | -1.54 | 1.63 |
| | | Group 2 | .264 | .615 | .973 | -1.32 | 1.85 |
| | | Group 4 | -.204 | .614 | .987 | -1.79 | 1.38 |
| | Group 4 | Group 1 | .248 | .615 | .978 | -1.34 | 1.83 |
| | | Group 2 | .469 | .615 | .871 | -1.12 | 2.05 |
| | | Group 3 | .204 | .614 | .987 | -1.38 | 1.79 |
| Language Proficiency Posttest | Group 1 | Group 2 | 22.654* | .585 | .000 | 21.15 | 24.16 |
| | | Group 3 | 22.558* | .584 | .000 | 21.05 | 24.06 |
| | | Group 4 | 45.455* | .584 | .000 | 43.95 | 46.96 |
| | Group 2 | Group 1 | -22.654* | .585 | .000 | -24.16 | -21.15 |
| | | Group 3 | -.097 | .584 | .998 | -1.60 | 1.41 |
| | | Group 4 | 22.801* | .584 | .000 | 21.30 | 24.31 |
| | Group 3 | Group 1 | -22.558* | .584 | .000 | -24.06 | -21.05 |
| | | Group 2 | .097 | .584 | .998 | -1.41 | 1.60 |
| | | Group 4 | 22.898* | .583 | .000 | 21.40 | 24.40 |
| | Group 4 | Group 1 | -45.455* | .584 | .000 | -46.96 | -43.95 |
| | | Group 2 | -22.801* | .584 | .000 | -24.31 | -21.30 |
| | | Group 3 | -22.898* | .583 | .000 | -24.40 | -21.40 |
| Self-Regulated Learning Scale Pretest | Group 1 | Group 2 | -.015 | .152 | 1.000 | -.41 | .38 |
| | | Group 3 | -.038 | .151 | .994 | -.43 | .35 |
| | | Group 4 | -.155 | .151 | .735 | -.54 | .23 |
| | Group 2 | Group 1 | .015 | .152 | 1.000 | -.38 | .41 |
| | | Group 3 | -.024 | .151 | .999 | -.41 | .37 |
| | | Group 4 | -.140 | .151 | .790 | -.53 | .25 |
| | Group 3 | Group 1 | .038 | .151 | .994 | -.35 | .43 |
| | | Group 2 | .024 | .151 | .999 | -.37 | .41 |
| | | Group 4 | -.117 | .151 | .866 | -.51 | .27 |
| | Group 4 | Group 1 | .155 | .151 | .735 | -.23 | .54 |
| | | Group 2 | .140 | .151 | .790 | -.25 | .53 |
| | | Group 3 | .117 | .151 | .866 | -.27 | .51 |
| Self-Regulated Learning Scale Posttest | Group 1 | Group 2 | 1.147* | .093 | .000 | .91 | 1.39 |
| | | Group 3 | .598* | .093 | .000 | .36 | .84 |
| | | Group 4 | 3.145* | .093 | .000 | 2.91 | 3.38 |
| | Group 2 | Group 1 | -1.147* | .093 | .000 | -1.39 | -.91 |
| | | Group 3 | -.549* | .093 | .000 | -.79 | -.31 |
| | | Group 4 | 1.998* | .093 | .000 | 1.76 | 2.24 |
| | Group 3 | Group 1 | -.598* | .093 | .000 | -.84 | -.36 |
| | | Group 2 | .549* | .093 | .000 | .31 | .79 |
| | | Group 4 | 2.547* | .093 | .000 | 2.31 | 2.79 |
| | Group 4 | Group 1 | -3.145* | .093 | .000 | -3.38 | -2.91 |
| | | Group 2 | -1.998* | .093 | .000 | -2.24 | -1.76 |
| | | Group 3 | -2.547* | .093 | .000 | -2.79 | -2.31 |

*. The mean difference is significant at the 0.05 level.
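Pairwise contrasts of the kind reported in Table 4 can be generated with a Tukey HSD procedure such as the one sketched below; the resulting summary lists each (I, J) pair once, whereas Table 4 reports both orderings with the signs reversed. The file name and column labels are hypothetical placeholders, not the study's materials.

```python
# Minimal sketch (hypothetical file and column names): Tukey HSD pairwise
# comparisons of the kind reported in Table 4.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("efl_scores.csv")  # hypothetical long-format file

# Compare every pair of groups on the posttest while holding the
# family-wise error rate at alpha = .05, as Tukey's HSD does.
tukey = pairwise_tukeyhsd(
    endog=df["lp_posttest"],   # outcome scores
    groups=df["group"],        # group labels (e.g., Group 1-4)
    alpha=0.05,
)
print(tukey.summary())  # mean differences, adjusted p-values, 95% CIs
```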
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).