Preprint
Article

ChatGPT Impact on EFL Indian Undergraduates

This version is not peer-reviewed

Submitted: 14 May 2024
Posted: 23 May 2024

Abstract
This study investigates the impact of integrating ChatGPT on the academic writing performance and engagement of English as a Foreign Language (EFL) undergraduate students in Delhi, India, during the first semester of the academic year 2024/2025. It addresses the challenges within the Indian education system by exploring the effectiveness of AI-driven writing tools in enhancing writing skills and motivation among EFL learners. Employing a mixed-methods approach, the research evaluates ChatGPT's influence on 32 EFL Indian undergraduate students through draft revisions, questionnaires, and interviews. The findings highlight ChatGPT's positive impact on writing quality, engagement levels, and immediate feedback appreciation, emphasizing its potential in improving writing instruction and student outcomes in EFL contexts. Recommendations are provided for practitioners to integrate ChatGPT in writing exercises and for researchers to conduct longitudinal studies and explore ethical considerations in AI use for educational purposes, ultimately contributing to enhanced EFL education and technology-mediated learning.
Keywords: 
Subject: Arts and Humanities - Humanities

Introduction

In the ever-evolving realm of English as a Foreign Language (EFL) education, a web of interconnected challenges continues to exert significant influence on writing pedagogy, student engagement, and the adoption of advanced technologies like ChatGPT (Bašić et al., 2023; Dergaa et al., 2023; Song & Song, 2023). Central to these challenges is the persistent issue of limited access to personalized feedback, a crucial element in honing writing skills (Aljanabi et al., 2023; Herbold et al., 2023). The absence of timely and tailored feedback often leaves students grappling with identifying and rectifying writing errors, ultimately hindering their overall writing proficiency (Cooper, 2023; Halaweh, 2023; Yan, 2023a; Zhai, 2023). Furthermore, the daunting task of self-editing and revising drafts remains a hurdle for many EFL learners due to the absence of effective strategies and guidance, further impeding their writing progress. Alongside these challenges is the pervasive issue of low motivation and engagement levels among students in writing tasks, often stemming from uninspiring teaching methods, irrelevant assignments, and inadequate feedback mechanisms, collectively contributing to subpar learning experiences and outcomes.
Existing research in EFL education and writing instruction has made notable progress in identifying and tackling some of these challenges. Studies have explored the effectiveness of AI-driven writing tools like ChatGPT in providing feedback and generating content (Baidoo-Anu & Owusu Ansah, 2023; Hassani & Silva, 2023; Lund et al., 2023; Schmidt-Fajlik, 2023). However, these studies sometimes fall short in providing comprehensive insights into critical aspects such as improving critical writing skills, fostering engagement, and enhancing overall writing outcomes among EFL learners (Duha, 2023; Javaid et al., 2023; Koraishi, 2023; Liu, 2023; Yan, 2023b). This underscores a crucial research gap, particularly concerning the tailored application and impact of AI tools like ChatGPT within the Indian undergraduate EFL context. This gap emphasizes the urgent need for empirical studies that not only gauge the effectiveness of AI tools but also delve deeper into their nuanced effects on student engagement, writing proficiency, and autonomous learning. By highlighting these gaps, this study aims to offer fresh insights and much-needed evidence to inform effective interventions and pedagogical strategies.
Drawing from the existing literature, this study embarks on a comprehensive exploration of ChatGPT's integration as a potential solution to enhance writing skills and elevate engagement levels among Indian undergraduate EFL students. The study seeks to bridge critical gaps in the literature and provide actionable insights for educators and policymakers. Moreover, the timeliness of this study arises from the imperative to harness innovative technologies effectively to meet the evolving needs of EFL learners, particularly within the Indian educational landscape. To achieve its goals, the study poses two key research questions:
  • How do EFL students perceive the impact of ChatGPT integration on their academic writing performance?
  • To what extent does ChatGPT integration influence student engagement levels in academic writing tasks among EFL students?
This study addresses the identified gaps by examining ChatGPT's impact on enhancing writing skills and fostering engagement among EFL learners. By tackling these specific challenges prevalent in EFL education, it offers actionable insights and evidence-based recommendations. Through its methodology, the study aspires to make a substantial contribution to the advancement of EFL education practices, not only in India but also globally, promoting continual improvement and innovation in language education.

Literature Review

Problems in EFL Education and Students' Writing Performance

The realm of EFL education grapples with a multitude of challenges that wield significant influence over students' writing prowess. A pivotal concern revolves around the constrained availability of tailored feedback and instructional guidance in writing (Golparvar & Khafi, 2021; Kuyyogsuy, 2019). Traditional evaluation methods often result in prolonged feedback cycles, impeding students' capacity to refine and elevate their writing aptitude effectively (Chen & Zhang, 2019; Lu, 2021). This dearth of feedback mechanisms stymies students' advancement in honing expertise in academic writing, especially in environments where personalized writing aid is sparse, such as Indian undergraduate education. Moreover, EFL learners grapple with challenges in autonomously editing and revising their drafts (El-Maghraby, 2021; Teng & Zhang, 2020). The absence of structured methodologies and systematic feedback loops contributes to subpar writing outcomes. These hurdles are compounded by limited exposure to authentic writing tasks and inadequate support in mastering writing norms and coherence, further impeding students' strides toward producing top-notch academic compositions. Extensive studies have spotlighted these challenges in EFL education. For instance, research by Wu (2019) accentuates the pivotal role of timely and precise feedback in augmenting students' writing competencies. Nevertheless, traditional assessment methods often falter in delivering such feedback promptly, resulting in developmental gaps in students' writing journey. Similarly, studies by Inan-Karagul and Seker (2021) and Reynolds et al. (2021) underscore the significance of structured strategies in self-editing and revision, alongside the necessity for explicit guidance on writing norms and coherence. The existing reservoir of knowledge in EFL education underscores the pressing demand for inventive solutions and technological interventions to effectively tackle these challenges (El-Maghraby, 2021; Natalia et al., 2018; Roohani & Shafiee Rad, 2022). While some studies have delved into integrating AI-driven tools like ChatGPT, there remains a scarcity of empirical research comprehensively evaluating their impact on ameliorating students' writing performance, especially within Indian undergraduate settings.

Problems in EFL Engagement in Writing

Effectively engaging EFL learners in writing tasks stands out as a formidable challenge. Conventional teaching methods often struggle to maintain students’ interest and motivation throughout the writing process, resulting in disengagement and subpar writing outcomes (Reynolds et al., 2021; Wu, 2019). Insufficient avenues for peer feedback, collaborative writing endeavors, and real-world writing exposure further dampen engagement levels among EFL students, especially in academic scenarios (Inan-Karagul & Seker, 2021; Kuyyogsuy, 2019). Research indicates that integrating technology-enhanced learning methods, such as AI-driven writing tools like ChatGPT, could bolster student engagement in writing tasks (Bašić et al., 2023; Baskara, 2023; Yan, 2023a; Zheng & Zhan, 2023). However, understanding the specific impact of these tools on enhancing engagement levels and sustaining student motivation necessitates thorough investigation and empirical validation within the Indian EFL education framework. Earlier studies have also highlighted the complexities of EFL engagement in writing. For example, research by Dergaa et al. (2023) and Steiss et al. (2024) underscores the need to create interactive and dynamic writing environments that ignite students’ interest and promote collaboration. Additionally, studies by Baskara (2023) and Yan (2023b) stress the importance of offering ample opportunities for peer feedback and real-world writing encounters to boost student engagement and motivation. Despite these insights, there exists a gap in the literature regarding a comprehensive understanding of how AI-driven tools like ChatGPT can effectively enhance EFL engagement in writing, particularly within Indian educational settings.

Potential of ChatGPT and Its Challenges in Academic Writing

The potential of ChatGPT in the realm of academic writing is an immensely intriguing and significant topic. As an AI-driven writing assistance tool, ChatGPT shows promise in tackling the challenges encountered by EFL learners in academic writing tasks (e.g., Ariyaratne et al., 2023; Bašić et al., 2023; Herbold et al., 2023). Its natural language processing capabilities are crucial in facilitating immediate feedback, language refinement, and coherence enhancement—key elements of effective writing instruction. Furthermore, ChatGPT's capacity to offer personalized learning experiences and opportunities for self-directed writing practice substantially contributes to improving EFL students’ writing proficiency and engagement levels (Baskara, 2023; Yan, 2023a; Zheng & Zhan, 2023). Nevertheless, integrating ChatGPT into academic writing contexts comes with challenges. Previous studies by Adiguzel et al. (2023) and Steiss et al. (2024) have explored the potential benefits of incorporating ChatGPT into academic writing environments. Baskara and Mukarto (2023) highlighted ChatGPT’s features such as instant feedback and language refinement, while Liu (2023) and Yan (2023b) emphasized its role in nurturing personalized learning experiences and encouraging autonomous writing practice among students. These studies offer valuable insights into how ChatGPT can support students’ writing development. Despite its advantages, significant challenges remain associated with AI-driven tools like ChatGPT. These challenges highlight the urgent need for further research and development to effectively address issues such as privacy, data security, and ethical concerns within the context of academic writing instruction. Therefore, this study aims to build upon existing knowledge by conducting a comprehensive investigation into ChatGPT’s efficacy in addressing privacy, data security, and ethical considerations specific to the Indian undergraduate education setting. This endeavor will contribute to a more nuanced understanding of ChatGPT’s implications in academic writing education and pave the way for informed integration and utilization of AI technologies in educational settings.

Methodology

Research Design

This study adopts a mixed-methods research design to conduct a comprehensive evaluation of the impact of integrating ChatGPT on the academic writing performance and engagement of 32 EFL undergraduate students in Delhi, India. The use of a mixed-methods approach enables the triangulation of data from both quantitative and qualitative sources, facilitating a more robust and nuanced understanding of the research questions at hand.

Participants

The participants for this study are meticulously selected based on specific inclusion criteria. They are currently enrolled in the Academic Writing course during the first semester of the academic year 2024/2025, ensuring their active involvement in academic writing tasks throughout the research duration. The inclusion criteria also take into account the participants' diverse levels of English proficiency, reflecting the typical variability observed in EFL student populations.
Table 1 presents a detailed breakdown of the demographic characteristics of the participants, providing essential insights into their backgrounds and ensuring a representative sample for the study.
The demographic breakdown provided in Table 1 offers insightful glimpses into the makeup of the participant group, crucial for gauging the potential impact of integrating ChatGPT on student academic writing and engagement. Predominantly, the age distribution reveals that 26 out of 32 students (81.25%) fall within the 18-24 bracket, mirroring the typical profile of undergraduates engaged in refining their academic writing skills. In contrast, the 25-34 age group comprises 6 students (18.75%), reflecting the study's emphasis on a younger cohort. Notably, there are no participants in the 35+ age category, underscoring the focus on a specific demographic segment. Gender-wise, females are more heavily represented, with 20 female students (62.50%) compared to 12 males (37.50%). This gender dynamic could influence engagement levels and writing practices, a nuance critical for understanding student interactions with academic tasks. Regarding English proficiency, 18 students (56.25%) are beginners, whereas 14 (43.75%) are at an intermediate level, indicative of a sample geared toward foundational and intermediate writing skills development. The absence of advanced-level students in this group means the findings speak primarily to ChatGPT's efficacy at beginner and intermediate proficiency tiers. Considering these demographic facets ensures a well-rounded examination of ChatGPT's effectiveness within diverse EFL settings in Delhi, India, offering valuable insights into its pedagogical implications.

Instruments

A meticulously crafted survey, derived from Liu (2023) and Zhai (2023), plays a pivotal role in the data gathering endeavor. Tailored to gauge students’ viewpoints and encounters concerning academic writing prowess, engagement levels, and perspectives on technology integration, this survey is divided into three distinct dimensions. Each dimension comprises 10 items rated on a Likert scale from 1 to 5, elucidating varying degrees of agreement ranging from "strongly disagree" to "strongly agree." These dimensions encompass Writing Proficiency (WP), Engagement Factors (EF), and Attitudes Towards Technology (ATT). WP scrutinizes coherence, organization, linguistic adeptness, and other facets pertinent to writing proficiency. EF delves into motivation, interest, and the perceived utility of AI-driven writing aids like ChatGPT. Meanwhile, ATT investigates students’ stances regarding technology's integration into learning. This survey stands as an invaluable instrument for quantitatively gauging students’ outlooks and standpoints concerning academic writing and the amalgamation of technology. The survey's reliability, gauged via Cronbach’s alpha coefficient, manifests as 0.912 for WP, 0.896 for EF, and 0.906 for ATT, signifying robust internal consistency and dependability in capturing the intended constructs. Furthermore, drafts of academic writing tasks serve as tangible benchmarks for participants’ writing advancements, evaluated meticulously through qualitative rubrics founded on established criteria. These rubrics comprehensively evaluate writing elements such as content relevance, structural coherence, argumentative lucidity, linguistic adeptness, and adherence to conventions. Through systematic analysis, profound insights into participants’ strengths and areas for enhancement emerge, ensuring uniform and impartial evaluations. Additionally, semi-structured interviews conducted with a subset of participants deepen insights into their interactions with ChatGPT and other AI writing tools, unraveling perspectives on technology, ChatGPT’s impact on writing workflows, and suggestions for augmenting EFL education support. These interviews’ findings complement quantitative data, enriching the overall comprehension of the study.
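For readers who wish to see how internal consistency figures of this kind are obtained, the sketch below shows one common way to compute Cronbach's alpha for a 10-item Likert dimension in Python. It is illustrative only: the response matrix is randomly generated rather than the study's data, and the study itself reports alpha values of 0.912 (WP), 0.896 (EF), and 0.906 (ATT).

```python
# Illustrative sketch: computing Cronbach's alpha for a 10-item Likert dimension.
# The response matrix below is hypothetical; the study reports alpha = 0.912 (WP),
# 0.896 (EF), and 0.906 (ATT) for its actual questionnaire data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items (Likert 1-5)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the dimension
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 32 respondents x 10 items, Likert responses 1-5.
rng = np.random.default_rng(0)
wp_responses = rng.integers(1, 6, size=(32, 10))
print(f"Cronbach's alpha (hypothetical data): {cronbach_alpha(wp_responses):.3f}")
```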

Data Collection Procedure

The methodology employed for this research is intricately crafted to capture nuanced perspectives on participants’ writing abilities, engagement dynamics, and perceptions regarding ChatGPT. The process begins with the recruitment of 32 EFL undergraduate students from Delhi, India, who are currently enrolled in the Academic Writing course during the first semester of the academic year 2024/2025. A comprehensive orientation session familiarizes participants with the study’s aims, methodologies, potential risks, and advantages. Following this, informed consent is obtained from each participant prior to their active involvement in the research. Next, a questionnaire is distributed to participants via a WhatsApp group. This questionnaire incorporates Likert-scale items to quantitatively measure participants’ views on academic writing proficiency, engagement factors, and attitudes toward ChatGPT, using a scale ranging from 1 (strongly disagree) to 5 (strongly agree) for nuanced responses. Participants are then tasked with submitting their initial academic writing drafts electronically alongside the completed questionnaire, providing baseline metrics for their writing proficiency. These drafts undergo evaluation using qualitative rubrics that assess content relevance, coherence, argument clarity, language proficiency, and adherence to writing conventions. The rubrics are meticulously designed and scored to ensure uniformity and reliability in assessment.
During the intervention phase, participants independently interact with ChatGPT for draft revisions, utilizing its feedback and language correction functionalities. Revised drafts are reevaluated using the same rubrics to gauge improvements in quality and coherence following ChatGPT utilization. Additionally, the selection process for six EFL students for in-depth analysis is meticulously conducted, considering a range of factors to ensure a comprehensive grasp of ChatGPT’s impact. These participants are chosen based on their diverse writing capabilities, engagement levels, and experiences with ChatGPT during the intervention phase. Criteria such as the depth of insights gleaned from interactions with ChatGPT, the complexity of writing tasks tackled using the platform, and its potential influence on their writing processes are taken into consideration. Structured interviews are then conducted with these selected students, employing audio recording to facilitate detailed discussions about their perceptions, challenges, and benefits associated with ChatGPT usage. Through probing questions and extensive exploration, these interviews yield valuable insights into how ChatGPT shapes participants’ writing skills, engagement dynamics, and attitudes toward technology, contributing significantly to the overarching research outcomes.

Data Analysis

The investigation leverages SPSS 26 for quantitative exploration, specifically delving into Likert scale responses to assess participants’ sentiments concerning writing proficiency, technology integration, and their experiences with ChatGPT. The quantitative analysis employs descriptive statistics to encapsulate questionnaire findings, emphasizing the prevalence of various response choices and delineating the spectrum of agreement levels. Through this examination, a deeper understanding emerges regarding participants’ viewpoints and their levels of involvement. Moreover, qualitative inputs derived from academic writing drafts and interviews undergo a rigorous thematic analysis, uncovering pivotal themes, recurring patterns, and the nuanced perspectives of participants regarding ChatGPT's utilization. This comprehensive scrutiny delves into recurrent themes, hurdles encountered, and recommendations for enhancement, with the overarching goal of deciphering ChatGPT's impact on writing methodologies, educational experiences, and technological attitudes within the realm of EFL education. Furthermore, the study conducts an assessment of students’ work both pre and post ChatGPT intervention, employing qualitative rubrics as benchmarks. The initial drafts are instrumental in gauging writing proficiency across various dimensions such as content relevance, coherence, argumentative clarity, linguistic adeptness, and adherence to writing norms. Following revision aided by ChatGPT, the revised drafts are evaluated using the same rubrics to quantify enhancements in writing quality, coherence, and compliance with writing standards. This juxtaposed analysis yields tangible insights into the transformative influence of ChatGPT on writing efficacy.
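The study carried out its quantitative analysis in SPSS 26; the Python sketch below is offered only as an illustrative equivalent of that workflow, under the assumption of hypothetical data and hypothetical column names (WP.1-WP.10 and the five rubric criteria). It shows how response frequencies, item means and standard deviations, and pre/post rubric gains of the kind reported later can be tabulated.

```python
# Illustrative sketch (not the study's actual SPSS 26 procedure): descriptive
# statistics for Likert items and a pre/post rubric comparison.
# All data and column names below are hypothetical.
import pandas as pd

# Hypothetical questionnaire responses: 32 respondents, items WP.1-WP.10 rated 1-5.
likert = pd.DataFrame({f"WP.{i}": [3, 4, 5, 2] * 8 for i in range(1, 11)})

# Frequency (%) of each response option and per-item mean / standard deviation,
# mirroring the descriptive summaries reported for Tables 2-4.
freq_pct = likert.apply(lambda col: col.value_counts(normalize=True).mul(100)).fillna(0)
summary = likert.agg(["mean", "std"]).T.round(2)
print(freq_pct.round(2))
print(summary)

# Hypothetical rubric scores (1-5 per criterion) before and after the ChatGPT revision.
criteria = ["content", "coherence", "argument", "language", "conventions"]
pre = pd.DataFrame([[3, 2, 3, 2, 3]] * 32, columns=criteria)
post = pd.DataFrame([[4, 4, 4, 3, 4]] * 32, columns=criteria)
print((post.mean() - pre.mean()).rename("mean_gain"))  # average gain per criterion
```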

Ethical Considerations

The ethical framework underpinning this study is foundational, commencing with the imperative of securing informed consent from every participant prior to their engagement. Participants are meticulously briefed on the study's objectives, methodologies, potential risks, and anticipated benefits, fostering a culture of transparency and voluntary involvement. Paramount to this ethos is the preservation of confidentiality and anonymity throughout the research trajectory, with stringent measures in place to securely store and restrict access to data, confining it solely to authorized personnel. Upholding participants' autonomy, the study upholds their prerogative to withdraw from the study at any juncture sans repercussion. Moreover, ethical imperatives governing data protection are meticulously observed, ensuring that all data garnered are employed exclusively for research objectives, handled with integrity, and respecting participants' rights and privacy. This ethical stance underscores the study's commitment to ethical conduct and the welfare of its participants, bolstering the credibility and integrity of its findings.

Results and Findings

The implementation of ChatGPT has brought about significant improvements in the academic writing prowess and active participation of EFL Indian undergraduate students. By analyzing the influence of this implementation on writing abilities and engagement, this research seeks to offer profound perspectives on the revolutionary capabilities of ChatGPT in elevating the academic writing journey for EFL learners.
RQ1: How do EFL students perceive the impact of ChatGPT integration on their academic writing performance?
Exploring the perceptions of EFL students concerning the integration of ChatGPT reveals a multifaceted relationship between technology and writing proficiency. This investigation delves into the nuanced viewpoints of students regarding ChatGPT's effectiveness in augmenting their writing skills, offering valuable insights into the implications of AI tools within academic settings. Table 2 presents quantitative data that encapsulates students' perspectives on the integration of ChatGPT for enhancing writing proficiency.
The insights gleaned from Table 2 shed light on how users perceive the influence of ChatGPT on their writing skills, particularly in terms of coherence and related aspects. Analyzing the ten criteria assessed, we find a range of opinions, showcasing a nuanced perspective on ChatGPT's efficacy in improving different facets of writing. In the initial criterion (WP.1), 31.25% of respondents express strong agreement (SA) regarding ChatGPT's enhancement of clarity in writing (n = 10, Std. Dev = 0.92). This positive feedback implies a tangible boost to coherence, making written material more lucid and comprehensible. However, there are dissenting voices (n = 3, 9.38%) and neutrals (n = 5, 15.63%), indicating that the perceived clarity enhancement of ChatGPT might not be consistent across all users, potentially impacting coherence in diverse ways. Transitioning to WP.2, a significant majority (59.38%) either agree (A) or strongly agree (SA) that ChatGPT aids in structuring ideas effectively (n = 19, Std. Dev = 0.87), thus contributing to coherence in organizing content and thoughts. This collective positive sentiment resonates with ChatGPT's role in enhancing coherence. However, there are dissenting views (n = 4, 12.5%) and neutral responses (n = 6, 18.75%), indicating varying user experiences and potential effects on coherence in content organization. Within WP.3, approximately 31.25% strongly agree (SA) that ChatGPT facilitates clear expression of complex ideas (n = 10, Std. Dev = 0.91), which positively impacts coherence when conveying intricate concepts. This observation highlights ChatGPT's potential to bolster coherence in handling complex content. Nevertheless, some respondents disagree (n = 4, 12.50%) or remain neutral (n = 4, 12.5%), revealing a blend of perspectives and experiences concerning its impact on coherence. Moving to WP.4, where no definitive consensus emerges, the standard deviation of 0.92 reflects varied insights into ChatGPT's influence on coherence through the use of academic vocabulary. Responses are distributed across categories (n = 4, 12.5% for D; n = 5, 15.63% for N; n = 6, 18.75% for A; n = 7, 21.88% for SA), indicating diverse user experiences and potential impacts on coherence in academic writing.
In WP.5, a substantial portion (37.50%) strongly agrees (SA) that their writing skills have seen improvement with ChatGPT (n = 12, Std. Dev = 0.94), which positively impacts overall coherence in writing proficiency. The variability in responses (n = 2, 6.25% for D; n = 4, 12.5% for N; n = 5, 15.63% for A) indicates diverse impacts on coherence, highlighting individual experiences in skill enhancement and its coherence implications. WP.6 presents a standard deviation of 0.86, reflecting mixed insights into ChatGPT's influence on coherence in source citation practices. While some users may find it moderately helpful for coherence in referencing (n = 6, 18.75% for A; n = 8, 25% for SA), others may not perceive the same level of support (n = 5, 15.63% for D; n = 5, 15.63% for N), signifying varied experiences and potential impacts on coherence in citing sources. WP.7 indicates that 28.13% strongly agree (SA) that ChatGPT aids in producing well-structured paragraphs (n = 9, Std. Dev = 0.97), contributing positively to coherence in written organization. However, the high standard deviation suggests diverse views (n = 3, 9.38% for D; n = 6, 18.75% for N; n = 7, 21.88% for A), affecting coherence differently for users based on their perceptions of structured writing. In WP.8, 34.38% strongly agree (SA) that ChatGPT helps maintain coherence in writing (n = 11, Std. Dev = 0.90), with varied responses (n = 5, 15.63% for D; n = 3, 9.38% for N; n = 9, 28.13% for A). This suggests that while some users experience significant coherence maintenance support, others may not perceive the same level of impact, influencing coherence differently. WP.9 shows that 37.50% heavily rely on ChatGPT for grammar tasks (n = 12, Std. Dev = 0.95), impacting coherence differently across users. While some users find ChatGPT highly beneficial for coherence in grammar usage, others may not rely as heavily on it, affecting coherence maintenance in writing. Lastly, WP.10 reveals no clear majority in any category (Std. Dev = 0.91) with varied perceptions of feedback quality and usefulness (n = 3, 9.38% for D; n = 6, 18.75% for N; n = 6, 18.75% for A; n = 8, 25% for SA), influencing coherence differently among users. This indicates that the quality and usefulness of feedback provided by ChatGPT may influence coherence in writing improvement to varying extents among users.
The data paints a nuanced picture of ChatGPT's influence on coherence in writing. It illuminates a spectrum of effects ranging from positive outcomes like improved clarity, enhanced skills, and better organizational structure to varying experiences and viewpoints, hinting at the diverse impact of ChatGPT on coherence among users. These results underscore the need to tailor assessments of ChatGPT's coherence-enhancing abilities to individual user experiences and requirements in writing scenarios.
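As a quick check on the reporting convention used above, the short sketch below reproduces the arithmetic for a single 32-respondent Likert item: counts are converted to the percentages quoted in the text (e.g., n = 10 out of 32 gives 31.25%), and a sample standard deviation is computed from the expanded scores. The full five-option distribution used here is hypothetical, since the text reports only part of each item's breakdown.

```python
# Worked check of the reporting convention in Tables 2-4: with N = 32, a count of
# n = 10 corresponds to 10/32 = 31.25%. The full distribution below is hypothetical
# (the text reports only some of the WP.1 counts); it only illustrates the arithmetic.
from statistics import mean, stdev

N = 32
counts = {1: 2, 2: 3, 3: 5, 4: 12, 5: 10}   # SD, D, N, A, SA (hypothetical counts)
assert sum(counts.values()) == N

percentages = {option: round(n / N * 100, 2) for option, n in counts.items()}
scores = [option for option, n in counts.items() for _ in range(n)]
print(percentages)                                      # e.g. 10/32 -> 31.25 for SA
print(round(mean(scores), 2), round(stdev(scores), 2))  # item mean and sample Std. Dev.
```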
Examining the Engagement Factors (EF) outlined in Table 3 based on questionnaire responses, insights emerge regarding participants' attitudes and perceptions regarding ChatGPT's utility in writing assignments. The analysis brings to light aspects such as motivation, enjoyment, curiosity across different subjects, feedback appreciation, and the level of effort invested, reflecting an overall positive engagement with ChatGPT among the surveyed individuals.
The data from Table 3 provides insights into how participants engage with ChatGPT across various dimensions like motivation, enjoyment, interest in diverse topics, confidence, feedback appreciation, effort, active utilization, focus, belief in its importance for academic writing, and goal-setting effectiveness. In EF.1, a significant portion (31.25%) strongly agrees (SA) that ChatGPT is a motivator for writing more frequently (n = 10, Std. Dev = 0.92), indicating a positive impact on motivation and engagement. However, there are also instances of disagreement (n = 3, 9.38%) and neutrality (n = 4, 12.5%), showcasing varying levels of motivation influenced by ChatGPT. Moving to EF.2, a substantial percentage (40.62%) strongly agrees (SA) that ChatGPT enhances the enjoyment of writing (n = 13, Std. Dev = 0.90), signaling a significant positive effect on engagement and enjoyment. Nevertheless, a small fraction (3.12%) disagrees (D), while some respondents remain neutral (n = 3, 9.38%), reflecting diverse experiences in terms of enjoyment and engagement. EF.3 indicates that 34.38% strongly agree (SA) that ChatGPT ignites their interest in exploring diverse topics (n = 11, Std. Dev = 0.91), indicating a positive impact on engagement with varied content. However, some respondents express disagreement (n = 4, 12.5%) or neutrality (n = 4, 12.5%), suggesting differing levels of interest and engagement in exploring different topics facilitated by ChatGPT. In EF.4, 31.25% strongly agree (SA) that ChatGPT enhances their confidence in writing effectively (n = 10, Std. Dev = 0.87), signaling a positive impact on confidence and engagement. Nevertheless, some respondents disagree (n = 5, 15.63%) or are neutral (n = 3, 9.38%), indicating varied levels of confidence and engagement influenced by ChatGPT. Furthermore, EF.5 reveals that 34.38% of respondents highly value the feedback and suggestions provided by ChatGPT (n = 11, Std. Dev = 0.86), showcasing a notable positive impact on engagement through feedback appreciation. However, a segment of respondents disagrees (n = 5, 15.63%) or remains neutral (n = 4, 12.5%), indicating diverse levels of appreciation and engagement with ChatGPT’s feedback. Within EF.6, 37.5% strongly agree (SA) that ChatGPT motivates them to invest more effort into their writing tasks (n = 12, Std. Dev = 0.90), suggesting a tangible positive influence on effort and engagement. Nevertheless, some respondents disagree (n = 4, 12.5%) or are neutral (n = 4, 12.5%), revealing varying degrees of effort and engagement influenced by ChatGPT. EF.7 illustrates that 34.38% of respondents actively seek opportunities to utilize ChatGPT for writing practice (n = 11, Std. Dev = 0.88), pointing towards a positive impact on engagement and proactive usage. However, some respondents disagree (n = 4, 12.5%) or are neutral (n = 4, 12.5%), indicating differing levels of active utilization and engagement with ChatGPT. In EF.8, 37.5% strongly agree (SA) that ChatGPT aids in maintaining focus and engagement during writing sessions (n = 12, Std. Dev = 0.89), indicating a beneficial effect on concentration and engagement. Nonetheless, some respondents disagree (n = 4, 12.5%) or remain neutral (n = 4, 12.5%), showcasing varied experiences regarding focus and engagement during writing with ChatGPT. EF.9 demonstrates that 37.5% strongly believe in ChatGPT’s significance for enhancing their academic writing skills (n = 12, Std. Dev = 0.88), underscoring a positive impact on engagement and perceived importance for skill enhancement. However, some respondents disagree (n = 4, 12.5%) or are neutral (n = 4, 12.5%), highlighting differing levels of belief in ChatGPT’s importance for improving academic writing skills. Lastly, in EF.10, 34.38% strongly agree (SA) that establishing effective goals with ChatGPT has boosted their writing engagement (n = 11, Std. Dev = 0.90), signaling a positive impact on engagement through effective goal setting. Nevertheless, some respondents disagree (n = 4, 12.5%) or are neutral (n = 3, 9.38%), indicating varied experiences regarding goal-setting effectiveness and engagement influenced by ChatGPT. The analysis of Table 3 reveals a consistently positive influence of ChatGPT across various dimensions of participant engagement, including motivation, enjoyment, interest in diverse topics, confidence, feedback appreciation, effort, active utilization, focus, belief in its importance for academic writing, and goal-setting effectiveness. Nonetheless, variations in responses highlight the diversity of experiences and perceptions among users regarding ChatGPT's impact on their engagement in writing tasks.
Turning to the findings from Table 4, which explores Attitudes Towards Technology (ATT), participants' perceptions and attitudes towards utilizing ChatGPT for writing tasks are illuminated. The data indicates predominantly positive sentiments regarding comfort, skill enhancement, preference over traditional methods, utility of features, openness to learning, trust in feedback, usability, confidence, impact of feature exploration, and overall attitude improvement towards technology for writing tasks. However, these attitudes are tempered by diverse perspectives and experiences evident in the responses.
Table 4 offers an intricate analysis of participants’ attitudes toward technology, focusing particularly on their perceptions of employing ChatGPT for writing tasks. ATT.1 sheds light on the fact that 31.25% of respondents strongly agree (SA) that they are comfortable using ChatGPT for writing tasks (n=10, Std. Dev = 0.89), showcasing a positive inclination toward technology adoption and utilization. However, there are also expressions of disagreement (n = 3, 9.38%) or neutrality (n = 4, 12.5%), indicating differing levels of comfort and acceptance regarding ChatGPT. Moving to ATT.2, we find that 50% of respondents either agree (A) or strongly agree (SA) that ChatGPT significantly enhances their writing skills (n = 16, Std. Dev = 0.86), signaling a positive outlook on technology’s role in skill enhancement. Nevertheless, some respondents also express disagreement (n= 5, 15.63%) or neutrality (n = 7, 21.88%), showcasing varied perceptions regarding ChatGPT’s impact on improving writing skills. ATT.3 reveals that 59.38% prefer using ChatGPT over traditional methods for writing assistance (n = 19, Std. Dev = 0.88), indicating a strong inclination towards technology-based writing support. However, there are also expressions of disagreement (n = 4, 12.5%) or neutrality (n = 6, 18.75%), showcasing differing preferences and attitudes towards technology adoption in writing tasks. Additionally, ATT.4 demonstrates that a majority (53.13%) agree (A) or strongly agree (SA) that ChatGPT’s features are helpful and contribute to better writing outcomes (n = 17, Std. Dev = 0.87), underscoring a positive perception of technology’s utility and its contribution to enhancing writing quality. Nonetheless, some respondents also express disagreement (n = 5, 15.63%) or neutrality (n = 6, 18.75%), indicating diverse perspectives on ChatGPT’s features and their impact on writing outcomes. Moreover, ATT.5 reveals that 59.38% are open to learning and utilizing new technologies like ChatGPT (n = 19, Std. Dev = 0.88), indicating a positive attitude towards technology adoption and a willingness to explore new tools for writing assistance. However, there are also expressions of disagreement (n = 4, 12.5%) or neutrality (n = 6, 18.75%), suggesting varying levels of openness and willingness to adopt new technologies. In analyzing ATT.6, a substantial majority (50%) express agreement (A) or strong agreement (SA) in their trust in ChatGPT to provide accurate feedback (n = 16 Std. Dev = 0.87), indicating a positive perception of the technology's reliability in feedback provision. Conversely, a notable segment expresses disagreement (n = 5, 15.63%) or neutrality (n = 7, 21.88%), showcasing varying levels of trust in ChatGPT’s feedback accuracy. Moving to ATT.7, we observe that 59.38% agree (A) or strongly agree (SA) that ChatGPT has streamlined writing tasks, making them more efficient (n = 19, Std Dev = 0.89), highlighting a favorable perception of the technology’s usability and efficiency. However, there are also expressions of disagreement (n = 4, 12.5%) or neutrality (n = 6, 18.75%), indicating diverse experiences and views regarding ChatGPT’s ease of use and efficiency. In ATT.8, 50% agree (A) or strongly agree (SA) that they are confident in using ChatGPT for various writing purposes (n = 16, Std. Dev = 0.86), demonstrating a positive attitude towards the technology’s versatility and their proficiency in utilizing it. 
Nevertheless, dissenting opinions are present, with some expressing disagreement (n = 5, 15.63%) or neutrality (n = 7, 21.88%), reflecting differing levels of confidence in using ChatGPT. ATT.9 unveils that 53.13% agree (A) or strongly agree (SA) that exploring ChatGPT’s features has positively impacted their writing experience (n = 17, Std. Dev = 0.87), indicating a favorable perception of the technology’s impact on writing experience through feature exploration. However, there are also expressions of disagreement (n = 5, 15.63%) or neutrality (n = 6, 18.75%), suggesting varying experiences and perspectives regarding feature exploration and its impact. Lastly, in ATT.10, 59.38% agree (A) or strongly agree (SA) that ChatGPT has improved their overall attitude towards using technology for writing (n = 19 Std. Dev = 0.87), illustrating a positive overall impact of technology on their attitudes towards writing tasks. Nonetheless, differing viewpoints are evident, with some expressing disagreement (n = 4, 12.5%) or neutrality (n = 6, 18.75%), reflecting diverse attitudes and experiences concerning technology’s role in writing. The findings from Table 4 underline positive attitudes towards technology, particularly ChatGPT, across various dimensions such as comfort, skill enhancement, preference over traditional methods, feature utility, openness to learning, trust in feedback, usability, confidence, feature exploration impact, and overall attitude improvement towards technology for writing tasks. However, the responses also reflect diverse perspectives and experiences, emphasizing the importance of considering individual preferences and experiences when evaluating attitudes towards technology adoption in writing contexts.
RQ2: To what extent does ChatGPT integration influence student engagement levels in academic writing tasks among EFL students?
Delving into the realm of academic writing, our second research question (RQ) seeks to unravel the influence of integrating ChatGPT on the engagement levels of EFL students. Analyzing the works of these students, especially focusing on their initial and revised drafts, has uncovered several pivotal insights to comprehensively address this RQ.
At the genesis of our study, the evaluation of students' initial drafts served as a cornerstone for assessing their writing prowess. Utilizing meticulous qualitative rubrics covering content relevance, structural coherence, argument clarity, language proficiency, and adherence to writing norms, we established a robust and impartial evaluation framework. Transitioning into the intervention phase with ChatGPT, participants actively harnessed the tool to refine their drafts, leveraging its feedback, language correction, coherence enhancement, and improvement suggestions. This phase emphasized participant autonomy in engaging with ChatGPT, fostering a tailored and adaptive writing revision process. Following the ChatGPT intervention, participants submitted their revised drafts for evaluation using the same qualitative rubrics as the initial drafts, enabling a nuanced exploration of improvements in writing quality, coherence, and adherence to conventions.
The findings from this process unveiled significant enhancements across various dimensions of academic writing. Notably, there was a substantial improvement in the structural coherence and logical flow of ideas in the revised drafts, showcasing ChatGPT’s efficacy in refining organization and argumentation. Moreover, participants demonstrated advancements in language proficiency, showcasing improvements in academic vocabulary usage, grammar accuracy, and adherence to conventions. This underscores ChatGPT’s role in refining linguistic expression in academic contexts.
Despite these positive outcomes, challenges such as potential over-reliance on ChatGPT and maintaining originality surfaced. Some participants grappled with balancing ChatGPT’s assistance while preserving their unique writing style and voice. This highlights the significance of nurturing critical thinking and autonomy alongside technological integration in academic writing processes. Our study emphasizes not just the impact of ChatGPT on writing outcomes but also the importance of cultivating students’ individuality and critical thought in their academic endeavors.
To provide a concrete illustration, Figure 1 presents a segment from a student's initial draft. Through a meticulous analysis employing established qualitative rubrics, we can pinpoint precise areas where enhancements were needed in coherence, organization, language proficiency, and adherence to writing standards. This detailed examination serves as a tangible demonstration of how integrating ChatGPT can elevate the academic writing skills of English as a Foreign Language (EFL) students.
Figure 2 depicts a juxtaposition between the initial and revised versions of a student's academic composition, showcasing the discernible improvements attributed to the integration of ChatGPT. In the preliminary iteration (Figure 1), the text exhibited various deficiencies, notably in coherence and organization. The paragraphs lacked a discernible structure, presenting ideas disjointedly and impeding a seamless flow. Moreover, language proficiency issues, including grammatical inaccuracies and inconsistencies in academic vocabulary employment, were apparent.
Following revision aided by ChatGPT, substantial enhancements emerged in the subsequent draft (Figure 2). The revised passages showcased a more cohesive progression of ideas, with enhanced coherence and structural arrangement. Refinements in transitions between sentences and paragraphs contributed to a clearer and more unified argumentation.
ChatGPT's feedback mechanisms proved instrumental in addressing language proficiency concerns, leading to improved grammatical accuracy, refined academic vocabulary utilization, and adherence to writing conventions. Figure 2 exemplifies the integration of ChatGPT's feedback, resulting in the elimination of redundant phrases, sentence streamlining, and overall text readability enhancement. The revisions signify a deeper exploration of the topic, evident through expanded analyses, increased evidence integration, and a more nuanced presentation of ideas. Furthermore, the student effectively maintained originality while leveraging ChatGPT's suggestions, striking a harmonious balance between technological assistance and preserving their unique writing style and voice. Figure 2 underscores the tangible impact of ChatGPT integration in enhancing the quality of academic writing among English as a Foreign Language (EFL) students. The improvements in coherence, organization, language proficiency, and adherence to writing conventions collectively contribute to a polished final draft, underscoring the efficacy of technology in fostering student engagement and proficiency in academic writing endeavors.
Additionally, insights gleaned from interviews conducted with six chosen EFL students (referred to as P1-P6 for anonymity) offer a nuanced perspective on the influence of ChatGPT on their writing proficiency, engagement, and technological perception. Thematic analysis unveiled a spectrum of viewpoints encompassing both favorable and unfavorable observations across various pivotal themes.

Theme 1: Writing Improvement

The interviews with the selected EFL students provided valuable insights into how ChatGPT influenced their writing skills, particularly focusing on improvement. The participants shared a mix of positive and negative experiences regarding the tool’s feedback and suggestions.
Participant 1 (P1) emphasized, “ChatGPT’s feedback helped me improve the structure of my essays, leading to clearer arguments.” This indicates that ChatGPT’s feedback played a crucial role in enhancing the organization and coherence of P1’s writing, resulting in more effective communication of ideas. Similarly, Participant 4 (P4) expressed, “Using ChatGPT’s language suggestions improved the fluency and coherence of my writing.” This suggests that P4 found ChatGPT’s language assistance beneficial in enhancing the flow and coherence of their written work, contributing to overall writing improvement. These positive insights underscore the value of ChatGPT in providing targeted feedback and language suggestions that contribute to structural and linguistic enhancements in the students’ writing. The specific examples provided by P1 and P4 demonstrate the tangible impact of ChatGPT on enhancing the clarity and coherence of their written compositions.
However, Participant 3 (P3) mentioned, “While ChatGPT offered helpful suggestions, I sometimes struggled to incorporate them effectively into my writing.” This indicates that despite the usefulness of ChatGPT’s suggestions, P3 faced challenges in seamlessly integrating them into their writing, possibly due to issues such as adapting suggestions to fit their writing style or context. Similarly, Participant 6 (P6) also noted, “I found that ChatGPT’s corrections were sometimes too general, and I needed more specific feedback.” This suggests that P6 encountered limitations with ChatGPT’s feedback, particularly in terms of its specificity, which may have impacted their ability to address specific areas for improvement. These insights highlight potential challenges and limitations associated with ChatGPT’s feedback and corrections. P3’s experience underscores the importance of ensuring that AI-driven feedback aligns well with individual writing styles and contexts to facilitate effective incorporation. Likewise, P6’s feedback emphasizes the need for more tailored and specific feedback from ChatGPT to address nuanced aspects of writing, indicating areas for potential enhancement in the tool’s functionality.

Theme 2: Engagement and Motivation

The interviews conducted with the chosen EFL students provided valuable insights into their levels of engagement and motivation while utilizing ChatGPT for writing tasks. Each participant's experience varied, showcasing a blend of positive and negative encounters with the tool.
Participant 2 (P2) expressed, “Interacting with ChatGPT motivated me to write more frequently and explore new topics.” This highlights how ChatGPT’s interactive elements played a pivotal role in boosting P2’s motivation, fostering a keen interest in writing and a desire to delve into diverse subjects. Similarly, Participant 5 (P5) mentioned, “ChatGPT’s interactive features kept me engaged during writing sessions.” This indicates that P5 found ChatGPT’s interactive aspects effective in maintaining focus and sustaining interest, leading to prolonged engagement during writing sessions. These observations underline ChatGPT’s positive impact on enhancing engagement and motivation among EFL students. The experiences shared by P2 and P5 underscore the tool’s ability to spark curiosity, promote frequent writing, and sustain engagement through interactive functionalities, thereby enriching the writing experience.
Despite these positive experiences, Participant 1 (P1) expressed, “I sometimes felt overwhelmed by the amount of feedback from ChatGPT, which affected my motivation.” This suggests that while ChatGPT offers feedback, its volume or nature may occasionally overwhelm P1, influencing their motivation to continue writing. Participant 4 (P4) also noted, “While ChatGPT was helpful, I still preferred human feedback for more personalized guidance.” This indicates that P4 valued human feedback over ChatGPT’s feedback, indicating a preference for tailored guidance in the writing process. These insights shed light on potential challenges in maintaining optimal levels of motivation and engagement with ChatGPT. P1’s experience emphasizes the need to strike a balance in feedback volume to prevent overwhelming students, while P4’s preference for human feedback highlights the significance of considering individual preferences for guidance in writing tasks.

Theme 3: Attitudes Towards Technology

The interviews conducted also yielded valuable insights into the participants’ attitudes towards technology, particularly in relation to their experiences with ChatGPT during writing tasks.
Participant 3 (P3) mentioned, “Using ChatGPT improved my confidence in using technology for writing tasks.” This indicates that P3 perceived ChatGPT as a tool that boosted their confidence in employing technology for writing purposes, showcasing a positive attitude towards technology integration. Participant 6 (P6) stated, “ChatGPT introduced me to new writing tools and made me more open to technology in education.” This suggests that P6’s interaction with ChatGPT resulted in a broader acceptance and receptiveness towards technology in educational settings, indicating a favorable shift in attitudes. These insights highlight ChatGPT’s role in fostering positive attitudes towards technology among EFL students. P3’s enhanced confidence in technology use and P6’s openness to new tools exemplify the tool’s potential to influence perceptions and attitudes towards technology integration in educational contexts.
However, not all experiences were entirely positive. Participant 2 (P2) expressed, “I had concerns about the reliability of ChatGPT’s suggestions, which affected my trust in the tool.” This indicates that P2 harbored doubts regarding the reliability of ChatGPT’s suggestions, leading to a decrease in trust in the tool’s capabilities. Participant 5 (P5) noted, “While ChatGPT was useful, I still prefer traditional writing methods for certain tasks.” This suggests that despite finding ChatGPT beneficial, P5 maintained a preference for traditional writing approaches in specific contexts. These insights highlight challenges related to trust in AI tools and a persistent preference for traditional methods despite the utility of ChatGPT. P2’s concerns about reliability stress the importance of ensuring accuracy and dependability in AI-driven tools to build and maintain trust among users, while P5’s preference underscores the ongoing relevance of traditional writing methods alongside technological advancements.
The interviews with the six selected EFL students provided a holistic view of their encounters with ChatGPT in academic writing tasks. Across themes of writing enhancement, engagement and motivation, and attitudes towards technology, a spectrum of positive and negative insights emerged. While participants acknowledged the benefits of ChatGPT’s feedback in enhancing writing quality and motivation, challenges such as feedback effectiveness, trust in AI suggestions, and balancing technology with traditional teaching methods were also highlighted. These findings underscore the nuanced nature of integrating AI tools like ChatGPT into academic writing contexts. They provide valuable guidance for optimizing the use of such tools to support and enhance, rather than replace, writing instruction in EFL environments.

Discussion

This study endeavors to conduct a comprehensive analysis of the impacts of integrating ChatGPT into academic writing tasks among EFL students, focusing on writing enhancement, engagement levels, and attitudes toward technology.
Consistent with prior research, our study reaffirms the significant role of AI tools such as ChatGPT in improving writing quality by providing feedback on various dimensions like structure, coherence, and fluency. These outcomes resonate with participants' recognition of the advantages of AI-driven feedback in refining their writing skills (Bašić et al., 2023; Yan, 2023a; Zheng & Zhan, 2023). However, a noteworthy departure from past studies was observed regarding the effectiveness of incorporating AI suggestions into the writing process. While some participants found ChatGPT's feedback beneficial and easily integrable, others faced challenges in effectively utilizing the suggestions, echoing the difficulties noted by Baskara (2023) and Steiss et al. (2024) in their AI-assisted writing research.
Additionally, our study thoroughly investigates engagement and motivation, a key focus in prior AI integration research in education. Consistent with Dergaa et al. (2023) and Song and Song (2023), our participants reported heightened motivation and engagement due to their interactions with ChatGPT. Particularly, the tool's interactive features received praise for sustaining student engagement during writing sessions. However, conflicting viewpoints emerged concerning the impact of feedback overload on motivation, an aspect not extensively explored in earlier studies (Aljanabi et al., 2023; Yan, 2023a).
Regarding attitudes toward technology, our study aligns with Cooper (2023) and Zheng and Zhan (2023) in revealing a positive shift in participants' attitudes toward utilizing AI tools for writing tasks. Participants expressed increased confidence in using technology and a willingness to explore new writing tools. This positive trend mirrors findings from previous studies on technology adoption in educational environments (Adiguzel et al., 2023; Javaid et al., 2023). However, concerns about the reliability of AI suggestions and preferences for traditional methods for certain tasks were evident, indicating a nuanced perspective warranting further exploration.
A key contribution of our study lies in the detailed exploration of specific challenges and opportunities arising from ChatGPT's integration into academic writing tasks. For example, while the tool's feedback was generally beneficial, participants highlighted challenges in interpreting and effectively applying the suggestions, especially in complex writing tasks. This finding contrasts with earlier studies emphasizing the ease of use and immediate impact of AI feedback on writing quality (Bašić et al., 2023; Zheng & Zhan, 2023). Moreover, our study extends the discourse on technology attitudes by examining the interplay between perceived usefulness, trust, and skepticism toward AI tools. While participants generally recognized ChatGPT's benefits, concerns about data privacy, algorithmic bias, and the limitations of AI-generated feedback were expressed. This nuanced perspective underscores the importance of addressing ethical and technical considerations in AI integration initiatives to build trust and ensure responsible technology use in education (Baskara, 2023; Steiss et al., 2024).
Our findings present a nuanced view of ChatGPT's integration into academic writing tasks among EFL students, confirming and deviating from prior research. Consistent with existing literature, the study confirms the significant impact of AI tools like ChatGPT in improving writing quality, with participants acknowledging the benefits of AI-driven feedback. However, differing from previous studies, our research emphasizes the need for cautious integration of AI suggestions into the writing process for maximum effectiveness. Furthermore, the study highlights ChatGPT's positive influence on student engagement and motivation, in line with findings on AI integration in education. Nonetheless, concerns about feedback overload and preferences for traditional methods indicate a complex relationship between AI support and student motivation. These findings stress the importance of adopting nuanced approaches to AI integration and suggest areas for further research to explore the long-term effects and ethical considerations of AI use in educational settings.

Conclusions

The outcomes of this investigation reveal a nuanced understanding of integrating ChatGPT into academic writing tasks among EFL students. Encouragingly, participants consistently reported improvements in writing quality, crediting ChatGPT’s feedback with strengthening structural coherence, language fluency, and clarity of argumentation. These findings affirm the effectiveness of AI-driven tools in supporting writing processes. The research also revealed a positive influence on student engagement and motivation, as ChatGPT’s interactive features stimulated a spirit of exploration and encouraged regular writing practice.

Alongside these positive findings, the study brought to light several challenges and negative perceptions. Some participants were concerned about the overwhelming volume of feedback from ChatGPT, signaling a need for more tailored, context-specific suggestions to mitigate information overload. In addition, although ChatGPT contributed substantially to writing improvement, students at times favored human feedback for its personalized and nuanced guidance, indicating a preference for a hybrid approach that blends AI tools with human input.

These findings carry dual implications. They underscore the potential of AI technologies like ChatGPT to enhance writing skills and foster student engagement, enabling educators to provide timely, constructive feedback that enriches the learning journey. At the same time, they highlight the importance of carefully integrating and customizing AI suggestions to address individual student needs and preferences.

Several limitations should be acknowledged. The study’s focus on a specific cohort of EFL students within a particular educational setting may restrict the generalizability of the findings, and the reliance on self-reported data and the subjective nature of feedback assessment pose inherent challenges in capturing the full range of student experiences and perspectives. To overcome these limitations, future studies could adopt a longitudinal approach to assess the sustained impacts of AI integration on writing skills and student motivation, and including more diverse cohorts and alternative AI tools could offer a more comprehensive understanding of technology’s role in education. Overall, this study provides valuable insights into the intricate dynamics of AI integration in academic writing tasks and lays a foundation for further exploration in this evolving domain.

References

1. Adiguzel, T.; Kaya, M.H.; Cansu, F.K. Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemp. Educ. Technol. 2023, 15, ep429. [Google Scholar] [CrossRef]
2. Aljanabi, M., Ghazi, M., Ali, A. H., Abed, S. A., & ChatGpt. (2023). ChatGpt: Open possibilities. Iraqi Journal for Computer Science and Mathematics, 4(1), 62-64.
3. Ariyaratne, S., Iyengar, K. P., Nischal, N., Chitti Babu, N., & Botchu, R. (2023). A comparison of ChatGPT-generated articles with human-written articles. Skeletal Radiology.
4. Baidoo-Anu, D.; Ansah, L.O. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. SSRN Electronic Journal 2023, 7, 52–62. [Google Scholar] [CrossRef]
5. Bašić, Ž., Banovac, A., Kružić, I., & Jerković, I. (2023). ChatGPT-3.5 as writing assistance in students’ essays. Humanities and Social Sciences Communications, 10(1), 750.
6. Baskara, F.R. Integrating ChatGPT into EFL writing instruction: Benefits and challenges. Int. J. Educ. Learn. 2023, 5, 44–55. [Google Scholar] [CrossRef]
7. Baskara, F. X. R., & Mukarto, F. X. (2023). Exploring the implications of ChatGPT for language learning in higher education. Indonesian Journal of English Language Teaching and Applied Linguistics, 7(2), 343-358.
8. Cassidy, C. (2023). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. The Guardian Online. [Link](https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays).
9. Chapelle, C. A. (2001). Computer applications in second language acquisition: Foundations for teaching, testing, and research. Cambridge University Press eBooks. [Link](http://assets.cambridge.org/97805216/26460/frontmatter/9780521626460_frontmatter.pdf).
10. Chen, J.; Zhang, L.J. Assessing student-writers’ self-efficacy beliefs about text revision in EFL writing. Assess. Writ. 2019, 40, 27–41. [Google Scholar] [CrossRef]
11. Cooper, G. Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. J. Sci. Educ. Technol. 2023, 32, 444–452. [Google Scholar] [CrossRef]
12. Dergaa, I.; Chamari, K.; Zmijewski, P.; Ben Saad, H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol. Sport 2023, 40, 615–622. [Google Scholar] [CrossRef] [PubMed]
13. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv. [CrossRef]
14. Duha, M. S. U. (2023). ChatGPT in education: An opportunity or a challenge for the future? TechTrends, 67(3), 402-403.
15. Eguaras, R. C., Ugalde, M. C., & Matas, G. M. (2021). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 1(04), 1-19. [CrossRef]
16. El-Maghraby, A.-S. A. (2021). Investigating the effectiveness of Moodle based blended learning in developing writing skills for university students. Journal of Research in Curriculum Instruction and Educational Technology, 7(1), 115-140.
17. Elsen-Rooney, M. (2023). NYC bans access to ChatGPT on school computers, networks. Chalkbeat New York. [Link](https://ny.chalkbeat.org/2023/1/3/23537987/nyc-schools-ban-chatgpt-writing-artificial-intelligence).
18. Floridi, L., & Chiriatti, M. (2020). GPT-3: Its Nature, Scope, Limits, and Consequences. Minds and Machines, 30(4), 681–694. [CrossRef]
19. Fryer, L. K., Coniam, D., Carpenter, R., et al. (2020). Bots for language learning now: Current and future directions. Language Learning & Technology, 24(2), 8–22. [Link](http://hdl.handle.net/10125/44719).
20. Golparvar, S.E.; Khafi, A. The role of L2 writing self-efficacy in integrated writing strategy use and performance. Assess. Writ. 2020, 47, 100504. [Google Scholar] [CrossRef]
21. Halaweh, M. ChatGPT in education: Strategies for responsible implementation. Contemp. Educ. Technol. 2023, 15, ep421. [Google Scholar] [CrossRef]
22. Hassani, H.; Silva, E.S. The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field. Big Data Cogn. Comput. 2023, 7, 62. [Google Scholar] [CrossRef]
23. Herbold, S., Hautli-Janisz, A., Heuer, U., Kikteva, Z., & Trautsch, A. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports, 13(1), 18617. [CrossRef]
24. Huang, W.; Hew, K.F.; Fryer, L.K. Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. J. Comput. Assist. Learn. 2021, 38, 237–257. [Google Scholar] [CrossRef]
25. Hwang, G.-J.; Chang, C.-Y. A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. 2023, 31, 4099–4112. [Google Scholar] [CrossRef]
26. İnal, S.; Turhanlı, I. Teachers’ opinions on the use of L1 in EFL classes. J. Lang. Linguist. Stud. 2019, 15, 861–875. [Google Scholar] [CrossRef]
27. Jones, R., & Hafner, C. (2022). Understanding Digital Literacies: A Practical Introduction. New York: Routledge. [CrossRef]
28. Kerly, A.; Hall, P.; Bull, S. Bringing chatbots into education: Towards natural language negotiation of open learner models. Knowledge-Based Syst. 2007, 20, 177–185. [Google Scholar] [CrossRef]
29. Khan Academy. (2023). World-class AI for education: Say hello to Khanmigo, Khan Academy’s AI-powered guide. Tutor for learners. Assistant for teachers. [Link](https://www.khanacademy.org/khan-labs).
30. King, M. R., & ChatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16, 1-2. [CrossRef]
31. Koraishi, O. (2023). Teaching English in the age of AI: Embracing ChatGPT to optimize EFL materials and assessment. Language Education & Technology, 3(1), 55-72.
32. Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inf. Technol. 2022, 28, 973–1018. [Google Scholar] [CrossRef]
33. Kukulska-Hulme, A.; Shield, L. An overview of mobile assisted language learning: From content delivery to supported collaboration and interaction. ReCALL 2008, 20, 271–289. [Google Scholar] [CrossRef]
34. Latham, A.; Crockett, K.; McLean, D.; Edmonds, B. A conversational intelligent tutoring system to automatically predict learning styles. Comput. Educ. 2012, 59, 95–109. [Google Scholar] [CrossRef]
35. Lee, J. H., Yang, H., Shin, D., et al. (2020). Chatbots – technology for the language teacher. ELT Journal, 74(3), 338–344. [CrossRef]
36. Liu, B. Chinese University Students’ Attitudes and Perceptions in Learning English Using ChatGPT. Int. J. Educ. Humanit. 2023, 3, 132–140. [Google Scholar] [CrossRef]
37. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z. ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 2023, 74, 570–581. [Google Scholar] [CrossRef]
38. MacNeil, S., Tran, A., Mogil, D., Bernstein, S., Ross, E., & Huang, Z. (2022, August). Generating diverse code explanations using the GPT-3 large language model. Proceedings of the 2022 ACM Conference on International Computing Education Research, Volume 2, 37-39.
39. Natalia, D.E.; Asib, A.; Kristina, D. The Application of Authentic Assessment for Students Writing Skill. J. Educ. Hum. Dev. 2018, 7. [Google Scholar] [CrossRef]
40. North, B. The CEFR Illustrative Descriptor Scales. Mod. Lang. J. 2007, 91, 656–659. [Google Scholar] [CrossRef]
41. Pardo, A.; Jovanovic, J.; Dawson, S.; Gašević, D.; Mirriahi, N. Using learning analytics to scale the provision of personalised feedback. Br. J. Educ. Technol. 2017, 50, 128–138. [Google Scholar] [CrossRef]
42. Reinders, H., & White, C. (2016). 20 years of autonomy and technology: How far have we come and where to next? Language Learning & Technology, 20(2), 143–154. [Link](https://eric.ed.gov/?id=EJ110354).
43. Roohani, A.; Rad, H.S. Effectiveness of hybrid-flipped classroom in improving EFL learners’ argumentative writing skill. TEFLIN J. - A Publ. Teach. Learn. Engl. 2022, 33. [Google Scholar] [CrossRef]
44. Schmidt-Fajlik, R. ChatGPT as a Grammar Checker for Japanese English Language Learners: A Comparison with Grammarly and ProWritingAid. AsiaCALL Online J. 2023, 14, 105–119. [Google Scholar] [CrossRef]
45. Song, C.; Song, Y. Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front. Psychol. 2023, 14, 1260843. [Google Scholar] [CrossRef] [PubMed]
46. Steiss, J.; Tate, T.; Graham, S.; Cruz, J.; Hebert, M.; Wang, J.; Moon, Y.; Tseng, W.; Warschauer, M.; Olson, C.B. Comparing the quality of human and ChatGPT feedback of students’ writing. Learn. Instr. 2024, 91. [Google Scholar] [CrossRef]
47. Teng, L. S., & Zhang, L. J. (2020). Empowering learners in the second/foreign language classroom: Can self-regulated learning strategies-based writing instruction make a difference? Journal of Second Language Writing, 48(February), 100701.
48. Wu, Z. Lower English proficiency means poorer feedback performance? A mixed-methods study. Assess. Writ. 2019, 41, 14–24. [Google Scholar] [CrossRef]
49. Yan, D. (2023a). How ChatGPT’s automatic text generation impact on learners in a L2 writing practicum: An exploratory investigation.
50. Yan, D. Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Educ. Inf. Technol. 2023, 28, 13943–13967. [Google Scholar] [CrossRef]
51. Zhai, X. (2023). ChatGPT user experience: Implications for education. SSRN Electronic Journal.
52. Zheng, H.; Zhan, H. ChatGPT in Scientific Writing: A Cautionary Tale. Am. J. Med. 2023, 136, 725–726. [Google Scholar] [CrossRef]
Figure 1. An example of the first draft of the student’s work.
Figure 2. An example of the student’s work in the second draft.
Table 1. Demographic characteristics of the participants (n = 32).

| Characteristic | Description | n | Percentage (%) |
|---|---|---|---|
| Age range (years) | 18–24 | 26 | 81.25 |
| | 25–34 | 6 | 18.75 |
| | 35+ | 0 | 0.00 |
| Gender | Male | 12 | 37.50 |
| | Female | 20 | 62.50 |
| English proficiency | Beginner | 18 | 56.25 |
| | Intermediate | 14 | 43.75 |
| | Advanced | 0 | 0.00 |
Table 2. Questionnaire results for the writing proficiency (WP) dimension (n = 32). Response counts are shown with percentages in parentheses (SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree).

| Code | Item | SD n (%) | D n (%) | N n (%) | A n (%) | SA n (%) | Std. Dev. |
|---|---|---|---|---|---|---|---|
| WP.1 | I find ChatGPT helpful in improving the clarity of my writing. | 3 (9.38) | 5 (15.63) | 7 (21.88) | 10 (31.25) | 7 (21.88) | 0.92 |
| WP.2 | ChatGPT assists me in organizing my ideas effectively. | 4 (12.50) | 6 (18.75) | 5 (15.63) | 9 (28.13) | 8 (25.00) | 0.87 |
| WP.3 | I can express complex ideas more clearly with ChatGPT. | 5 (15.63) | 4 (12.50) | 4 (12.50) | 10 (31.25) | 9 (28.13) | 0.91 |
| WP.4 | ChatGPT helps me use academic vocabulary appropriately. | 4 (12.50) | 5 (15.63) | 6 (18.75) | 7 (21.88) | 8 (25.00) | 0.92 |
| WP.5 | My writing skills have improved with ChatGPT integration. | 2 (6.25) | 4 (12.50) | 5 (15.63) | 12 (37.50) | 9 (28.13) | 0.94 |
| WP.6 | ChatGPT enhances my ability to cite sources accurately. | 5 (15.63) | 5 (15.63) | 6 (18.75) | 8 (25.00) | 8 (25.00) | 0.86 |
| WP.7 | I can produce well-structured paragraphs with ChatGPT’s assistance. | 3 (9.38) | 6 (18.75) | 7 (21.88) | 9 (28.13) | 7 (21.88) | 0.97 |
| WP.8 | ChatGPT helps me maintain coherence in my writing. | 4 (12.50) | 5 (15.63) | 3 (9.38) | 11 (34.38) | 9 (28.13) | 0.90 |
| WP.9 | I rely on ChatGPT to identify and correct grammar. | 2 (6.25) | 4 (12.50) | 6 (18.75) | 8 (25.00) | 12 (37.50) | 0.95 |
| WP.10 | ChatGPT provides valuable feedback to improve my writing proficiency. | 3 (9.38) | 6 (18.75) | 6 (18.75) | 8 (25.00) | 9 (28.13) | 0.91 |
Table 3. Questionnaire results for the engagement factors (EF) dimension (n = 32). Column abbreviations as in Table 2.

| Code | Item | SD n (%) | D n (%) | N n (%) | A n (%) | SA n (%) | Std. Dev. |
|---|---|---|---|---|---|---|---|
| EF.1 | I feel motivated to write more frequently due to ChatGPT’s support. | 3 (9.38) | 4 (12.50) | 5 (15.63) | 10 (31.25) | 10 (31.25) | 0.92 |
| EF.2 | Using ChatGPT makes writing more enjoyable for me. | 1 (3.12) | 5 (15.63) | 3 (9.38) | 13 (40.62) | 10 (31.25) | 0.90 |
| EF.3 | I am more interested in exploring diverse topics with ChatGPT. | 3 (9.38) | 4 (12.50) | 4 (12.50) | 10 (31.25) | 11 (34.38) | 0.91 |
| EF.4 | ChatGPT boosts my confidence in writing effectively. | 4 (12.50) | 5 (15.63) | 3 (9.38) | 10 (31.25) | 10 (31.25) | 0.87 |
| EF.5 | I appreciate the feedback and suggestions provided by ChatGPT. | 3 (9.38) | 5 (15.63) | 4 (12.50) | 9 (28.13) | 11 (34.38) | 0.86 |
| EF.6 | ChatGPT encourages me to put more effort into my writing tasks. | 3 (9.38) | 4 (12.50) | 4 (12.50) | 9 (28.13) | 12 (37.50) | 0.90 |
| EF.7 | I actively seek opportunities to utilize ChatGPT for writing practice. | 4 (12.50) | 4 (12.50) | 3 (9.38) | 10 (31.25) | 11 (34.38) | 0.88 |
| EF.8 | ChatGPT helps me stay focused and engaged during writing sessions. | 3 (9.38) | 4 (12.50) | 4 (12.50) | 9 (28.13) | 12 (37.50) | 0.89 |
| EF.9 | I believe ChatGPT is crucial for improving my academic writing skills. | 3 (9.38) | 4 (12.50) | 4 (12.50) | 9 (28.13) | 12 (37.50) | 0.88 |
| EF.10 | Setting goals to utilize ChatGPT effectively has improved my writing engagement. | 4 (12.50) | 4 (12.50) | 3 (9.38) | 10 (31.25) | 11 (34.38) | 0.90 |
Table 4. Questionnaire results for the attitudes towards technology (ATT) dimension (n = 32). Column abbreviations as in Table 2.

| Code | Item | SD n (%) | D n (%) | N n (%) | A n (%) | SA n (%) | Std. Dev. |
|---|---|---|---|---|---|---|---|
| ATT.1 | I am comfortable using ChatGPT for writing tasks. | 3 (9.38) | 4 (12.50) | 6 (18.75) | 9 (28.13) | 10 (31.25) | 0.89 |
| ATT.2 | ChatGPT significantly enhances my writing skills. | 4 (12.50) | 5 (15.63) | 7 (21.88) | 8 (25.00) | 8 (25.00) | 0.86 |
| ATT.3 | I prefer using ChatGPT over traditional methods for writing assistance. | 3 (9.38) | 4 (12.50) | 6 (18.75) | 10 (31.25) | 9 (28.13) | 0.88 |
| ATT.4 | ChatGPT’s features are helpful and contribute to better writing outcomes. | 4 (12.50) | 5 (15.63) | 6 (18.75) | 8 (25.00) | 9 (28.13) | 0.87 |
| ATT.5 | I am open to learning and utilizing new technologies like ChatGPT. | 3 (9.38) | 4 (12.50) | 6 (18.75) | 10 (31.25) | 9 (28.13) | 0.88 |
| ATT.6 | I trust ChatGPT to provide accurate feedback. | 4 (12.50) | 5 (15.63) | 7 (21.88) | 8 (25.00) | 8 (25.00) | 0.87 |
| ATT.7 | ChatGPT has made writing tasks easier and more efficient for me. | 3 (9.38) | 4 (12.50) | 6 (18.75) | 10 (31.25) | 9 (28.13) | 0.89 |
| ATT.8 | I am confident in using ChatGPT for various writing purposes. | 4 (12.50) | 5 (15.63) | 7 (21.88) | 8 (25.00) | 8 (25.00) | 0.86 |
| ATT.9 | Exploring ChatGPT’s features has positively impacted my writing skills. | 4 (12.50) | 5 (15.63) | 6 (18.75) | 9 (28.13) | 8 (25.00) | 0.87 |
| ATT.10 | ChatGPT has improved my overall attitude towards using technology for writing. | 3 (9.38) | 4 (12.50) | 6 (18.75) | 9 (28.13) | 10 (31.25) | 0.87 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.