In an era dominated by rapid digital transformation, the integration of artificial intelligence tools within education is becoming increasingly pivotal. One such tool, Grammarly, has garnered widespread attention for its capacity to assist in foreign language writing by providing automated feedback. Recent research employing structural equation modeling sheds new light on the factors influencing higher education students’ acceptance and use of Grammarly, grounded in the well-established Unified Theory of Acceptance and Use of Technology (UTAUT). This pioneering approach not only confirms traditional technology acceptance constructs but extends the theory by incorporating novel perceptual and systemic predictors tailored to the unique context of automated writing evaluation.
At the core of the study lies a comprehensive model connecting students’ perceptions, particularly performance expectancy and effort expectancy, to their behavioral intentions when engaging with Grammarly. Performance expectancy, reflecting users’ beliefs that Grammarly will improve their writing, and effort expectancy, denoting the perceived ease of using the platform, emerged as significant predictors of students’ intentions to use the tool. These findings reinforce core UTAUT postulates while situating them in the specific context of automated writing assistance for foreign language learning, underscoring how students weigh anticipated benefits against required effort when adopting such technologies.
Beyond intentions, the transition from intention to actual usage is governed by facilitating conditions—the external environment and organizational support available to students—and their initial willingness to engage with the platform. Facilitating conditions include both technical and instructional support structures that ease the integration of Grammarly into daily academic workflows. The study affirms that the existence of such conditions, coupled with positive behavioral intentions, robustly predicts the frequency and depth of tool usage. This nexus underscores the importance of institutional and infrastructural readiness in fostering successful technology adoption in educational settings.
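To make the hypothesized structure concrete, the core UTAUT paths described above can be written as a structural equation model. The following is a minimal, illustrative sketch in lavaan-style syntax using the open-source semopy package for Python; the construct abbreviations and indicator names (PE, EE, FC, BI, USE, pe1, and so on) are placeholders for exposition, not the authors’ actual instrument or code.

```python
# A minimal, hypothetical UTAUT specification (illustrative only).
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model (indicator names are placeholder survey items)
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
FC =~ fc1 + fc2 + fc3
BI =~ bi1 + bi2 + bi3
USE =~ use1 + use2

# Structural model: expectancies predict intention;
# intention and facilitating conditions predict use
BI ~ PE + EE
USE ~ BI + FC
"""

def fit_utaut(survey: pd.DataFrame) -> semopy.Model:
    """Fit the illustrative UTAUT model to Likert-scale item responses."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)
    return model

# Usage: model = fit_utaut(pd.read_csv("survey.csv"))
#        print(model.inspect())   # path coefficients and p-values
```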
A novel contribution of the research lies in identifying external factors that intricately shape students’ expectancy beliefs. Foremost among them, trust in feedback produced by Grammarly emerged as a crucial determinant of performance expectancy. Trust encapsulates students’ confidence in the accuracy, relevance, and fairness of Grammarly’s automated suggestions—a vital element given the nuanced and often subjective nature of language evaluation. Simultaneously, peer influence was found to significantly impact both performance and effort expectancy, revealing that social environments and peer behaviors play vital roles in shaping individual attitudes towards the technology.
The study also spotlights perceived interactivity and personal investment as key influencers of effort expectancy. Perceived interactivity indicates the degree to which the feedback environment is responsive and engaging, a characteristic increasingly valued in mediated educational tools. Personal investment, referring to the cognitive and emotional resources students dedicate to mastering the platform, likewise elevates the anticipated ease of use. Together, these variables point to a dynamic interplay of subjective experience and motivation that underpins the adoption process.
Interestingly, while peer influence boosted expectancy beliefs, it did not significantly affect facilitating conditions. Instead, willingness for e-learning and the availability of instructional support emerged as the decisive contributors to students’ readiness for integrating Grammarly. This distinction reveals the layered nature of acceptance factors: social dynamics may shape beliefs and perceptions, but systemic structures and individual predispositions underpin practical access and engagement conditions.
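The external predictors identified in the study can be layered onto the same skeleton. Below is a sketch of the structural additions implied by the reported findings, continuing the placeholder naming above (TR for trust in feedback, PI for peer influence, IA for perceived interactivity, INV for personal investment, WEL for willingness for e-learning, IS for instructional support); again, this is illustrative rather than the authors’ specification.

```python
import semopy

# Continues the MODEL_DESC sketch above; all names remain hypothetical.
EXTENSION = """
# Measurement model for the six external predictors
TR  =~ tr1 + tr2
PI  =~ pi1 + pi2
IA  =~ ia1 + ia2
INV =~ inv1 + inv2
WEL =~ wel1 + wel2
IS  =~ is1 + is2

# Structural additions mirroring the reported paths:
# trust and peers shape performance expectancy; peers,
# interactivity, and investment shape effort expectancy;
# e-learning willingness and instructional support shape
# facilitating conditions (no PI -> FC path, matching the
# null finding described above)
PE ~ TR + PI
EE ~ PI + IA + INV
FC ~ WEL + IS
"""

extended_model = semopy.Model(MODEL_DESC + EXTENSION)
```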
Students’ feedback regarding Grammarly revealed a nuanced landscape of perceptions. Participants most frequently cited the promptness and accuracy of feedback as strengths, valuing immediate, precise responses that support timely revision. Nonetheless, substantive concerns were raised about inaccuracies in specific contexts and the platform’s high cost, illuminating critical barriers to widespread acceptance. Such feedback points to the ongoing imperative for technological refinement, affordability, and context sensitivity, particularly as artificial intelligence assumes central roles in automated writing assessment.
The forward-looking voices of the participants advocated for deeper incorporation of artificial intelligence capabilities to further elevate the accuracy and quality of automated feedback. This aligns with broader trends in AI-enhanced education, where adaptive learning systems and intelligent tutoring are envisaged to provide personalized, context-aware support. Enhancing Grammarly through sophisticated AI algorithms could not only mitigate current limitations but also foster higher trust and engagement among learners.
Despite its contributions, the study recognizes certain limitations inherent in its design and scope. The explanatory power of the model, while statistically significant, accounted for only 20% to 50% of outcome variance, indicating room for integrating additional variables. The intricate nature of technology acceptance and educational behaviors suggests that multifaceted, possibly latent factors may influence adoption beyond those currently modeled. The researchers call for further investigation to unpack complex interrelationships among variables, thereby enriching theoretical frameworks and predictive robustness.
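For context, "explanatory power" here refers to the squared multiple correlation reported for each endogenous construct in a structural equation model: the share of that construct's variance accounted for by its modeled predictors, with the remainder attributed to the structural residual. For an endogenous latent variable η with residual ζ:

```latex
R^2_{\eta} = 1 - \frac{\operatorname{Var}(\zeta)}{\operatorname{Var}(\eta)}
```

Values between .20 and .50 therefore mean the modeled predictors explain between a fifth and half of the variance in intention and use, leaving substantial variance to factors outside the model.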
Another limitation arises from the demographic scope, which was restricted to higher education students without comprehensive details on participants’ affiliations or academic majors. While the collected data confirmed the sample’s geographic representativeness, broader generalization to other educational levels or technological contexts remains tentative. Future research would benefit from more diverse, balanced populations encompassing varying genders, ages, academic disciplines, and regions to inform tailored interventions and technology designs.
The practical implications of this research resonate deeply with educators, instructional designers, and technology developers. Teachers are urged to actively guide students in utilizing automated writing evaluation tools like Grammarly, highlighting that effective implementation extends beyond providing access. Digital collaborative learning paradigms increasingly rely on feedback literacy—a multifaceted competency covering openness to feedback, active engagement, and constructive enactment. This study underscores the role of trust in feedback quality but also signals the need for comprehensive frameworks encompassing feedback seeking, sense-making, emotional regulation, and utilization to fully empower learners.
Moreover, the subtle influence of peer dynamics calls for educators’ awareness of the social pressures and modeling behaviors that shape technology acceptance. Instructional strategies can leverage peer influence positively while mitigating potential adverse effects such as anxiety or resistance. Coordinated efforts among teachers, technology providers, and institutional support systems are imperative to create enabling environments that nurture students’ willingness and capacity to engage meaningfully with automated evaluation tools.
The research further stresses the critical importance of integrating teacher, peer, and automated feedback within coherent pedagogical frameworks. Such integrative approaches promise synergistic benefits by combining human insight with AI precision, enhancing both engagement and writing quality. As AI-driven tools evolve, their design must foreground interactive features, accuracy, reliability, and clear instructional guidance. Aligning platform enhancements with user needs and pedagogical contexts will be essential to maximizing adoption and educational impact across all learning stages.
Theoretically, this investigation enriches technology acceptance literature by introducing novel external predictors to the UTAUT framework, notably trust in feedback, peer influence, perceived interactivity, personal investment, willingness for e-learning, and instructional support. These additions tailor classical models to emerging educational technologies, offering a blueprint for future studies exploring nuanced factors within specific domains. This contributes to an evolving understanding of how automated writing evaluation tools intersect with psychological and systemic elements influencing user behavior.
Furthermore, the study sets a methodological precedent by employing rigorous structural equation modeling combined with novel measurement instruments adapted to capture domain-specific variables. Such an approach encourages replication and extension across various educational settings, writing tasks, and linguistic focus areas—from vocabulary to grammar and organization—thus broadening the applicability and granularity of technology acceptance research. Subsequent investigations might incorporate advanced analytical techniques including grounded theory, fuzzy set qualitative comparative analysis, or bibliometric reviews to uncover latent predictors and effectiveness indicators.
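For readers weighing replication, evaluating such a model typically ends with a check of global fit. A minimal sketch follows, assuming the semopy models outlined earlier; calc_stats is the package’s built-in fit-index report, and the cutoffs in the comment are conventional heuristics rather than values from this study.

```python
import semopy

# Assumes `extended_model` (see the earlier sketches) has been fit to data.
stats = semopy.calc_stats(extended_model)
print(stats.T)
# Conventional heuristics for acceptable fit:
#   CFI and TLI >= 0.90, RMSEA <= 0.08, SRMR <= 0.08
```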
In the rapidly transforming landscape of digital language education, this research stands as a significant milestone. It bridges gaps by clarifying the psychological mechanisms underpinning students’ interactions with automated writing evaluation platforms, offering actionable insights for developers, educators, and policymakers. By illuminating the multifarious perceptions and systemic factors that shape adoption, it paves the way toward more personalized, effective, and equitable language learning experiences empowered by artificial intelligence.
As the demand for objective, immediate, and high-quality feedback grows, tools like Grammarly are positioned to redefine writing instruction paradigms. However, realizing their full potential necessitates continuous dialogue among stakeholders and iterative refinements informed by empirical evidence. This study exemplifies such an integrative endeavor, advancing the discourse on techno-pedagogical innovation while emphasizing the inseparability of human agency and technological affordances in shaping future education.
Subject of Research: University students’ acceptance and use of Grammarly as an automated writing evaluation tool, examining perceptual and systemic predictors within the UTAUT framework.
Article Title: Elucidating university students’ intentions to seek automated writing feedback from Grammarly: toward perceptual and systemic predictors.
Article References:
Lin, Y., Yu, Z. Elucidating university students’ intentions to seek automated writing feedback from Grammarly: toward perceptual and systemic predictors. Humanit Soc Sci Commun 12, 7 (2025). https://doi.org/10.1057/s41599-024-03861-1