The advent of generative artificial intelligence (AI) has unleashed transformative shifts across multiple domains, with education standing at the frontier of this technological revolution. Since ChatGPT's public launch in late 2022, it and similar AI-based chatbots have become ubiquitous tools, especially in fields involving programming and computer science. These advanced language models, capable of generating human-like code snippets and debugging assistance, have captivated students and educators alike, raising profound questions about their impact on learning outcomes. A groundbreaking research article spearheaded by Marina Lepp, Associate Professor of Informatics at the University of Tartu Institute of Computer Science, alongside co-author Joosep Kaimre, a recent master's graduate, provides pivotal insights into these dynamics by examining the role AI plays in the learning trajectories of programming students.
This comprehensive study surveyed 231 students enrolled in an object-oriented programming course to assess how frequently and for what purposes AI tools were employed during their studies. Using statistical methods including descriptive statistics and Spearman's rank correlation, the researchers dissected the nuanced relationships between AI usage patterns and academic performance. Their findings paint a multifaceted portrait of AI's position in contemporary education, underscoring both its potent utility and potential pitfalls when applied without adequate guidance.
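For readers unfamiliar with the method, Spearman's rank correlation measures how well the relationship between two variables can be described by a monotonic function: each variable is converted to ranks, and the Pearson correlation of those ranks gives the coefficient rho. The sketch below illustrates the computation in plain Python on invented data (the usage frequencies and scores are hypothetical, not taken from the study):

```python
# Illustrative sketch of Spearman's rank correlation: rank both variables
# (averaging ranks for ties), then take the Pearson correlation of the ranks.
# The data below are hypothetical, chosen only to show a negative association
# like the one the study reports.

def rank(values):
    """Assign 1-based ranks, averaging ranks over runs of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j to cover a run of equal values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: self-reported AI uses per week vs. course score (%)
usage = [0, 1, 2, 3, 5, 8, 10]
scores = [88, 90, 75, 70, 65, 60, 55]
print(round(spearman(usage, scores), 3))  # → -0.964 (strong negative rho)
```

In practice one would use a library routine such as `scipy.stats.spearmanr`, which also reports a p-value; the hand-rolled version above is only meant to make the ranking step visible.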
At the core of the investigation was the observation that the majority of students leveraged AI primarily for pragmatic problem-solving tasks. These included debugging code, clarifying complex programming concepts, and generating examples to aid comprehension. The convenience and immediacy offered by AI tools such as ChatGPT have evidently allowed learners to navigate technical challenges with unprecedented ease. This mirrors the broader perception of AI as a quintessential assistant that streamlines workflow and accelerates knowledge acquisition, particularly in disciplines heavily reliant on algorithmic logic and coding syntax.
However, the researchers unearthed a counterintuitive and somewhat disconcerting correlation: students who reported more frequent use of AI chatbots tended to have lower overall academic results. While the causality remains an open question, one plausible interpretation is that students struggling with the material may resort to AI as a compensatory mechanism, seeking quick fixes rather than engaging deeply with the content. This behavioral pattern raises concerns about over-reliance on AI potentially undermining foundational learning processes, thereby hindering the development of independent problem-solving skills and critical thinking.
Professor Lepp highlights the imperative of positioning AI as a supportive adjunct rather than a wholesale replacement for learning. “AI must support learning, not replace it,” she asserts, emphasizing that educators and institutions bear the responsibility to guide students in harnessing AI tools effectively. Without such scaffolding, there is a risk that learners might fall into passive consumption of AI-generated answers, thereby stunting cognitive growth and mastery of fundamental programming principles essential for long-term competence.
Intriguingly, beyond mere problem-solving, many students demonstrated creative applications of AI. For example, some used these tools to facilitate code translation between programming languages, such as converting Python scripts into Java. This innovative use case exemplifies how generative AI can serve as a bridge for acquiring new technical skills, providing learners with tailored, context-specific scaffolds that promote cross-linguistic fluency in coding. Such versatility highlights AI’s potential to transcend rote assistance and contribute meaningfully to skill diversification.
These dual facets of AI usage—both enabling and potentially detrimental—underscore the need for deliberate integration strategies within computer science curricula. The study suggests that educators ought to embed structured frameworks that encourage reflective and guided AI engagement. This may involve pedagogical designs that prompt students to critique AI-generated outputs, verify their correctness, and iteratively improve upon them rather than accepting them at face value.
Furthermore, the quantifiable linkage between AI usage intensity and academic performance illustrates the nuanced interplay between technology and human cognition. It invites a reconsideration of how emerging AI tools can either complement or conflict with existing learning theories, such as constructivism, which emphasizes active knowledge construction over passive reception. When AI supplants the cognitive labor that is critical for internalizing concepts, the learning process risks superficiality.
As generative AI continues to evolve, its integration into educational ecosystems will likely become more sophisticated, necessitating ongoing research. Future investigations might explore longitudinal effects of AI tool use, differentiation across various programming proficiencies, and the impact of AI on motivation and metacognitive skills. Understanding these dimensions can inform evidence-based guidelines to maximize AI’s educational benefits while mitigating drawbacks.
The work emerging from the University of Tartu serves as a clarion call for a balanced embrace of AI technologies—not as infallible authorities but as powerful instruments that require human judgment and oversight. This philosophical stance can help shape policies and teaching methodologies that exploit AI’s capacity for personalized learning, immediate feedback, and cross-disciplinary exploration without sacrificing intellectual rigor.
More broadly, this research resonates with the global educational community grappling with the rapid infiltration of AI into classrooms. As AI-generated content blurs lines between original work and automated assistance, defining ethical standards and academic integrity norms becomes paramount. The interplay of technology, pedagogy, and ethics will form the crucible in which the future of AI-enhanced education is forged.
In summary, while the rapid rise of ChatGPT and generative AI has made complex programming tasks far more accessible, their unregulated and unguided use can inadvertently stunt learners' progression. The study by Marina Lepp and colleagues offers a critical lens through which educators and policymakers must view AI's role: not as a panacea but as a carefully calibrated learning companion. Through strategic incorporation and deliberate oversight, AI can transform from a controversial novelty into an indispensable asset that propels students towards deeper, more durable understanding in computer science and beyond.
Subject of Research: People
Article Title: Does generative AI help in learning programming: Students’ perceptions, reported use and relation to performance
News Publication Date: 1-May-2025
Web References: http://dx.doi.org/10.1016/j.chbr.2025.100642
Image Credits: Author: Alari Tammsalu
Keywords: Artificial Intelligence, ChatGPT, Generative AI, Education Technology, Programming Education, Computer Science, Learning Outcomes, AI-Assisted Learning, Code Debugging, Student Performance, Educational Research, AI Ethics