The advent of generative artificial intelligence (AI) is revolutionizing numerous sectors, and higher education is no exception. Recent research led by the University of Surrey highlights how generative AI, including sophisticated chatbots like ChatGPT, is reshaping the mechanisms through which educators provide feedback to students. While these AI technologies offer unprecedented speed and scalability in generating responses, the study warns that without careful, principled application, the fundamental essence of meaningful learning may be jeopardized.
The core challenge lies in the intrinsic qualities of effective feedback — qualities that extend far beyond mere comment generation. Feedback, at its heart, hinges on human judgment, nuanced understanding, and intricate relational dynamics. These elements cultivate a fertile ground for student reflection, engagement, and growth. The researchers stress that AI’s rapid-fire feedback capabilities cannot replicate the essential empathy and contextual awareness that human educators inherently provide. Therefore, the influx of AI-driven feedback tools must be tempered with a “care-full” approach that views feedback not simply as corrective remarks but as a dynamic, dialogical process.
This research critiques the prevailing trend toward a transactional view of feedback, where feedback is perceived as a unidirectional delivery of information from educator to student. Such a perspective risks stripping feedback of its transformative power. Decades of pedagogical research indicate that feedback is most effective when students actively interact with it, internalize its insights, and apply these insights iteratively for continual improvement. AI, if employed without fostering this iterative engagement, could inadvertently push education backward by reinforcing a simplistic “giving” model rather than nurturing reciprocal learning relationships.
Importantly, the study reveals that students often place greater trust in human-generated feedback than in AI-produced comments. This discrepancy is rooted in human feedback’s richer contextual relevance, empathetic tone, and adaptability to individual learner needs — factors that AI algorithms currently cannot emulate convincingly. Consequently, feedback from educators is more likely to prompt meaningful action, whereas AI feedback may be met with skepticism or passive reception.
Nevertheless, the researchers acknowledge that AI can complement traditional feedback modalities, particularly by lowering affective barriers. For some learners, interacting with AI-generated feedback reduces anxiety, enabling exploratory learning without fear of judgement. This psychological safety gives students room to experiment with ideas and questions they might hesitate to raise in human interactions. However, the team also cautions against over-reliance on AI feedback, which could erode rich human-to-human educational interactions and widen inequalities if some groups of students benefit from AI support far more than others.
Central to the study is an international manifesto, articulated by the research team, that sets out ten guiding principles for navigating feedback in the era of generative AI. These principles advocate for feedback to be understood as a continual, relational, and ethically grounded practice that embraces complexity and acknowledges emotional and cognitive challenges. The manifesto insists on prioritizing learning and human connection over mere technological expediency, urging feedback systems to be developed through inclusive dialogues involving both learners and educators.
Professor Naomi Winstone, a leading voice in educational psychology and the study’s principal investigator, articulates a critical reflection: “The pivotal question isn’t what AI can do, but what it should do.” This underscores a paradigm shift from technological possibility toward pedagogical responsibility. In practical terms, this means designing feedback strategies that embed care, trust, and relationship-building as foundational pillars, rather than focusing exclusively on accelerating delivery or maximizing volume.
Technological integration in education, particularly regarding generative AI tools, demands continuous evaluation and adaptation. The research emphasizes that any deployment of AI-driven feedback must be accompanied by an ongoing commitment to “care-full” practices — a concept that foregrounds ethical considerations, equitable access, and professional expertise. Such a philosophy challenges institutions to resist quick-fix solutions based solely on efficiency metrics and instead nurture feedback ecosystems that honor the lived realities and diverse trajectories of learners.
This holistic feedback approach acknowledges that feedback processes are inherently “messy.” They may evoke discomfort, challenge student self-perceptions, and simultaneously be a source of intellectual joy. Generative AI, when wielded with sensitivity, can support this dynamic complexity rather than reducing feedback exchanges to sterile transactions. However, care must be taken to calibrate the role of AI so that it amplifies rather than undermines the authenticity and relational depth of feedback dialogues.
Furthermore, the research highlights the limitations of technologically mediated feedback. Although digital tools can enrich feedback by facilitating novel forms of interaction and providing rapid responses, digital enhancement does not inherently produce better feedback. More feedback isn’t always better: the quality and timing of feedback engagement trump sheer quantity or technological novelty. The study calls for a measured balance between pedagogical efficacy and technological capacity.
An additional concern underlined in the study involves the potential exacerbation of educational inequalities in the AI-feedback era. Some students may find AI-generated feedback more accessible or less intimidating, while others might lack the digital literacy or resources to benefit equally. These disparities necessitate vigilance from educators and policymakers, making inclusivity and equity non-negotiable components of any AI integration strategy.
In sum, the study led by the University of Surrey offers a nuanced and forward-looking framework for incorporating generative AI into higher education feedback practices. It positions AI as a powerful yet imperfect tool that must be embedded within an ethical, relational pedagogical framework to safeguard meaningful learning. The research calls for a collaborative reimagining of feedback that transcends algorithmic efficiency to embrace human connection, reflection, and growth.
As we stand at the crossroads of technological innovation and educational tradition, the imperative is clear: success in harnessing generative AI for feedback will come not from replacing educators but from augmenting their capacity with tools designed to respect and enhance the fundamentally human craft of teaching and learning.
Subject of Research: The impact of generative AI on feedback practices in higher education and the preservation of meaningful learning through ethical, relational pedagogical principles.
Article Title: The care-full craft of feedback in an age of generative AI
News Publication Date: 18-Mar-2026
Web References:
https://www.tandfonline.com/doi/ref/10.1080/02602938.2026.2643333?scroll=top
http://dx.doi.org/10.1080/02602938.2026.2643333
Keywords: Generative AI, Higher Education, Feedback, Educational Psychology, Pedagogy, Ethics in Education, AI in Learning, Student Engagement, Digital Inequality, ChatGPT, Educational Technology, Care-full Feedback