Scienmag
Researchers Reveal Divergent Perspectives of Developers and Educators on AI Harms

May 14, 2025
in Science Education

In recent years, the integration of large language models (LLMs) into K-12 educational settings has surged dramatically, transforming traditional pedagogical practices through the advent of tools like ChatGPT. These AI-powered systems are increasingly employed to assist with lesson planning, provide personalized tutoring, and support classroom management tasks. Despite their growing foothold, the implications of these technologies remain under-explored, especially as educators and developers often hold divergent views on the benefits and potential harms associated with their use.

A groundbreaking study conducted by researchers at Cornell University delves into this critical gap, revealing a disconnect between the perspectives of developers who create education technology (edtech) tools and the educators tasked with implementing them in their classrooms. The research underscores the necessity for a more educator-centered approach in edtech development, emphasizing that these tools must be designed with direct input from the teachers who ultimately use them.

This interdisciplinary investigation, led by doctoral student Emma Harvey and her colleagues Allison Koenecke and Rene Kizilcec, went beyond conventional technical assessments of LLMs to explore the sociotechnical harms and broader ecosystem effects. Presented at the ACM Conference on Human Factors in Computing Systems (CHI) and awarded Best Paper, the study sheds light on challenges rarely addressed in machine learning circles, such as the erosion of critical thinking, inequities in access, and increased workloads for educators.


The researchers conducted qualitative interviews with six edtech company representatives and approximately two dozen educators to compare these contrasting viewpoints. Developers, often focused on solving technical problems such as preventing algorithmic hallucinations, safeguarding privacy, and mitigating toxic outputs, concentrated their efforts on fine-tuning the underlying AI technology. Educators, by contrast, prioritized broader concerns: the effects of AI tools on students’ cognitive development and social skills, structural inequalities in resource allocation, and the shifting dynamics of teacher responsibilities.

Educators voiced apprehension that reliance on AI-powered answers might stifle students’ capacity for independent critical analysis and reasoning. One teacher noted, “I’ve noticed that as students become more tech aware, they also tend to lose that critical thinking skill, because they can just ask for answers.” This phenomenon highlights intrinsic risks extending beyond the scope of algorithmic accuracy or bias.

Moreover, systemic inequities surfaced prominently in educators’ reflections. Schools in underprivileged districts may struggle to afford subscriptions or licenses for AI edtech, inadvertently worsening educational disparities. Some educators expressed concerns that district budgets might be reallocated to purchase AI tools at the expense of other crucial resources, undermining equity and comprehensive educational support.

Another dimension of concern is the added workload placed on teachers. Rather than alleviating pressure, the integration of AI often requires educators to spend additional time vetting AI outputs, managing new technological interfaces, and compensating for deficiencies in current AI systems. This workload amplification runs counter to the initial promise of efficiency and support.

To address this multifaceted landscape of challenges, the research team proposes a paradigm shift in edtech design that centers educators’ agency and expertise. Among their primary recommendations is the development of tools that empower teachers to actively question, correct, and contextualize AI-generated content. Such features would not only mitigate hallucinations but also integrate the educators’ pedagogical judgment into the AI-augmented learning process.

The study further advocates for the establishment of independent, centralized regulatory bodies to evaluate the efficacy and ethical impact of LLM-based educational tools. Clear, consistent, and authoritative oversight could guide schools and districts in making informed adoption decisions while ensuring transparency and accountability in edtech deployment.

Customization emerged as another critical aspect, inviting researchers and developers to create adaptable AI tools tailored to the diverse needs and preferences of different educational contexts. Flexibility would enable educators to modulate AI functionalities to align with curricular goals, student demographics, and classroom dynamics, thereby enhancing practical usability and pedagogical fit.

Furthermore, the evidence calls for prioritizing educators’ voices in adoption decisions at the district level, recognizing their frontline role in shaping student experience. Equally important is safeguarding teachers’ autonomy by ensuring they are not penalized for opting out of using AI systems that may not suit their instructional philosophy or classroom environment.

Emma Harvey emphasized that while developers concentrate heavily on minimizing technical failures such as hallucinations, equipping educators with mechanisms to intervene and rectify inaccuracies during instruction could facilitate more effective harm mitigation. “This approach frees up capacity to address broader sociotechnical harms that are less tangible but no less consequential,” she explained.

Coauthor Allison Koenecke echoed the sentiment, highlighting that social and societal harms—such as exacerbating inequities, diminishing critical thinking, and altering teacher-student interactions—require rigorous, interdisciplinary scrutiny. These “higher-stakes, difficult-to-measure” effects of LLM deployment remain largely marginalized within standard machine learning evaluation frameworks.

The research represents a pivotal contribution to the evolving dialogue on AI ethics and education technology. By illuminating the divergent priorities between developers and educators, it paves the way for collaborative innovation that respects both technological advancement and educational integrity. The team hopes their findings catalyze ongoing conversations among policymakers, school leaders, and technologists to co-create responsible, equitable, and effective AI tools for future classrooms.

Funded by the Schmidt Futures Foundation and the National Science Foundation, this research not only advances the scientific understanding of AI’s role in education but also champions an inclusive model wherein those at the heart of teaching have a decisive voice in shaping the digital tools they use. As LLMs become increasingly woven into educational infrastructures worldwide, aligning technology’s promise with pedagogical realities is essential to harness AI’s potential without compromising foundational educational values.

—

Subject of Research: The sociotechnical harms and educator-centered design considerations of large language models (LLMs) in K-12 education technology.

Article Title: ‘Don’t Forget the Teachers’: Towards an Educator-Centered Understanding of Harms from Large Language Models in Education.

News Publication Date: April 28, 2025

Web References:
https://dl.acm.org/doi/full/10.1145/3706598.3713210
http://dx.doi.org/10.1145/3706598.3713210

References:
Harvey, E., Koenecke, A., & Kizilcec, R. (2025). ‘Don’t Forget the Teachers’: Towards an Educator-Centered Understanding of Harms from Large Language Models in Education. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), Yokohama, Japan.

Keywords: Large Language Models, Education Technology, AI Ethics, Sociotechnical Harms, K-12 Education, Critical Thinking, Educational Equity, AI Customization, Teacher Workload, AI Regulation, AI in Classrooms, Pedagogical Integrity

Tags: AI in K-12 education, best practices for integrating AI in education, challenges in edtech development, Cornell University AI research, divergent views on education technology, educator-centered edtech design, educators vs developers perspectives, impacts of AI on classroom management, implications of large language models, interdisciplinary studies in education technology, personalized tutoring with AI, sociotechnical harms of AI