April 1, 2026
University of Bath Research Warns AI Could Undermine Human Capital, Critical Thinking, and Expertise in the Workplace
A new study from the University of Bath’s School of Management sounds a cautionary note amid the enthusiasm surrounding artificial intelligence (AI) integration in modern workplaces. While AI promises to enhance efficiency and accelerate problem-solving, the research identifies subtle yet serious risks to the development and preservation of critical thinking, creativity, and the diverse forms of human expertise that constitute valuable organizational capital.
Professor Dirk Lindebaum, lead author of the study titled On the Dangers of Large-Language Model Mediated Learning for Human Capital, argues that AI’s celebrated benefits must be scrutinized beyond surface-level efficiency gains. The study emphasizes that human knowledge is multifaceted and that different forms of knowledge vary in their compatibility with AI technologies. This distinction matters for HR professionals and people managers tasked with cultivating and safeguarding workforce capabilities.
The research delineates two categories of knowledge that exhibit a degree of compatibility with AI systems. Encoded knowledge, which includes formal rules, policies, procedures, and structured data sets, is relatively amenable to AI support. Similarly, embedded knowledge, consisting of digitized workflows and routine processes, can also benefit from AI’s automation and real-time adjustment capabilities. In these realms, AI can serve effectively by updating documents, supporting compliance efforts, and optimizing standardized processes.
However, the study warns that overreliance on AI for tasks involving encoded and embedded knowledge carries its own risks. If employees engage less directly with critical processes, their familiarity and expertise can erode, threatening the tacit knowledge that underpins intuitive decision-making and adaptive problem-solving.
More alarmingly, the research highlights three vital knowledge types that AI currently fails to support and may actively degrade if neglected: embodied knowledge, encultured knowledge, and embrained knowledge. Embodied knowledge accrues through hands-on, practical experience – learning that integrates sensory feedback and physical interaction, which AI cannot replicate. Encultured knowledge arises from socialization within the distinct cultural milieus of organizations, shaped by shared values, norms, and interpersonal interactions. Embrained knowledge encompasses advanced analytical judgment and nuanced problem-solving abilities that require flexible thinking and contextual awareness.
Professor Lindebaum underscores the irreplaceability of these knowledge forms, noting that AI-mediated learning through large-language models or synthetic environments lacks authenticity and depth. The dependence on AI for interpretative thinking, decision-making, or contextual understanding could foster atrophy in these critical cognitive faculties, thereby creating systemic vulnerabilities within organizations that rely too heavily on automated processes.
To mitigate these risks, the team calls on human resource professionals to intentionally design workplace experiences that enable continued direct learning and robust human interaction. Strategies such as shadowing, mentoring, and immersive, real-world engagement are essential to maintaining the vitality of embodied and encultured knowledge. Moreover, onboarding programs and cross-cultural exchanges are imperative to deepen organizational socialization and cultural understanding.
Crucially, the researchers advocate for a renewed emphasis on fostering critical thinking and reflective practices as core competencies within teams and organizations. This human-centric skill set is vital to counterbalance the shortcuts AI may introduce and to ensure that decision-making remains deliberative and nuanced rather than mechanistic and shallow.
Perhaps the most innovative recommendation from the University of Bath team is the concept of ‘learning vaults’—protected intellectual environments within workplaces and educational institutions that actively shield learning processes from the homogenizing influence of AI automation. Drawing an analogy to the Svalbard Global Seed Vault’s role in preserving biodiversity, these ‘learning vaults’ would protect the experiential, reflective, and creative dimensions of knowledge essential for sustained human capital development.
In practice, these vaults would cultivate social learning ecosystems where employees and students collaboratively engage with fundamental questions about their work: the know-why (the underlying reasons behind success or failure), know-how (the procedural and contextual knowledge), and know-what (the implications and consequences of decisions and outcomes). Such environments would reinforce foundational knowledge and critical reflection before cognitive offloading to AI systems weakens these vital faculties.
The study’s insights deliver a stark but timely message. AI, while a powerful augmentation tool, is not a panacea for human learning and expertise development. Business leaders and educational institutions face a strategic imperative to embed these learning vaults and safeguard diverse knowledge forms against the seductive convenience of automated solutions.
Professor Lindebaum concludes by urging employers and business schools to innovate in how learning vaults can be seamlessly integrated into organizational cultures and role designs. Without deliberate action to protect and cultivate authentic human capital, the widespread adoption of AI tools risks inadvertently hollowing out the very intellectual and cultural foundations that underpin sustained organizational success.
This research emerges amid a landscape of rapid AI adoption, underscoring that safeguarding human creativity, judgment, and critical analysis requires intentional policy choices and thoughtful design of learning environments. The University of Bath’s contribution stands as a clarion call, reminding us that in our race towards automation, the preservation of human wisdom remains paramount.
Subject of Research: People
Article Title: On the Dangers of Large-Language Model Mediated Learning for Human Capital
News Publication Date: April 1, 2026
References:
Lindebaum, D., et al. (2026). On the Dangers of Large-Language Model Mediated Learning for Human Capital. Human Resource Management Journal. DOI: 10.1111/1748-8583.70036
Keywords:
Artificial intelligence, human capital, workplace learning, large-language models, critical thinking, embodied knowledge, organizational culture, mentorship, learning vaults

