AI chatbots, powered by large language models (LLMs), are rapidly reshaping how humans communicate, think, and express ideas. While these advances offer remarkable convenience and efficiency, a growing number of researchers warn of an unintended yet profound consequence: the homogenization of human expression. As billions of people increasingly rely on a handful of AI systems for writing, reasoning, and problem-solving, there is mounting concern that this digital standardization may erode the cognitive diversity that underpins creativity and innovation.
Zhivar Sourati, a computer scientist at the University of Southern California and lead author of a new commentary published in the journal Trends in Cognitive Sciences, explores this phenomenon in depth. Sourati cautions that the subtle uniformity introduced by widely adopted LLMs is not merely superficial but extends to how individuals conceptualize and reason about the world. “People are inherently diverse in how they approach language and thought,” Sourati explains. “When these individual styles are funneled through the same AI algorithms, we begin to see convergence in expressions, perspectives, and problem-solving strategies.”
This convergence threatens the cognitive diversity that is essential to societal advancement. Historically, groups and cultures have thrived precisely because of their varied ways of thinking, offering unique solutions and novel ideas to complex challenges. However, the ubiquitous influence of LLM-based chatbots, heavily trained on datasets dominated by Western, educated, industrialized, rich, and democratic (WEIRD) populations, risks narrowing this diversity. The training data’s inherent skew leads LLMs to produce outputs that echo a limited slice of human experience and epistemology.
More troublingly, the drive for polished, AI-assisted writing is gradually eroding individual voice and ownership over creative output. People using chatbots to refine text often find their distinctive linguistic fingerprints diluted, resulting in homogenized content that prioritizes statistical norms over personal flair. This can reduce users’ perceived creativity and diminish the motivational drive to generate original work. The broader implication is a reshaping of what is accepted as credible or authoritative language, altering social norms around communication and reasoning.
Recent empirical studies reinforce these concerns by showing that LLM-generated outputs lack the variability characteristic of human writing. Moreover, experiments examining group creativity indicate that when collective ideation processes rely on AI intermediaries, the total number and novelty of ideas diminish compared to groups collaborating without AI mediation. This suggests that LLMs may be unintentionally suppressing the collaborative cognitive dynamics critical for breakthroughs and diverse problem-solving.
Sourati further points to the social consequences of this homogenization. Even individuals who do not directly engage with AI tools feel indirect pressure to align their expression with prevailing AI-mediated norms. This dynamic can foster conformity, as people come to perceive AI-generated styles as socially acceptable or intellectually credible, making deviation seem less desirable or even suspect. Over time, this could solidify new conventional paradigms of thought and communication, potentially stifling dissent and innovation.
Beyond linguistic effects, the influence of LLMs on reasoning styles is equally significant. Current models emphasize linear “chain-of-thought” reasoning, which entails explicit stepwise logic. While effective for certain tasks, this approach underrepresents other cognitive styles such as intuitive and abstract reasoning, which are sometimes better suited for complex or novel problem-solving. Such an emphasis on linearity risks narrowing the cognitive toolkit available to users, subtly guiding their mental processes and expectations toward a constrained reasoning framework.
The gradual transfer of cognitive agency from users to models represents another critical concern. Users interacting with AI-generated suggestions frequently accept “good enough” continuations rather than actively crafting their own responses. This behavioral shift results in diminished intellectual engagement and adaptation, as the generative power of the model supersedes personal ideation. This transition marks a fundamental change in human-AI interaction dynamics, with profound implications for how future generations will think and create.
Addressing these challenges requires deliberate actions from AI developers. The commentary advocates for incorporating authentic, global diversity into LLM training datasets—not merely random noise but a comprehensive representation of the vast cultural, linguistic, and cognitive plurality present in humanity. Such inclusiveness would foster models capable of supporting a wider array of reasoning styles, linguistic expressions, and perspectives, thereby preserving the cognitive heterogeneity vital for innovation and societal resilience.
Sourati emphasizes that enhancing diversity in AI models must be paired with shifts in human interaction paradigms. Users should be encouraged to engage critically and creatively with AI outputs instead of passively adopting them. Educational initiatives and interface designs must promote active exploration, allowing human cognitive capacities to flourish alongside and in cooperation with AI technologies.
Ultimately, reinforcing cognitive diversity through the thoughtful integration of heterogeneous data sources and user practices promises to enrich collective intelligence. Diverse AI systems can function not as homogenizers but as amplifiers of human potential, enabling societies to confront increasingly complex and multifaceted challenges with a broader spectrum of ideas and approaches. This vision of AI as a collaborator rather than a standardizer highlights the urgent need for conscious stewardship in AI development and deployment.
The research underlines an urgent philosophical and practical crossroads: will AI technologies become agents of uniformity, compressing the vast spectrum of human expression into monolithic norms? Or will they serve as catalysts for expanding cognitive frontiers, amplifying the distinctiveness and creativity that define humanity’s past and future? The answer lies in recognizing and addressing the biases embedded in AI and reaffirming the fundamental role of diversity in intellectual and cultural evolution.
In summary, as AI chatbots and LLMs become pervasive tools in everyday life, understanding and mitigating their homogenizing effects is paramount. The ongoing technological revolution must be accompanied by rigorous efforts to embed diverse linguistic, cultural, and reasoning traditions into AI systems. By doing so, we can safeguard the diversity of human thought and maintain the fertile ground upon which creativity, innovation, and societal progress depend.
Subject of Research:
Not applicable
Article Title:
The homogenizing effect of large language models on human expression and thought
News Publication Date:
11-Mar-2026
Web References:
http://www.cell.com/trends/cognitive-sciences
http://dx.doi.org/10.1016/j.tics.2026.01.003
References:
Sourati et al., “The homogenizing effect of large language models on human expression and thought,” Trends in Cognitive Sciences, March 2026.
Keywords:
Artificial intelligence, Large language models, Cognitive diversity, Human thought, AI homogenization, Language models, Reasoning styles, Creativity, Cultural bias, AI ethics

