As artificial intelligence technologies become ever more integrated into daily life, a growing body of research is uncovering a subtle yet profound challenge posed by generative AI systems, particularly large language models such as ChatGPT. These models, prized for their ability to generate human-like text and assist with tasks from writing to answering complex questions, tend to produce content that is overwhelmingly generic and mainstream. This pattern, experts warn, risks constricting the diversity of perspectives users encounter, with far-reaching implications for culture, memory, and democratic discourse.
At the heart of the issue lie the architecture and training methodology of large language models (LLMs). These systems are trained on vast datasets scraped from the internet, the majority of which is English-language material dominated by popular content. To generate a response, the models predict the statistically most likely continuation of a prompt, which systematically favors the most common narratives and viewpoints. While this approach yields fluent and reliable output in many contexts, it also means that less prevalent or marginalized voices are frequently overshadowed or omitted entirely, inadvertently reinforcing a narrow worldview.
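To see the mechanism in miniature, consider the following Python sketch of a frequency-based answer predictor. It is a deliberate simplification (the corpus, names, and counts are invented for illustration and bear no resemblance to real training data), but it shows why decoding strategies that favor high-probability continuations surface only the most common answer:

```python
from collections import Counter

# Toy corpus in which one answer dominates; real training data is vastly
# larger, but the frequency effect works the same way in miniature.
corpus = [
    "important figure: napoleon", "important figure: napoleon",
    "important figure: napoleon", "important figure: lincoln",
    "important figure: lincoln", "important figure: toussaint",
]

# Estimate the answer distribution from raw frequency counts.
counts = Counter(line.split()[-1] for line in corpus)
total = sum(counts.values())
probs = {answer: n / total for answer, n in counts.items()}

# Greedy decoding always returns the single most frequent answer, so the
# rarer answer ("toussaint") is never surfaced at all.
print(max(probs, key=probs.get))  # -> napoleon
print(probs)                      # -> {'napoleon': 0.5, 'lincoln': 0.333..., 'toussaint': 0.166...}
```

Sampling from the full distribution would occasionally surface the rarer answer, but any strategy that weights answers by popularity will systematically underrepresent it.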
Professor Michal Shur-Ofry of the Hebrew University of Jerusalem, a legal scholar and AI governance expert, has articulated deep concerns about this phenomenon. In her recent publication in a prominent legal journal, she emphasizes that as LLMs become a trusted source of information, their repetition of the same mainstream answers could significantly reduce users’ exposure to a broad spectrum of cultural narratives and alternative perspectives. She argues that this homogenization of information not only affects individual cognition but also undermines social tolerance and the collective memory societies rely upon to maintain cohesion and diversity.
To illustrate this, Shur-Ofry and her colleagues posed a series of questions to ChatGPT, asking the model to name important figures of the nineteenth century or the best television series globally. The AI’s responses, while accurate, were heavily skewed toward Anglo-American figures and popular English-language media, overlooking countless significant contributions from non-English-speaking cultures and smaller communities. This narrow concentration is a direct consequence of the training data’s composition and of how the models statistically prioritize frequent information.
More concerning still is the self-reinforcing cycle embedded in current AI development. Outputs generated by LLMs are increasingly used as training data for successive models, so the dominance of popular narratives is perpetuated and even amplified across generations of AI, gradually concentrating the “universe” of information these models present. This feedback loop risks further marginalizing underrepresented voices and the richness of global cultural memory, potentially leading to a gradual erosion of diversity in public discourse.
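The dynamic can be illustrated with a toy simulation in Python: a model’s answer distribution is re-estimated, generation after generation, from its own sampled outputs. The `bias` exponent below is an invented parameter standing in for the tendency to over-weight frequent content; it is illustrative, not an empirical constant:

```python
import random

# Initial answer distribution: one mainstream answer, two niche ones.
dist = {"mainstream": 0.60, "niche_a": 0.25, "niche_b": 0.15}

def next_generation(dist, n=2000, bias=1.3):
    """Sample n outputs from the current model, mildly over-weighting
    frequent answers (bias > 1), then re-fit the distribution to the
    samples alone -- a toy stand-in for training on AI-generated text."""
    weights = [p ** bias for p in dist.values()]
    samples = random.choices(list(dist), weights=weights, k=n)
    return {a: samples.count(a) / n for a in dist}

for gen in range(8):
    dist = next_generation(dist)
    print(gen, {a: round(p, 3) for a, p in dist.items()})
# The mainstream answer's share grows each generation while the niche
# answers shrink toward zero; once an answer's probability hits zero,
# no later generation can ever recover it.
```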
Shur-Ofry’s work calls attention to the inadequacy of current AI governance frameworks, which typically emphasize transparency, privacy, and data security. While these principles are essential, they fall short of addressing the narrowing-world effect intrinsic to generative AI outputs. She proposes a novel regulatory principle, which she terms “multiplicity,” aimed at ensuring AI systems actively promote exposure to diverse narratives and encourage critical engagement from users.
Multiplicity, according to Shur-Ofry, should be an ethical and legal cornerstone in the design and deployment of AI. Developers should implement mechanisms that make users aware of the existence of multiple viewpoints, including alternative, less mainstream, or culturally diverse content options. This might involve technical features such as adjustable model “temperature” settings (raising the temperature flattens the model’s output distribution, so less common answers are sampled more often), or user interface designs that clearly indicate the presence of multiple plausible answers rather than a singular authoritative response.
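As a concrete sketch of how temperature works, the following Python snippet applies a temperature-scaled softmax to a set of invented candidate-answer scores; the scores and answer labels are assumptions made purely for illustration:

```python
import math

# Unnormalized scores ("logits") a model might assign to candidate answers;
# the values here are invented purely for illustration.
logits = {"mainstream answer": 4.0, "alternative view": 2.5, "niche perspective": 1.5}

def answer_distribution(logits, temperature=1.0):
    """Temperature-scaled softmax: low temperatures sharpen the distribution
    around the top answer; high temperatures flatten it, so rarer
    candidates are sampled more often."""
    scaled = {k: v / temperature for k, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {k: math.exp(v - m) for k, v in scaled.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

print(answer_distribution(logits, temperature=0.3))  # mass piles onto the mainstream answer
print(answer_distribution(logits, temperature=1.5))  # mass spreads across all three answers
```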
Complementing this, Shur-Ofry stresses the pressing need for widespread AI literacy. Users must be educated to understand the statistical biases inherent in LLM outputs and trained to engage critically with AI-generated content. Such literacy enables users to approach AI not as a definitive oracle but as a tool whose answers reflect the most popular patterns in its training data, encouraging follow-up questions and comparison across multiple sources.
The societal stakes of this research extend beyond technology. Cultural diversity and democratic vitality depend on a plurality of voices and narratives. When AI systems, which increasingly mediate access to information, veer toward a monoculture of ideas, they threaten to erode the social fabric, diminish empathy for marginalized communities, and impair collective memory. Shur-Ofry’s vision is to harness AI’s efficiency while preserving the complexity and richness of human experience.
To translate multiplicity from concept to reality, Shur-Ofry and collaborators in the computer science departments of the Technion and the Hebrew University are developing methods to broaden the diversity of LLM outputs. Their ongoing research aims to introduce straightforward, practical interventions: adjusting model parameters to widen the range of generated answers, and encouraging ecosystem-level diversity so that users can consult multiple AI platforms for a richer “second opinion,” as sketched below. These technical steps are crucial to ensuring multiplicity is not just theoretical but embedded in everyday AI interactions.
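A minimal sketch of such an ecosystem-level “second opinion” appears below: the same question is posed to two independently trained models and the answers are shown side by side. The snippet assumes the OpenAI and Anthropic Python SDKs as commonly documented; the model names and environment variables are placeholders that should be checked against current documentation:

```python
import os
from openai import OpenAI
from anthropic import Anthropic

QUESTION = "Name five important figures of the nineteenth century."

def ask_openai(question: str) -> str:
    # Standard chat-completions call from the OpenAI Python SDK.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute a current one
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_anthropic(question: str) -> str:
    # Standard messages call from the Anthropic Python SDK.
    client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=512,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

# Presenting both answers side by side makes divergence between models
# visible, inviting the user to question a single "authoritative" reply.
for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
    print(f"--- {name} ---\n{ask(QUESTION)}\n")
```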
In an era when information ecosystems are vulnerable to polarization and echo chambers, AI’s tendency to generate mainstream answers could exacerbate social fragmentation. Nonetheless, by consciously integrating multiplicity into AI governance and promoting AI literacy, society can steer these powerful technologies toward enhancing inclusivity, cultural preservation, and critical democratic practices. This future-oriented approach calls for urgent deliberation among policymakers, technologists, and users alike.
Ultimately, this emerging legal and ethical framework marks a paradigm shift in AI governance. Instead of simply managing risks related to privacy or bias, multiplicity advocates for fostering a landscape of AI-driven diversity—protecting the full spectrum of human culture and experience in an increasingly digital and automated world. Addressing the narrowing world wrought by AI is not merely a technical challenge but a societal imperative, ensuring these transformative tools enrich rather than impoverish the human condition.
Subject of Research: Not applicable
Article Title: Multiplicity as an AI Governance Principle
News Publication Date: 30-Jun-2025
Web References: https://arxiv.org/abs/2411.02989
Keywords: Artificial intelligence, Computer science