As artificial intelligence technologies accelerate at an unprecedented pace, society faces a critical crossroads: who holds the authority to govern these transformative tools, and by what mechanisms is this governance enacted? Recent research from the Universitat Oberta de Catalunya (UOC) examines this pressing question, scrutinizing how private technological initiatives are shaping not only the tools themselves but the fundamental frameworks of governance intertwined with identity and biometric verification. The study, authored by UOC doctoral researcher Andreu Belsunces Gonçalves in collaboration with Northeastern University's Laura Forlano, appears in AI & Society, a journal published by Springer Nature.
The study undertakes a detailed case examination of the ambitious project World, formerly Worldcoin, co-founded by OpenAI CEO Sam Altman. World proposes a future in which users verify their humanity by submitting to iris scans, receiving in return a digital identity certificate. This biometric authentication is not merely a technical novelty; it marks the inception of a new paradigm in which private entities assume control over digital identity and governance, a domain traditionally reserved for public institutions. The implications extend far beyond the mechanics of iris scanning: this infrastructure subtly rewrites the social contract underpinning digital citizenship and institutional legitimacy.
Importantly, Belsunces Gonçalves and Forlano advance the analytical concept of “sociotechnical fictions” to unpack these narratives. These fictions operate as powerful narratives about the future—depicted as inevitable and urgent technological progressions—that profoundly influence the trajectories of technological design and governance frameworks. In this context, sociotechnical fictions cloak technological decisions with a veneer of necessity, thereby constraining public debate and limiting democratic engagement in decision-making processes surrounding emerging AI infrastructure.
The research highlights the strategic use of emotional appeals in shaping public perception. By invoking fear of bots, fraud, and impersonation alongside hope for enhanced security and inclusion, initiatives like World craft compelling narratives that galvanize social support. This emotional duality fosters a sense of technological inevitability, persuading stakeholders and users to acquiesce to private governance models under the guise of progress and protection. Through sleek, user-centric design, these platforms further embed themselves into the daily fabric of users’ digital lives, normalizing the privatization of identity governance.
Crucially, the study delineates the political ramifications of this shift. It warns that the privatization of identity and governance functions risks eroding the legitimacy of democratic institutions by carving out parallel systems of authority. The transformation is couched within a broader ideological shift originating from cyberlibertarianism, an ethos that emerged in the 1980s and champions radical individualism, a diminished role for democratic processes, and the supremacy of market-driven engineering solutions over political deliberation. This ideological lens sheds light on the motivations and visions propelling projects like World, exposing tensions between democratic ideals and emerging techno-commercial governance models.
While these projects often benefit from vast public funding in their development phases, their ultimate deployment realigns power away from collective democratic stewardship toward privatized governance regimes. This paradox underscores the complex entanglement between state resources and private innovation ecosystems, raising profound questions about accountability, transparency, and rights in digital spaces. The research urges vigilance towards how these funding flows potentially subsidize infrastructures that could undermine long-established democratic norms.
From a technical standpoint, the reliance on biometric data—specifically iris scanning—to authenticate identity introduces critical security and privacy considerations. Iris recognition technologies employ sophisticated pattern-matching algorithms that analyze unique ocular features, offering high levels of accuracy and resilience against spoofing attempts. However, the centralization of such sensitive biometric identifiers within private systems presents substantial risks, including potential misuse, surveillance, and challenges in data sovereignty. The study calls for a nuanced discourse that balances the technological capabilities of biometric AI with robust ethical frameworks and public oversight mechanisms.
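To make the pattern-matching idea concrete, the sketch below illustrates the general principle behind classic iris-code comparison (the Daugman-style approach widely described in the biometrics literature): two binary codes extracted from iris texture are compared by fractional Hamming distance, with occlusion masks excluding unusable bits. This is an illustrative simplification using synthetic random bits, not World's actual implementation; the code length (2048) and match threshold (0.32) are assumed values borrowed from the literature, not from the study.

```python
import numpy as np


def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Bits occluded by eyelids or reflections are excluded via the masks;
    lower distances indicate a likelier match between the two eyes.
    """
    valid = mask_a & mask_b                    # bits usable in both codes
    disagreements = (code_a ^ code_b) & valid  # valid bits that differ
    return disagreements.sum() / valid.sum()


# Illustrative threshold only; real systems tune this empirically.
MATCH_THRESHOLD = 0.32

rng = np.random.default_rng(seed=0)
n_bits = 2048
code = rng.integers(0, 2, n_bits, dtype=np.uint8)
mask = np.ones(n_bits, dtype=np.uint8)

# Same eye re-imaged under noise: a small fraction of bits flip.
noisy = code.copy()
noisy[rng.choice(n_bits, 100, replace=False)] ^= 1

# A different eye produces statistically independent bits,
# so roughly half of them disagree (distance near 0.5).
other = rng.integers(0, 2, n_bits, dtype=np.uint8)

same_eye_dist = iris_hamming_distance(code, noisy, mask, mask)
different_dist = iris_hamming_distance(code, other, mask, mask)

print(same_eye_dist < MATCH_THRESHOLD)   # genuine pair accepted
print(different_dist < MATCH_THRESHOLD)  # impostor pair rejected
```

The key property the paragraph alludes to is the sharp statistical separation between genuine comparisons (few disagreeing bits) and impostor comparisons (about half the bits disagreeing), which is what gives iris recognition its accuracy; the centralization risks the study raises concern where such codes are stored and who controls them, not the matching arithmetic itself.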
The framing of future AI scenarios as unavoidable fosters a self-reinforcing cycle: sociotechnical fictions solidify collective expectations, which in turn catalyze concrete technological deployments that reaffirm the original narrative. This recursive dynamic can marginalize alternative governance models and stifle critical assessments of AI’s societal impacts. Such feedback loops amplify the need for interdisciplinary scholarship and policy interventions that critically interrogate not just how AI is built, but the sociopolitical imaginaries that shape its evolution.
In addition to theoretical contributions, this research provides conceptual tools to dissect the interplay between narratives, emotional drives, and the design choices underpinning digital infrastructures. It encourages stakeholders—including technologists, policymakers, and civil society—to recognize the power of speculative futures in molding present realities. Understanding these dynamics is vital to shaping AI governance paradigms that are inclusive, transparent, and accountable, resisting the slide toward privatized techno-authoritarian regimes.
Moreover, the study’s timing aligns with global discussions surrounding AI policy, ethical standards, and data governance, offering valuable insights for those engaged in the United Nations Sustainable Development Goals—particularly SDG 16, which emphasizes peace, justice, and strong institutions. By foregrounding the governance of identity and AI infrastructure as a cornerstone of democratic resilience, this work positions itself at the nexus of technological innovation and socio-political stewardship.
Ultimately, this research from the Universitat Oberta de Catalunya exemplifies transformative interdisciplinary inquiry that bridges computer science, sociology, political science, and ethics. It challenges the AI community and broader society alike to critically evaluate which futures are being designed, for whom, and under whose authority. As AI’s footprint continues to expand across every sector of human activity, grappling with these fundamental questions of governance and legitimacy becomes indispensable to securing a future where technology empowers rather than undermines democratic ideals.
Subject of Research: Not applicable
Article Title: World(coin) in the AI future: how sociotechnical fictions are instrumental to the cyberlibertarian transition
News Publication Date: February 18, 2026
Web References:
- https://doi.org/10.1007/s00146-026-02913-1
- https://link.springer.com/article/10.1007/s00146-026-02913-1
References:
Belsunces Gonçalves, A., Forlano, L. World(coin) in the AI future: how sociotechnical fictions are instrumental to the cyberlibertarian transition. AI & Soc (2026).
Keywords: Artificial intelligence, Technology, Economics, Political science, Social research, Sociology

