In recent years, artificial intelligence (AI) has been heralded as a transformative force in public administration, promising to enhance the efficiency and speed of welfare distribution systems. However, the implementation of AI in such sensitive domains has revealed deep-rooted ethical and societal challenges. A poignant example emerged from Amsterdam, where an AI pilot program called “Smart Check” was deployed to combat welfare fraud by analyzing a complex array of personal data points. Although designed to streamline decision-making, the system flagged applications deemed “high-risk” for further investigation, disproportionately targeting vulnerable populations including immigrants, women, and parents. This led to widespread criticism and eventual suspension of the system, drawing attention to the risks of bias and lack of transparency in AI-driven public services.
This case underscores a fundamental conundrum at the intersection of technology and social policy: AI systems, while promising operational gains, risk perpetuating existing inequalities and eroding public trust. Vulnerable groups often bear the brunt of these unintended harms, facing opaque processes that complicate contestation and redress. Recognizing these challenges, a collaborative research effort between the Max Planck Institute for Human Development and the Toulouse School of Economics embarked on an ambitious investigation into public attitudes toward AI in welfare allocation. Their study, published in Nature Communications, surveyed over 3,200 participants across the United States and the United Kingdom, seeking to understand the nuanced perspectives of both welfare claimants and non-claimants.
The central inquiry of the study addressed a realistic and ethically fraught trade-off: would individuals accept faster welfare decisions by machines if these came at the cost of more erroneous rejections? Participants were presented with scenarios contrasting human administrators, who processed claims with longer wait times, against AI systems that could expedite decisions but carried a 5 to 30 percent greater risk of incorrect denials. A striking divergence emerged: while non-recipients were relatively open to trading minor losses in accuracy for speed, participants who relied on social benefits were significantly more skeptical of AI-based adjudication.
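To make the quantitative stakes of this trade-off concrete, the following Python sketch works through one hypothetical scenario. The claim volume, wait times, and baseline error rate are invented for illustration and do not come from the paper, and treating the 30 percent figure as a relative increase in the denial rate is likewise an assumption.

```python
# Illustrative sketch only: the figures below are assumptions for exposition,
# not data from the study. It shows the kind of speed-versus-accuracy trade-off
# participants were asked to weigh.

def expected_outcomes(n_claims: int, wait_weeks: float, error_rate: float) -> dict:
    """Expected wrongful denials and cumulative waiting time for one pathway."""
    return {
        "wrongful_denials": n_claims * error_rate,
        "claimant_weeks_waiting": n_claims * wait_weeks,
    }

# Hypothetical human pathway: 4-week wait, 5% wrongful-denial rate.
human = expected_outcomes(n_claims=1000, wait_weeks=4.0, error_rate=0.05)

# Hypothetical AI pathway: 1-week wait, with a 30% relative increase in wrongful denials.
ai = expected_outcomes(n_claims=1000, wait_weeks=1.0, error_rate=0.05 * 1.30)

print(f"Human review: {human['wrongful_denials']:.0f} wrongful denials, "
      f"{human['claimant_weeks_waiting']:.0f} claimant-weeks of waiting")
print(f"AI review:    {ai['wrongful_denials']:.0f} wrongful denials, "
      f"{ai['claimant_weeks_waiting']:.0f} claimant-weeks of waiting")
```

Even in this toy calculation, a modest relative increase in the error rate translates into a concrete number of additional wrongful denials, which is exactly the cost that claimants and non-claimants weighed so differently.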
Lead author Mengchen Dong, a research scientist specializing in the ethical dimensions of AI, emphasizes a critical misalignment in policy-making: the assumption that aggregate public opinion sufficiently captures the preferences of all stakeholders is dangerously flawed. Her findings reveal that social welfare recipients not only harbor more profound reservations about AI but also feel misunderstood and marginalized in the discourse about technological adoption. This asymmetry is further complicated by the tendency of non-recipients to overestimate the trust that welfare claimants place in AI, a misperception that persists despite financial incentives aimed at enhancing empathetic understanding.
Methodologically, the researchers employed a series of controlled experiments simulating realistic decision dilemmas. Participants chose their preferred adjudication pathway either from their own standpoint or after adopting the vantage point of the other group. This perspective-taking technique was designed both to foster empathy and to capture heterogeneous attitudes across the demographic divide. In the UK cohort, the researchers deliberately balanced the sample between Universal Credit recipients and non-recipients to capture group discrepancies rigorously, while controlling for variables such as age, gender, education, income, and political orientation that might influence trust in AI.
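As a rough illustration of how such a design might be operationalized, the snippet below builds a balanced sample of recipients and non-recipients and randomizes whether each participant answers from their own perspective or the other group's. This is a hypothetical sketch under assumed group sizes and labels, not the authors' study materials.

```python
# Hypothetical sketch of a balanced, two-condition study design; group sizes and
# labels are illustrative and not taken from the published study.
import random

random.seed(42)  # reproducible assignment for the illustration

def assign_conditions(n_per_group: int = 400) -> list[dict]:
    """Create a balanced sample and randomize the perspective-taking condition."""
    participants = []
    for group in ("uc_recipient", "non_recipient"):
        for i in range(n_per_group):
            participants.append({
                "id": f"{group}_{i}",
                "group": group,
                # Each participant answers either from their own standpoint
                # or from the standpoint of the other group.
                "perspective": random.choice(["own", "other"]),
            })
    random.shuffle(participants)
    return participants

sample = assign_conditions()
print(len(sample), sample[0])
```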
Efforts to bridge the divide through incentives and assurances met with limited success. Financial rewards for accurate perspective-taking did little to correct non-recipients' systematic misperceptions. Similarly, introducing an appeal process, in which AI decisions could be contested before human administrators, only marginally increased participants' trust in AI decision-making. These results underscore how difficult it is to cultivate meaningful trust and acceptance, and highlight that procedural safeguards alone are insufficient to overcome deep-seated skepticism.
Importantly, the study reveals a broader political dimension: acceptance or rejection of AI in welfare distribution is interwoven with overall trust in government institutions. Both welfare claimants and non-claimants who were wary of AI systems also expressed diminished confidence in the administrations deploying these technologies. This skepticism poses a significant barrier to the successful integration of AI in public services, as diminished institutional trust undermines not only acceptance but engagement with welfare programs.
The research team advocates for a fundamental reevaluation of how AI systems for public welfare are designed and implemented. They caution against relying solely on aggregated data or majority opinion to guide development processes. Instead, there is a pressing need for participatory frameworks that actively incorporate the lived experiences and perspectives of vulnerable groups most affected by AI-enabled decisions. Without such inclusive approaches, there is a real possibility of exacerbating existing inequalities and generating cycles of distrust that ultimately compromise the efficacy and legitimacy of public administration.
Looking ahead, this research sets a precedent for ongoing empirical inquiries into AI governance in social policy contexts. Building upon their findings in the US and UK, the investigators plan to leverage infrastructures such as Statistics Denmark to engage directly with vulnerable populations and capture a richer tapestry of viewpoints. This cross-national collaboration will deepen understanding of how AI systems impact social welfare delivery and identify mechanisms to align technological innovation with principles of fairness, transparency, and social justice.
The findings also call for policymakers to recognize AI's double-edged nature in welfare administration. While AI can expedite service delivery and potentially reduce administrative burdens, this efficiency must not come at the expense of fairness or procedural rights. As such, transparent explanation of AI decision criteria, accessible appeal mechanisms, and participatory design processes must be regarded as integral, not optional, components of AI deployment in the public sector. Only by embedding these values can governments harness AI's potential while safeguarding the dignity and rights of society's most vulnerable.
This study advances the discourse on AI ethics by illustrating that technology adoption in public welfare schemes is as much a social challenge as a technical one. It challenges assumptions about universal acceptance of AI and spotlights the critical role of social context, trust, and inclusion in mediating technological impact. The results compel researchers, policymakers, and technologists to engage beyond traditional efficiency metrics and cultivate AI systems that genuinely reflect the diverse needs and concerns of all stakeholders.
In conclusion, the experience of the Amsterdam “Smart Check” pilot, combined with this comprehensive survey-based research, underscores the urgent need to rethink how AI is integrated into welfare systems. Without the deliberate inclusion of marginalized voices and attentive governance, AI risks becoming yet another mechanism of exclusion and disenfranchisement rather than empowerment. Embracing participatory design and fostering genuine dialogue with vulnerable communities will be essential to building just, trustworthy, and effective AI-powered public services for the future.
Subject of Research: People
Article Title: Heterogeneous preferences and asymmetric insights for AI use among welfare claimants and non-claimants.
News Publication Date: 29-Jul-2025
Web References: DOI: 10.1038/s41467-025-62440-3
Image Credits: MPI for Human Development
Keywords: Social research, Artificial intelligence