In the rapidly evolving landscape of technology, artificial intelligence (AI) is no longer a futuristic concept confined to laboratories and speculative fiction. Governments worldwide are increasingly integrating AI systems into their operations, seeking transformative efficiencies and novel capabilities within the public sector. However, beneath this wave of adoption lies a stark reality: despite AI's enormous potential, more than 80% of government AI projects fail to achieve their goals. This gap underscores the pressing need for a robust, strategic framework to govern AI deployment in public administration effectively and responsibly.
Currently, approximately 70% of countries employ AI to optimize internal government processes — streamlining workflows, automating routine tasks, and enhancing data analysis. Beyond operational efficiency, about one-third of these nations are harnessing AI to support policy design and implementation, leveraging intelligent systems for scenario simulations, impact assessments, and real-time feedback mechanisms. Some governments are even exploring the more radical use of AI as a functional substitute for core governance tasks. However, these ambitions must be tempered with caution: AI's complexity often exposes institutional vulnerabilities and risks amplifying existing systemic dysfunctions if systems are implemented without rigorous oversight and thoughtful planning.
This context has propelled a consortium of experts spearheaded by Professor Catherine Régis from the Université de Montréal’s Institute for Data Valorization (IVADO) and Professor Florian Martin-Bariteau from the University of Ottawa to deeply analyze the success factors and pitfalls in AI adoption within public institutions. Their collaborative research culminates in a comprehensive global policy brief titled “Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration in the Age of AI,” which articulates strategic policies to harness AI’s potential while mitigating its risks. This policy manual emerges as a critical tool for governments navigating the intersection of technological innovation and public governance.
At the heart of their findings is the recognition that AI’s transformative power extends far beyond the sophistication of algorithms themselves. Instead, the outcomes hinge profoundly on the institutional frameworks that govern AI integration – including accountability structures, public servant engagement, vendor relations, and institutional resilience planning. Technological prowess alone cannot substitute for robust governance mechanisms that ensure AI’s responsible deployment aligns with ethical standards, public trust, and operational transparency.
Canada exemplifies these principles through its recent initiatives, notably the federal public service's use of AI platforms to synthesize over 11,000 stakeholder submissions during consultations on updating the national AI strategy. This pragmatic, measured approach reflects a paradigm of "slow, steady, but ambitious" AI adoption, emphasizing deliberate, well-governed integration over hasty implementation. As Catherine Régis notes, such cautious ambition reflects governmental responsibility rather than indecision, underscoring ethical stewardship in AI governance.
The policy brief recommends an integrated approach, starting with redesigning public services around real, well-defined problems before introducing AI solutions. This problem-driven methodology ensures AI applications build upon existing successes and are co-created with public servants who intimately understand day-to-day administrative challenges. Such ground-up collaboration fosters AI tools that are not just technologically advanced but practically effective and contextually relevant.
Beyond procedural redesign, the authors stress the urgent need to invest in institutional capacities. This entails comprehensive training programs for public servants and establishing interdisciplinary teams that blend expertise in data science, law, ethics, and policy. Such cross-functional teams are essential for navigating AI’s multifaceted effects on governance—balancing technical feasibility with legal compliance and social considerations.
Crucially, the brief also addresses the imbalance of power between the public sphere and private AI vendors. Governments must proactively counteract vendor dominance through collective procurement strategies and cooperative development models. By cultivating open collaboration and sharing AI tools tailored to public requirements, governments can reduce dependency on external proprietary technologies and foster innovation aligned with public values and needs.
At the foundation of this governance framework lies the imperative to embed transparency, accountability, and oversight mechanisms that cultivate public trust. AI systems inherently introduce complexity and opacity, and without adequate disclosure protocols and audit trails, they risk heightening skepticism among both civil servants and citizens. Building a resilient "public trust stack" thus underpins ethical AI deployment, safeguarding democratic values amidst technological upheaval.
Florian Martin-Bariteau summarizes this stance emphatically: transformative AI integration is only credible when driven bottom-up by real problem-solving and robust governance frameworks. Absent such anchoring principles, AI risks merely amplifying preexisting public sector malfunctions—exacerbating distrust and inefficiencies rather than resolving them. Transparency, accountability, and participatory design become non-negotiable pillars of effective AI-enabled public administrations.
These policy recommendations have been forged during a high-level week-long global retreat convened in December 2025, involving AI experts spanning continents: North and South America, Africa, Europe, and Asia. This multicultural convergence reflects the universal urgency and shared challenges of AI governance across diverse political and cultural contexts. Supported by Canadian institutions like CEIMIA and Mila and facilitated by international partners, this initiative symbolizes a bold step toward harmonizing AI policy frameworks worldwide.
The research and policy brief emerge from the synergy of IVADO—a cross-sectoral, interdisciplinary Canadian consortium dedicated to responsible AI innovation—and the AI + Society Initiative based at the University of Ottawa, which focuses on ethical, legal, and societal AI implications through transdisciplinary research. Together, these institutions underscore the critical principle that AI governance must be as much about societal values and human rights as about technical innovation and operational efficiency.
As AI continues to reshape the very fabric of governance, the stakes grow higher for public administrations to develop resilient, transparent, and equitable frameworks. This policy brief offers a beacon for governments worldwide: a call to embrace thoughtful, problem-driven AI integration that honors democratic principles and institutional accountability. Only through such a deliberate approach can AI truly become a force for positive transformation in the public domain rather than an accelerant of systemic dysfunction and distrust.
Subject of Research: People
Article Title: Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration
News Publication Date: 9-Mar-2026
Image Credits: Faculty of Law, University of Ottawa
Keywords: Artificial intelligence, AI governance, public administration, public sector AI, policy design, institutional capacity, accountability, transparency, ethical AI, AI implementation risks, public trust, AI and government