In the rapidly evolving landscape of artificial intelligence (AI), the integration of ethical oversight with technological advancements has become paramount. Researchers from Carnegie Mellon University and the University of Michigan have pioneered a novel framework, termed the capabilities approach-contextual integrity (CA-CI), designed to navigate the complex challenges surrounding privacy and human dignity posed by modern AI systems. This framework is particularly relevant for foundation models—advanced AI systems characterized by their adaptability and expansive contextual applications.
CA-CI offers a sophisticated fusion of two foundational concepts: contextual integrity, a theory of privacy originally developed by Helen Nissenbaum, and the capabilities approach, a human-centric normative theory advanced by Martha Nussbaum that defines essential conditions for a dignified life. By harmonizing these perspectives, CA-CI establishes a robust mechanism to evaluate AI governance comprehensively across varying socio-technical contexts and evolving operational landscapes.
At the core of CA-CI lies a reconceptualization of information flows in AI systems. Unlike traditional frameworks that depend on stable, observable contexts, CA-CI elevates the purpose behind data use to a fundamental parameter. This allows more sensitive detection of “scope creep,” where data originally collected for a specific use is repurposed in incompatible ways, potentially jeopardizing privacy norms and individual dignity. This dynamic recognition is crucial in AI governance, where autonomous learning and cross-domain integration continuously reshape data interactions.
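The idea of treating purpose as a first-class parameter of an information flow can be sketched in code. The following is an illustrative Python sketch only: the flow attributes, the `flags_scope_creep` function, and the compatibility rule are assumptions made for demonstration, not the researchers' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    """A contextual-integrity style flow, extended (per CA-CI's proposal)
    with an explicit purpose parameter."""
    sender: str
    recipient: str
    subject: str
    attribute: str
    purpose: str  # elevated to a first-class parameter

def flags_scope_creep(original: InformationFlow,
                      observed: InformationFlow,
                      compatible_purposes: dict[str, set[str]]) -> bool:
    """Flag a flow whose purpose has drifted outside the set of purposes
    declared compatible with the original collection purpose."""
    allowed = compatible_purposes.get(original.purpose, {original.purpose})
    return observed.purpose not in allowed

# Hypothetical example: clinical data repurposed for advertising.
collected = InformationFlow("patient", "clinic", "patient", "symptoms",
                            purpose="diagnosis")
reused = InformationFlow("clinic", "ad_network", "patient", "symptoms",
                         purpose="targeted_advertising")

print(flags_scope_creep(collected, reused,
                        {"diagnosis": {"diagnosis", "treatment"}}))  # True
```

A purpose-blind comparison of the same two flows would see only a changed recipient; making purpose explicit is what lets the incompatible reuse be flagged directly.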
The incorporation of dignity thresholds marks CA-CI’s groundbreaking contribution to AI ethics. Rooted in the capabilities approach, these thresholds provide universal benchmarks that define the minimum conditions necessary for humans to lead dignified lives. When AI systems infringe upon these thresholds, CA-CI identifies such instances as significant harms, thus furnishing clearer standards for harm assessment beyond abstract notions of fundamental rights.
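To make the threshold idea concrete, one could represent dignity thresholds as minimum levels on central capabilities and flag any deployment that pushes a capability below its floor. This is a minimal sketch under assumed names and an assumed 0-to-1 scale; the capability labels and numeric values are illustrative, not published benchmarks.

```python
# Assumed capability floors on a hypothetical 0-1 scale (illustrative only).
DIGNITY_THRESHOLDS = {
    "bodily_integrity": 0.7,
    "affiliation": 0.6,
    "practical_reason": 0.5,
}

def assess_harm(capability_impacts: dict[str, float]) -> list[str]:
    """Return the capabilities an AI deployment pushes below threshold —
    in CA-CI terms, the violations that count as significant harms."""
    return [cap for cap, level in capability_impacts.items()
            if level < DIGNITY_THRESHOLDS.get(cap, 0.0)]

impacts = {"bodily_integrity": 0.9, "affiliation": 0.4, "practical_reason": 0.8}
print(assess_harm(impacts))  # ['affiliation']
```

The point of the sketch is the evaluation structure, not the numbers: a violation is defined relative to a universal floor rather than to an abstract rights claim, which is what gives evaluators a clearer harm standard.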
This combined framework is not purely theoretical but directly applicable to contemporary regulatory environments, most notably the European Union’s AI Act, passed in 2024. The legislation is pioneering in its emphasis on safeguarding fundamental rights and imposing rigorous impact assessments for high-risk AI deployments. However, it stops short of precisely delineating what constitutes a dignity violation. CA-CI addresses this regulatory gap by operationalizing dignity within fundamental rights impact assessments, ensuring that AI governance is both principled and practicable.
One of the pressing challenges in AI oversight is the opacity inherent to complex, autonomous models. Unlike traditional software systems, these AI models continuously learn, generalize, and adapt, complicating traceability and accountability. The CA-CI framework counteracts this by structuring governance around both social context and human dignity, rather than relying solely on technical observability. This makes the governance mechanism resilient to the evolving nature of AI technologies.
Moreover, CA-CI supports anticipatory governance—a proactive approach that identifies and mitigates emerging risks before they culminate in tangible harms. By embedding dignity-based criteria, the framework enables regulators and stakeholders to foresee potential violations that have yet to be codified in law or policy, thereby enhancing the agility and responsiveness of AI governance mechanisms.
The framework also resonates deeply with the EU’s General Data Protection Regulation (GDPR), which enshrines the purpose limitation principle. CA-CI extends this principle by not only enforcing purpose specificity in data processing but also ensuring that deviations that threaten dignity are systematically flagged. This capability is particularly vital in an era when AI systems frequently repurpose data in unprecedented ways, often without explicit consent or clear regulatory guidance.
The interdisciplinary nature of CA-CI—with contributions from information science, computer science, ethics, and law—underscores the necessity of collaborative approaches to AI governance. Lead researcher Kat Roemmich’s foundational dissertation navigates these domains, bridging theoretical constructs with practical governance strategies. Meanwhile, coauthors Kirsten Martin and Florian Schaub bring deep expertise in information policy and technical privacy mechanisms, enriching the framework’s applicability and rigor.
Implementation of CA-CI within organizational and regulatory settings promises enhanced accountability and ethical compliance. It equips evaluators and AI providers with concrete tools to discern when AI practices transgress moral and legal boundaries rooted in dignity. Importantly, by clarifying harm thresholds, CA-CI enhances the enforceability of AI governance and contributes to public trust in AI systems.
The publication of this research in the prestigious IEEE Security & Privacy journal signals both its technical depth and practical significance. The work invites broader discourse among AI developers, policymakers, and ethicists to embrace more nuanced and principled governance frameworks as AI permeates every facet of society.
Ultimately, CA-CI exemplifies the kind of principled innovation necessary to reconcile technological progress with fundamental human values. As AI grows ever more complex and ubiquitous, frameworks like CA-CI will be indispensable in ensuring that these powerful tools serve humanity without compromising privacy, dignity, or rights.
Subject of Research:
Integrating privacy and dignity considerations into AI governance through a novel framework combining contextual integrity and the capabilities approach.
Article Title:
CA–CI: Integrating Contextual Integrity and the Capabilities Approach for Dignity Considerations in AI Governance
News Publication Date:
3-Feb-2026
Web References:
DOI: 10.1109/MSEC.2026.3654404
Keywords:
Generative AI, Machine learning, Computer science, Information processing, Information access, Information retrieval, Search engines, Open access, Big data, Statistics