As urban landscapes embrace technologies collectively termed "smart city" systems, a new frontier in civic administration and urban living is emerging, one laden with both promise and profound ethical complexity. These technologies automate crucial facets of municipal services, from the autonomous dispatch of law enforcement to fine-grained, sensor-driven management of traffic flows. Embedding artificial intelligence (AI) into these everyday functions is not without controversy, however, and has prompted significant discourse about how ethical frameworks should guide the behavior of these innovative yet opaque systems.
At the heart of this debate lies an urgent question: how can we ensure that the values guiding AI-driven technologies align with the ethical norms and moral expectations of citizens? Recent research from North Carolina State University moves this conversation forward by advancing a methodical approach to capturing and codifying ethical decision-making for smart city applications. The approach pivots on the Agent-Deed-Consequence (ADC) model, a carefully designed ethical framework that breaks moral evaluation into three pillars: the moral agent's intent, the deed itself, and the consequences that follow.
The ADC model, traditionally a lens in human moral philosophy, is being recast into a technical blueprint for AI systems. This recasting is groundbreaking because it operationalizes complex ethical intuitions into precise, programmable logic, allowing AI systems to discern not just what is factually true in their environment, but critically, what ought to be done under varying circumstances. This distinction is vital in the real-world deployment of smart city technology, where AI must make split-second decisions that carry tangible consequences for public safety and fairness.
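To make the idea concrete, the sketch below shows one way such an evaluation could be expressed in code. It is a minimal, hypothetical illustration assuming signed scores for agent, deed, and consequence combined by a simple weighted sum; the names (`Scenario`, `adc_judgment`) and the linear scoring rule are assumptions for exposition, not the formalism published by the researchers.

```python
from dataclasses import dataclass

# Hypothetical, simplified sketch of an ADC-style evaluation.
# Each component is scored on a signed scale: positive (praiseworthy),
# zero (neutral), or negative (blameworthy).

@dataclass
class Scenario:
    agent_intent: float    # e.g. +1 benevolent, 0 neutral, -1 malicious
    deed: float            # e.g. +1 rule-following, -1 rule-violating
    consequence: float     # e.g. +1 beneficial, -1 harmful

def adc_judgment(s: Scenario,
                 w_agent: float = 1.0,
                 w_deed: float = 1.0,
                 w_consequence: float = 1.0) -> str:
    """Combine the three ADC components into an overall verdict.

    The weights and the linear combination are illustrative choices,
    not the formalism used in the published paper.
    """
    score = (w_agent * s.agent_intent
             + w_deed * s.deed
             + w_consequence * s.consequence)
    if score > 0:
        return "acceptable"
    if score < 0:
        return "unacceptable"
    return "neutral"

# An ambulance running a red light: good intent, rule-violating deed,
# beneficial consequence -> judged acceptable under equal weights.
print(adc_judgment(Scenario(agent_intent=1, deed=-1, consequence=1)))
```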
Consider a scenario all too familiar in urban centers: an AI system that monitors acoustic signals to detect gunfire and automatically dispatch law enforcement. If the system incorrectly interprets a loud noise as a gunshot, it could summon an aggressive police response with severe community implications. Who is accountable for such errors? More importantly, how can AI differentiate between legitimate alerts and false positives in a manner aligned with community values? Current standards lack a unified, principled framework for programming these decisions, leaving a void that the ADC model aims to fill through its robust ethical calculus.
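As a purely hypothetical illustration of how such a gap might be narrowed, the sketch below maps an acoustic alert to a graded response rather than an all-or-nothing dispatch. The confidence threshold, the `corroborating_reports` input, and the response tiers are invented for this example and do not describe any deployed system.

```python
# Hypothetical response policy for an acoustic gunshot-detection alert.
# Thresholds and tier names are assumptions made for illustration.

def dispatch_decision(detection_confidence: float,
                      corroborating_reports: int) -> str:
    """Map an alert to a response tier so that a likely false positive
    triggers verification before an aggressive dispatch."""
    if detection_confidence >= 0.9 or corroborating_reports >= 2:
        return "dispatch_armed_response"
    if detection_confidence >= 0.6:
        return "dispatch_patrol_to_verify"
    return "log_and_monitor"

# A moderately confident detection with no corroboration is verified first.
print(dispatch_decision(0.65, corroborating_reports=0))
```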
The incorporation of deontic logic, a branch of logic concerned with obligation and permission, into the ADC framework is a vital innovation. Deontic logic can encode not only factual realities but also normative imperatives, which guide behavioral decisions according to ethical principles. An AI embedded with the ADC model can therefore weigh orders or requests against a structured ethical backdrop, recognizing when an action is permissible, obligatory, or forbidden in the context of smart city governance.
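A minimal sketch of how those three deontic statuses might be attached to actions is shown below. The action labels, the rule table, and the default of treating unlisted actions as permissible are assumptions made for illustration, not the encoding used in the paper.

```python
from enum import Enum, auto

# Illustrative deontic statuses attached to (action, context) pairs.

class Deontic(Enum):
    OBLIGATORY = auto()   # must be done
    PERMISSIBLE = auto()  # may be done
    FORBIDDEN = auto()    # must not be done

RULES = {
    ("yield_signal", "verified_emergency_vehicle"): Deontic.OBLIGATORY,
    ("yield_signal", "ordinary_vehicle"): Deontic.PERMISSIBLE,
    ("yield_signal", "unauthorized_light_use"): Deontic.FORBIDDEN,
}

def status(action: str, context: str) -> Deontic:
    # Assumed default: actions with no explicit norm are permissible.
    return RULES.get((action, context), Deontic.PERMISSIBLE)

print(status("yield_signal", "verified_emergency_vehicle"))  # OBLIGATORY
```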
Traffic management offers another vivid illustration of the model's potency. When an ambulance with flashing lights approaches an intersection, an AI system can recognize that prioritizing its passage is a moral imperative and adjust traffic signals accordingly. By contrast, an unauthorized vehicle flashing its lights to bypass congestion is making an illegitimate request, which the AI should recognize and disregard. It is this nuanced capacity to evaluate context, intention, and outcome simultaneously that separates ethical AI from mechanistic automation.
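The following sketch captures that contrast in code, assuming a hypothetical traffic-signal controller with a credential-verification step; the function name and its inputs are invented for illustration.

```python
# Hypothetical traffic-signal controller that honors the obligation to
# yield for verified emergency vehicles and ignores illegitimate requests.

def handle_priority_request(is_emergency_vehicle: bool,
                            credentials_verified: bool) -> str:
    if is_emergency_vehicle and credentials_verified:
        # Obligatory: clear the intersection for the ambulance.
        return "grant_green_corridor"
    if is_emergency_vehicle and not credentials_verified:
        # Claimed but unverified: hold the normal cycle and verify first.
        return "hold_and_verify"
    # A private car flashing lights to jump the queue: do not comply.
    return "ignore_request"

print(handle_priority_request(True, True))    # grant_green_corridor
print(handle_priority_request(False, False))  # ignore_request
```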
Beyond individual scenarios, the challenge remains to validate this alignment of ethical AI behavior across the diverse spectrum of smart city infrastructures. The researchers underscore the importance of rigorous simulation testing that replicates real-world complexities, ensuring that the ADC model operates consistently and predictably. Successfully navigating this validation phase would mark a transformative moment, as ethical AI decision-making becomes a cornerstone feature embedded directly into the fabric of smart city networks globally.
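One simple form such testing could take is a scenario-replay harness that checks a decision policy for determinism and for agreement with expected verdicts. The sketch below is hypothetical: the scenario format and the embedded policy are assumptions, not the researchers' test suite.

```python
# Scenario-replay harness: each entry pairs inputs with the verdict the
# policy is expected to return.  Both are invented for illustration.

SCENARIOS = [
    ({"emergency": True, "verified": True}, "grant_green_corridor"),
    ({"emergency": False, "verified": False}, "ignore_request"),
]

def policy(inputs: dict) -> str:
    if inputs["emergency"] and inputs["verified"]:
        return "grant_green_corridor"
    return "ignore_request"

def run_validation() -> bool:
    for inputs, expected in SCENARIOS:
        # Determinism check: repeated runs on the same inputs must agree.
        first = policy(inputs)
        assert all(policy(inputs) == first for _ in range(100))
        # Correctness check against the expected verdict.
        if first != expected:
            return False
    return True

print(run_validation())  # True if the policy behaves consistently
```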
While traditional human communication allows ethical guidelines to be explained and internalized through dialogue and education, this human adaptability cannot be directly transferred to AI. Instead, computational systems require a mathematically rigorous formula that transparently captures the chains of moral reasoning, enabling consistent application without ambiguity. The ADC model elegantly fulfills this necessity, bridging philosophy and computer science to empower AI systems with transparent, reproducible ethical decision-making.
This integration of ethical philosophy, formal logic, and AI technology offers promising new avenues for civic leaders and technologists wrestling with the rapid advancement of urban automation. As cities worldwide increasingly depend on AI to manage everything from surveillance to emergency response, embedding ethical accountability directly into these systems will be crucial not only to maintain public trust but also to uphold the democratic values that underpin urban communities.
The implications of this research extend far beyond smart city applications and signal a broader shift in how society may govern AI technologies moving forward. By developing frameworks that respect human ethics and translate them into actionable computational directives, researchers are laying the groundwork for AI systems that support—not undermine—the social contracts upon which modern life depends.
The pioneering work published in the open access journal Algorithms by Veljko Dubljević, Daniel Shussett, and colleagues presents a clear, well-founded pathway toward harmonizing the rapid technological evolution of urban environments with the enduring ethical standards demanded by their inhabitants. The journey ahead will require interdisciplinary collaboration, extensive testing, and an ongoing commitment to examine how value-driven AI can and should operate in our cities.
As smart city technologies become increasingly ubiquitous—from automated policing alerts to intelligent traffic control—the incorporation of ethically informed decision-making frameworks such as the ADC model will undoubtedly play a pivotal role in shaping the future of urban life. Embracing this challenge today can prevent the pitfalls of unchecked automation tomorrow, ensuring our cities evolve not just in efficiency but also in fairness, justice, and respect for all citizens.
Subject of Research: Not applicable
Article Title: Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics
News Publication Date: 3-Oct-2025
Web References:
https://www.mdpi.com/1999-4893/18/10/625
http://dx.doi.org/10.3390/a18100625
References:
Dubljević, V., Shussett, D., et al. Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics. Algorithms 2025, 18(10), 625.
Image Credits: Not provided
Keywords: Smart cities, AI ethics, Agent-Deed-Consequence model, deontic logic, moral AI, urban automation, ethical decision-making, autonomous policing, traffic AI, ethical frameworks, artificial intelligence, civic technology