In the rapidly evolving landscape of modern surgery, traditional frameworks for understanding the role and impact of technology are becoming increasingly obsolete. The long-standing paradigm that views technologies merely as tools—passive instruments designed solely to serve human intentions—is being challenged by an emerging perspective that recognizes these devices not just as tools but as interactive environments. This shift is crucial, especially in operating rooms where robotic systems and artificial intelligence (AI) are becoming integral components of medical practice.
Post-phenomenological philosophy provides an insightful framework for this new understanding. It urges a departure from thinking about technology purely as a means to an end and instead views it as an environment in which human experiences unfold. Technologies in surgical settings are thus not merely extensions of the surgeon’s hands but complex interactive systems that mediate the relationship between humans and the world around them. This concept of technological mediation highlights that our interactions are fundamentally shaped and structured by these technological environments, changing how surgeons engage with patients and procedures.
Surgical robots and associated AI-driven systems represent this new category of interactive devices. Unlike a scalpel, which is directly wielded by the surgeon, robotic platforms such as the “da Vinci” system or monitoring tools such as the “OR Black Box” possess capabilities that transcend mere passive function. These devices actively engage in the surgical process, responding dynamically to sensory inputs and algorithmic computations, thereby creating a hybrid environment where human and machine actions intertwine in real time.
This fusion has profound ethical implications. The classical approach to surgical ethics, which focuses on human agency and accountability, confronts significant challenges when faced with the hybrid nature of modern surgical interventions. If technology acts independently within the surgical process, traditional notions of responsibility—rooted entirely in human action—must evolve. This ontological shift demands a reconsideration of what it means to be responsible in a context where agency is shared and distributed across both human and non-human actors.
One of the most pressing questions emerging from this shift concerns the allocation of responsibility in cases where surgical outcomes involve semi-autonomous robotic systems. Given the autonomous decision-making embedded within these technologies—powered by machine learning (ML) and AI—direct human control is often limited or mediated. While robots lack consciousness and therefore do not satisfy the essential conditions for moral responsibility, it is equally problematic to assign full liability to the medical team when parts of the surgical action exceed direct human influence.
This dilemma introduces the notion of “hybrid responsibility,” a concept proposed to capture the complex interplay between human decision-making and autonomous machine behavior. Here, responsibility is not abrogated but rather reconfigured: it is shared and bounded by the constraints of human knowledge, control, and foreseeability of outcomes. Robotic surgery thus becomes a space where accountability must encompass multiple agents, including designers, programmers, and operators, while recognizing the operational autonomy of the machines involved.
Expanding on this, the idea of “distributed responsibility” emerges prominently. As articulated by scholars such as Taddeo and Floridi, responsibility in AI-mediated environments is diffused across a network of actors—ranging from developers and clinicians to the embedded algorithms and hardware systems. This multiplicity dilutes the possibility of pinpointing a single, clearly identifiable agent responsible for every action or outcome, reinforcing the view that ethical accountability in such contexts must be collective and systemic.
This paradigm shift is especially vivid in real-time surgical scenarios. For example, a robotic system might autonomously adjust instrument trajectories based on intraoperative sensor data, executing motions beyond the immediate commands of the surgeon. Similarly, AI algorithms may interpret complex imaging or patient data to provide recommendations during surgery. In these instances, attributing singular responsibility becomes untenable, as the outcomes result from a constellation of interactions among humans, machines, and software ecosystems.
The practical implications of this distributed agency raise critical considerations for regulatory frameworks and medical ethics. Current liability models, developed under assumptions of clear human agency and control, risk inadequacy when addressing errors or adverse outcomes linked to autonomous or semi-autonomous technologies. The legal and ethical communities must adapt by developing standards and practices that accommodate the hybrid nature of surgical responsibility, ensuring accountability without stifling innovation and trust in these technologies.
Furthermore, this evolving landscape spotlights the importance of transparency and traceability in robotic and AI systems. Establishing comprehensive documentation of design decisions, programming logic, and operational parameters is vital to update responsibility paradigms meaningfully. Such transparency can help determine the extent to which technical failures, operator actions, or systemic limitations contribute to unforeseen events, facilitating fair and informed assessments of accountability.
Looking ahead, the notion of fully autonomous robotic surgeons presents even more radical ethical challenges. Though current robotic surgery systems are largely semi-autonomous, the prospect of AI-driven machines completely replacing human surgeons invites questions about the very concept of responsibility. In such scenarios, if human intervention becomes minimal or nonexistent, traditional frameworks may fail entirely to account for moral and legal responsibility. While these developments remain speculative at present, they emphasize the need for anticipatory ethical discourse.
Nonetheless, amid these complex challenges lies a pragmatic priority: building and sustaining trust between surgeons and their technological counterparts. As AI assumes increasingly sophisticated roles in the operating room, fostering confidence in these systems is paramount. Trust not only influences the acceptance and efficient use of technology but also shapes ethical engagements with hybrid surgical processes, underscoring shared responsibility and collaboration between human and machine agents.
In conclusion, the integration of advanced robotics and AI into surgical practice signifies a profound ontological and ethical transformation. Moving beyond the long-standing model of technology as passive tool, we now inhabit a reality where technologies constitute interactive environments, deeply mediating the human experience. This reality necessitates a recalibration of responsibility, shifting toward distributed, hybrid models that reflect the entwined nature of human and machine agency. As surgical technology continues to evolve, so too must our ethical frameworks, ensuring responsibility is appropriately allocated within this new paradigm while fostering trust and safeguarding patient outcomes.
Subject of Research: Ethical challenges and responsibility frameworks in emerging surgical technologies involving robotics and AI.
Article Title: When the action is “hybrid”–ethical challenges of the emerging technologies in the operating room.
Article References:
Valera, L., Irarrázaval, M.J. & Gabrielli, M. When the action is “hybrid”–ethical challenges of the emerging technologies in the operating room. Humanit Soc Sci Commun 13, 176 (2026). https://doi.org/10.1057/s41599-025-06455-7