New in the Hastings Center Report, May-June 2018
Genetic Privacy, Disease Prevention, and the Principle of Rescue
Madison K. Kilbride
People who undergo genetic testing sometimes discover that they carry mutations that increase their risk of hereditary disease. Even if they want to keep their genetic information private, are they ethically obligated to tell family members who might also be affected? Madison K. Kilbride, a postdoctoral fellow in the Department of Medical Ethics and Health Policy at the Perelman School of Medicine at the University of Pennsylvania, argues that patients' duty to warn their biological relatives is grounded in the principle of rescue: the idea that one ought to minimize the risk of harm to others when the expected harm is serious and the cost to oneself is minimal. This framework, Kilbride argues, avoids unnecessarily exposing patients' genetic information while still protecting their relatives.
Another Voice:
S. Matthew Liao and Jordan Mackenzie don't think that the principle of rescue sufficiently grounds the duty to warn one's relatives about genetic risks. Instead, people ought to disclose this information because of their special obligations to their family members. Liao is the Arthur Zitrin chair of bioethics and the director of the Center for Bioethics at New York University. Mackenzie is an assistant professor and faculty fellow at the New York University Center for Bioethics.
Perspective: Groundhog Day for Medical Artificial Intelligence
Alex John London
The integration of artificial intelligence in medicine raises concerns "not only because of the specter of machines making life-and-death decisions but also because of the prospect of machines encroaching on realms of decision-making revered as the province of expert professionals," writes Alex John London, the Clara L. West professor of ethics and philosophy and the director of the Center for Ethics and Policy at Carnegie Mellon University. But humans are prone to lapses in judgment, and in some cases, medical AI systems can make more precise decisions than clinicians. London argues that the challenge of using medical AI is a human one: for clinicians to be cognizant of their own limitations and to rely, when necessary, on others, including machines, to do the best possible job.
Also in this issue:
- Incidental Findings in Low-Resource Settings
- Capacity for Preferences: Respecting Patients with Compromised Decision-Making
- Policy and Politics: Change without Change? Assessing Medicare Reimbursement for Advance Care Planning
Susan Gilbert, director of communications
The Hastings Center