
Automation Complacency: Navigating the Ethical Challenges of AI in Healthcare

On November 13, the Columbia School of Professional Studies M.S. in Bioethics program hosted a timely panel discussion, "Automation Complacency: AI-Induced Abdication of Medical Decision-Making," examining the legal and ethical challenges of implementing artificial intelligence in healthcare settings.

The event, led by David N. Hoffman, J.D., Assistant Professor of Bioethics at Columbia University, brought together an international panel of experts, including Emily Beer, J.D., Faculty Associate at Columbia University, Chair of the Bioethical Issues Committee at the New York City Bar Association, and Bioethics program alum; Camille Castelyn, Ph.D., Post-Doctoral Fellow at the Centre for Ethics and Philosophy of Health Sciences, University of Pretoria, and a Bioethics program alum; Joel Janhonen, Doctoral Researcher in Paediatrics and Adolescent Medicine at the University of Turku; Larry Medsker, Ph.D., Research Professor at the University of Vermont and Director of Science & Technology Ethics Policy at George Mason University; and Michael I. Saadeh, Researcher at Innovative Bioethics Forum.

Hoffman opened the program with an alarming real-world example: an AI algorithm used by insurance companies to determine post-acute care placement recommended discharging a patient who uses a wheelchair to their home, unaware that the patient lived in a fifth-story walk-up apartment. The case illustrated how algorithms can make seemingly reasonable recommendations while missing contextual information not captured in medical records, and it underscored that, at least for now, AI algorithms don't know what they don't know and therefore can't replicate the human capacity for humility.

The discussion centered on a recent paper by five of the panelists, published in the journal AI and Ethics, which is edited by the sixth panelist, Larry Medsker. Their paper introduced key concepts for understanding AI risks in clinical settings. Michael Saadeh, the paper's lead author, explained two fundamental psychological and behavioral phenomena: automation bias (the psychological inclination to defer to decision support tools) and automation complacency (the clinical trap that occurs when clinicians rely on AI systems without recognizing the need for proper human oversight).

As Saadeh explained, "If the bias is the psychological hole, the automation complacency is the clinical trap because of that hole." The panel distinguished between appropriate "decision support," where clinicians use AI to assist in deploying their own expertise, and "decision substitution," where clinicians defer judgment entirely to automated systems without recognizing those algorithms' limitations.

Beer provided a comprehensive legal analysis, discussing recent cases involving the irresponsible deployment of AI in healthcare. She highlighted the 2024 Texas Attorney General investigation into Pieces Technologies, which allegedly made misleading claims about its AI platform's accuracy in generating clinical summaries. She also examined lawsuits against major insurers in which plaintiffs alleged that AI models were programmed to automatically deny claims for medically necessary care; in one particularly troubling case, a heart surgery patient faced repeated denials and delays in receiving skilled nursing care, leaving her personally responsible for $67,000 in bills. Finally, Beer discussed the 2020 criminal case against Practice Fusion, which accepted kickbacks to design clinical decision support alerts that increased opioid prescriptions.

Beer emphasized a fundamental principle throughout the legal discussion: "Personalized and holistic care really does require a person who cares." She argued that while AI systems might be able to generate accurate diagnoses, they address only one part of patient wellbeing, missing crucial elements like patient preferences, effective communication, and clinical empathy.

Castelyn explored the sociocultural dimensions of AI implementation, questioning whether universal values can be incorporated into decision support tools deployed across diverse cultural contexts. She illustrated how concepts like Ubuntu from South Africa, oral traditions from West Africa, and high-context communication in Japan present both opportunities and challenges for AI systems attempting to deliver culturally sensitive care.

Janhonen offered a framework for "mindful integration" of AI, presenting a spectrum that runs from tasks ready for automation now to those whose personal and contextual subtleties will require human judgment until the technology improves. He outlined specific design choices to combat complacency, including explainability features; critical decision points that require substantive human confirmation, not just a checked box; and configurable confidence thresholds that trigger human oversight when AI uncertainty increases.
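To make these design choices concrete, the following minimal Python sketch shows one way a configurable confidence threshold and a substantive-confirmation step might be wired together. Everything in it (the Recommendation fields, the review function, the 0.90 cutoff, and the 20-character rationale check) is a hypothetical illustration, not code from the panelists' paper.

```python
from dataclasses import dataclass

# Illustrative sketch only: names, fields, and the thresholds below are
# assumptions invented for this example, not drawn from the panel or paper.

@dataclass
class Recommendation:
    action: str        # e.g., "discharge to home"
    confidence: float  # model's self-reported certainty, 0.0 to 1.0
    rationale: str     # explainability feature: why the model suggests this

CONFIDENCE_THRESHOLD = 0.90  # configurable; lowering it escalates more cases

def review(rec: Recommendation, clinician_note: str = "") -> str:
    """Gate an AI recommendation: high-confidence output still goes to a
    clinician for sign-off (support, not substitution); low-confidence
    output demands a substantive written rationale, not a checked box."""
    if rec.confidence >= CONFIDENCE_THRESHOLD:
        return f"FOR SIGN-OFF: {rec.action} (model rationale: {rec.rationale})"
    if len(clinician_note.strip()) < 20:  # crude stand-in for "substantive"
        return (f"ESCALATED: confidence {rec.confidence:.0%} is below "
                f"{CONFIDENCE_THRESHOLD:.0%}; a clinician rationale is required")
    return f"CONFIRMED by clinician: {clinician_note.strip()}"

if __name__ == "__main__":
    rec = Recommendation(
        action="discharge to skilled nursing facility",
        confidence=0.72,
        rationale="mobility score below home-discharge cutoff",
    )
    print(review(rec))  # escalates: no clinician input yet
    print(review(rec, "Patient lives in a fifth-story walk-up; home "
                      "discharge is unsafe despite stable vitals."))
```

The point of such a design is that the system never acts alone: above the threshold it still routes output to a clinician for sign-off, and below it, it refuses to proceed until a human supplies actual reasoning rather than a mere acknowledgment.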

Medsker, drawing from his work editing an upcoming handbook on AI ethics with 50 wide-ranging chapters, provided historical context, noting that AI research began 70 years ago and that many concerns about automated systems were identified decades ago. He emphasized the importance of data quality and governance, advocating for "human-centered AI" design. "Human judgment and considerations should complement the automation," Medsker argued, "but not replace the humans."

The panel concluded with a robust discussion of pressing issues, including patient privacy in ambient recording systems used during office visits, liability allocation when AI systems err, and the need for hospitals to codify ethical standards for AI implementation. Throughout the conversation, panelists emphasized that personalized and holistic care fundamentally requires a person who cares, and that maintaining meaningful human oversight at critical decision points remains essential for patient safety and wellbeing.


About the Program

Columbia University's Master of Science in Bioethics grounds students in interdisciplinary approaches and models to address pressing bioethical challenges such as stem cell research and healthcare reform. The program prepares students to act as responsible and responsive leaders in this new and ever-growing field. It also includes a concentration in global bioethics, the first of its kind in the U.S. Columbia's Bioethics program offers a range of degrees and courses.

Learn more about the program here. The program is available full-time and part-time, online and on-campus. 

