AI in the Operating Room: Rising Reports of Injuries and Device Malfunctions

Artificial intelligence is being promoted as the next major leap in medicine, promising more accurate diagnoses, improved surgical planning and fewer medical errors. Yet as AI becomes increasingly embedded in operating rooms, regulators are recording a troubling rise in reports of medical device malfunctions, serious patient injuries and cases in which software misidentifies anatomical structures.

According to an in-depth Reuters investigation, one notable example involves a navigation system used in sinus surgery that was enhanced with machine-learning algorithms. Reports submitted to the U.S. Food and Drug Administration (FDA) link the system to dozens of malfunction incidents and at least ten serious patient injuries over the past four years.

The manufacturer announced in 2021 that it had integrated artificial intelligence into its surgical tool navigation platform for the treatment of chronic sinusitis. Prior to that year, reports to the FDA were limited. Following the introduction of AI, the number of reported malfunctions and adverse events rose sharply.

According to the data, several incidents involved cases in which the system allegedly misled surgeons about the position of instruments inside patients’ skulls. Reported injuries include cerebrospinal fluid leaks, perforation of the skull base and, in the most severe cases, strokes caused by carotid artery damage.

The companies involved have categorically rejected any causal link to artificial intelligence, stressing that FDA reports do not establish device fault. Nevertheless, two cases have proceeded to court, with plaintiffs arguing that the system was safer before AI was integrated.

The system in question is not an isolated case. The FDA has now cleared more than 1,350 medical devices that incorporate artificial intelligence — double the number approved in 2022. Alongside this surge in approvals, reports of problems have also increased.

Reported incidents include prenatal ultrasound systems that allegedly misidentified fetal limbs; AI-enabled implantable cardiac monitors — including devices manufactured by Medtronic — accused of failing to detect abnormal heart rhythms; and ultrasound software developed by Samsung Medison that was reported to confuse anatomical landmarks, though no injuries have been confirmed in those cases.

A study by researchers at Johns Hopkins, Georgetown and Yale found that dozens of AI-enabled medical devices have been linked to product recalls, many of them occurring within the first year after regulatory approval.

Regulatory oversight has struggled to keep pace with the rapid expansion of the technology. According to current and former FDA scientists, key AI evaluation teams have been weakened in recent years, increasing the workload for remaining staff.

Unlike pharmaceuticals, many medical devices — even those incorporating AI — are not required to undergo extensive clinical trials before reaching the market. Instead, they are often approved as “updates” to existing products, a practice that experts say introduces new layers of uncertainty when algorithms capable of learning and evolving over time are added.

Despite the risks, artificial intelligence has already demonstrated its potential benefits, particularly in radiology and cancer detection. However, the same features that make AI powerful — self-learning capabilities, complexity and limited algorithmic transparency — also make failures harder to detect and control.

As one medical regulation expert quoted by Reuters puts it, the traditional medical device approval model “was not designed for technologies that change their behavior over time.” And as artificial intelligence moves deeper into the operating room, the question is no longer whether it will transform medicine, but whether regulatory safeguards can keep up.