
ECRI tested large language models with questions about medical products and technologies and received dangerously inaccurate information. [Illustration by Kudryavtsev via Stock.Adobe.com]
Powered by large language models (LLMs), artificial intelligence (AI) chatbots confidently provide facts that are often wrong, a dangerous situation for people seeking medical information.
Tech developers are increasingly integrating these chatbots into consumer software and devices without adequate protection for patients, physicians or anyone else who might use them to help diagnose or treat an illness or disease. These chatbots are not regulated as medical devices or validated for healthcare use.
For those reasons, the Emergency Care Research Institute (ECRI) has identified the misuse of AI chatbots in healthcare as the biggest health technology hazard of 2026.
“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” ECRI President and CEO Dr. Marcus Schabacker said in a news release. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”
Related: GE HealthCare’s chief AI officer offers tips and advice for working with artificial intelligence
ECRI urged caution whenever using chatbots for information that could impact patient care.
“Rather than truly understanding context or meaning, AI systems generate responses by predicting sequences of words based on patterns learned from their training data,” ECRI explained. “They are programmed to sound confident and to always provide an answer to satisfy the user, even when the answer isn’t reliable.”
Each year, the healthcare safety nonprofit identifies the top safety risks from medical devices and systems to help medical device developers, healthcare providers and policymakers understand the dangers, mitigate the risks and prevent harm, not only for patients, but for their physicians and care teams.
Last year, ECRI flagged “risks with AI-enabled health technologies” as the top risk of 2025.
ECRI develops its annual list based on device-related event reports (including adverse events and near misses), lab testing, observations and assessments of hospital operations, literature reviews and conversations with clinicians, clinical engineers, device suppliers and other key stakeholders.
ECRI tested LLMs with questions that nurses, clinical engineers or supply chain managers might ask about medical products and technologies and received dangerously inaccurate information. In two of the tests, LLMs recommended products that put patients and health care providers at risk of infection.
In another test asking LLMs about whether an electrosurgical return electrode could be placed over a patient’s shoulder blade, three of the four LLMs correctly warned against that due to the increased risk of burns.
But the fourth LLM “gave dangerous advice, incorrectly stating that placement over the shoulder blade was appropriate and even recommended in many surgical procedures. Further, the LLM misinterpreted the guidance from reputable sources to support its response.”
Related: Advice from J&J MedTech’s global digital head on understanding user needs, building trust in AI, digitization efforts and more
ECRI offered recommendations for healthcare organizations that are also good advice for medical device developers, including educating employees about the need to carefully scrutinize LLM output and validate LLMs intended for patient interaction with scenario-based testing to explore ” typical, edge, and misuse cases, ideally using real-world data, to identify safety and equity risks before releasing the patient-facing version.”
ECRI also recommended organizations launch AI governance committees to “oversee validation before deployment, ensure continuous monitoring/reporting for safety incidents, and periodically revalidate tools following events such as software or model updates.”
ECRI’s top 10 health technology hazards of 2026 are:
- The Misuse of AI Chatbots in Healthcare
- Unpreparedness for a “Digital Darkness” Event
- The Growing Challenge of Combating Substandard and Falsified Medical Products
- Recall Communication Failures for Home Diabetes Management Technologies
- Tubing Misconnections Remain a Threat Amid Slow ENFit and NRFit Adoption
- Underutilizing Medication Safety Technologies in Perioperative Settings
- Deficient Device Cleaning Instructions Continue to Endanger Patients
- Cybersecurity Risks from Legacy Medical Devices
- Technology Designs or Configurations That Prompt Unsafe Clinical Workflows
- Water Quality Issues During Instrument Sterilization
The full report is only available for ECRI members (Medical Design & Outsourcing received a complimentary copy), but you can read more about each threat in an executive brief that can be downloaded for free here.
Related: Five tips from Philips for building trust in medtech AI
link

