No one's been sued successfully. Yet.
As of early 2026, there isn't a single fully resolved malpractice case in which a court has held anyone liable for bad medical advice from an AI chatbot. That sounds reassuring. It isn't.
What it means is that the legal system hasn't caught up to the technology. Cases are being filed. Frameworks are being written. Insurers are rewriting policies. The precedent-setting lawsuits are in the pipeline. They just haven't made it through to the end yet.
For small practices, this is the worst kind of legal environment: active risk with no clear rules.
Three parties holding the bag
When AI-generated medical advice goes wrong, liability doesn't land on one obvious party. It fragments across three.
Physicians. If a doctor accepts AI-generated output without independent clinical review and that output leads to patient harm, the physician can be held liable. The legal standard here isn't new. Doctors have always been responsible for the tools they use. What's new is that the tools are now generating recommendations that look authoritative, and the temptation to trust them without scrutiny is real.
Health systems and organizations. Hospitals and practices that deploy patient-facing AI tools take on institutional liability. If the tool wasn't properly vetted, if there's no human oversight protocol, or if the organization knew about failure modes and didn't address them, they're exposed. This is standard negligent deployment theory, applied to a new category of tool.
Developers. The companies building these systems face product liability claims for design defects. If a chatbot consistently gives dangerous advice in predictable scenarios, that's a design problem. And unlike pharmaceutical companies, most AI chatbot developers carry nothing equivalent to product liability insurance for medical contexts.
The insurance industry is already moving
Malpractice insurers don't wait for precedent. They price risk ahead of it.
AI-related malpractice claims rose 14% between 2022 and 2024. That's not a massive spike, but it's enough to trigger industry-wide policy changes. Insurers are now adding AI-specific exclusions to standard malpractice policies. Some require documented AI training as a condition of coverage. Others are writing riders that explicitly exclude liability from patient-facing AI tools that weren't pre-approved.
If you're a small practice and you deploy a chatbot on your website that gives a patient bad advice, your malpractice policy may not cover that interaction. Not because the insurer reviewed the chatbot and rejected it. Because the policy was rewritten six months ago to exclude the entire category, and nobody told you.
States aren't waiting either
Over 250 AI-related bills were introduced in 47 states during 2025. Not all of them passed. But the legislative direction is clear, and a few states are setting the pace.
Illinois (2025) now prohibits unlicensed AI systems from making independent therapeutic decisions. Violations carry penalties of up to $10,000 per incident, a number that multiplies with each affected patient rather than capping as a one-time fine.
California bans AI systems from representing themselves as providers of "professional mental health care." This matters for chatbots that handle sensitive patient conversations, which is exactly the territory fertility and reproductive health falls into.
Texas allows AI-assisted diagnostics but requires practitioner review before any AI recommendation reaches the patient. Penalties for non-compliance go up to $250,000.
These aren't hypothetical proposals. They're law. And the trend is toward more regulation, not less.
What this means for NaPro and RRM practices
Here's where this gets specific. If you run a NaProTechnology practice or a FertilityCare center and you're thinking about adding a chatbot to your website to handle patient questions, you should think carefully about what that chatbot might say.
Fertility care involves sensitive, high-stakes conversations. Patients ask about treatment options, medication protocols, cycle interpretation, pregnancy loss, and emotional wellbeing. A chatbot that's trained on general medical data doesn't understand the clinical framework of restorative reproductive medicine. It doesn't know the difference between NaPro surgical protocols and standard gynecological recommendations. It doesn't understand Creighton charting.
If that chatbot gives advice that contradicts your clinical approach, or worse, gives advice that harms a patient, who's liable? Your malpractice insurer may say it's not their problem. The chatbot developer's terms of service almost certainly disclaim medical liability. That leaves you.
This doesn't mean AI has no place in your practice's digital presence. It means there's a meaningful difference between AI that handles scheduling, answers insurance questions, or helps patients find the right page on your website, and AI that gives clinical guidance. The first category is low-risk. The second is uncharted territory.
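To make that split concrete, here's a minimal sketch of one way a practice might enforce it in software, assuming a hypothetical triageMessage pre-filter sitting in front of a website chatbot. The function name, keyword list, and canned reply are illustrative stand-ins, not a vetted clinical safeguard; a real deployment would need a classifier reviewed by clinical staff.

```typescript
// Minimal sketch (illustrative only): a pre-filter that keeps a website
// chatbot on the low-risk side of the line. Operational questions pass
// through to the AI; anything clinical-sounding is diverted to a human
// before any model generates a response.

// Red-flag substrings suggesting a clinical question. This list is a
// placeholder, not a clinical taxonomy.
const CLINICAL_TERMS: string[] = [
  "dosage", "medication", "symptom", "bleeding", "pregnan",
  "cycle", "chart", "diagnos", "treatment", "miscarriage",
];

interface TriageResult {
  allowAi: boolean; // false = bypass the model entirely
  reply?: string;   // canned response used when the model is bypassed
}

function triageMessage(message: string): TriageResult {
  const text = message.toLowerCase();

  // Deterministic rule: anything that looks clinical goes to a human,
  // every time. The model never sees the message.
  if (CLINICAL_TERMS.some((term) => text.includes(term))) {
    return {
      allowAi: false,
      reply:
        "That sounds like a question for our clinical team. " +
        "Please call the office or message us through the patient portal.",
    };
  }

  // Scheduling, insurance, directions, and similar questions may
  // proceed to the operational chatbot.
  return { allowAi: true };
}

// A medication question is diverted before any AI is involved:
console.log(triageMessage("Can I adjust my progesterone dosage?").allowAi); // false
// An operational question passes through:
console.log(triageMessage("What are your office hours?").allowAi);          // true
```

The point isn't the keyword list, which any real system would outgrow quickly. It's that the refusal path is deterministic code the practice controls and can document, rather than model behavior the practice hopes for.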
The absence of precedent isn't the absence of risk
It's tempting to look at the current legal landscape and think there's nothing to worry about because nobody's lost a lawsuit yet. That's backwards. The reason nobody's lost a lawsuit yet is that the lawsuits are still working through the system, the regulatory frameworks are still being written, and the insurance industry is still figuring out how to price the risk.
When clarity arrives, it won't arrive gently. It'll arrive through a headline-making case, a state attorney general action, or an insurance exclusion that leaves a practice uncovered after a patient complaint.
For RRM practitioners, the safest position right now is straightforward: use AI where it helps with operations and visibility. Keep it away from clinical conversations. And read your malpractice policy's AI provisions before you deploy anything patient-facing. That document might have changed more recently than you think.
Frequently asked questions
Can a doctor be sued for following AI-generated medical advice?
Yes. Physicians remain legally responsible for clinical decisions regardless of the tools used to inform them. If a doctor accepts AI output without independent review and it leads to patient harm, standard malpractice liability applies.
Does malpractice insurance cover AI chatbot interactions?
Increasingly, no. Many malpractice insurers are adding AI-specific exclusions or requiring documented AI training as a coverage condition. If your practice deploys a patient-facing chatbot, check your current policy language before assuming coverage.
Are there state laws regulating AI in healthcare?
Yes. Over 250 AI bills were introduced across 47 states in 2025. Illinois prohibits unlicensed AI from making independent therapeutic decisions ($10,000 per incident). California bans AI from representing itself as a provider of professional mental health care. Texas requires practitioner review of AI diagnostics (penalties up to $250,000).
Is it safe for a medical practice to use a website chatbot?
It depends on what the chatbot does. Chatbots that handle scheduling, FAQs, or navigation are low-risk. Chatbots that provide clinical guidance, interpret symptoms, or recommend treatments carry significant legal exposure, especially if your malpractice policy excludes AI interactions.