Here's a number worth sitting with: 32% of U.S. adults have now used an AI chatbot for health information. That's one in three. Not tech enthusiasts. Not teenagers. One in three adults asking ChatGPT, Gemini, or Perplexity about their health.
The top reason? Speed: 65% said getting quick, immediate answers was a major factor. Another 36% said they felt more comfortable asking a chatbot privately than bringing the question to a provider.
For fertility questions -- the deeply personal, often emotionally charged kind -- that privacy factor matters. People are asking chatbots about ovulation, about treatment options, about why they haven't conceived. And the chatbots answer confidently every single time.
What the chatbots actually get right
Let's be fair. AI isn't getting everything wrong.
A 2024 study in Human Reproduction found that ChatGPT scored about 80% accuracy on common fertility questions. Five of ten responses were rated high quality by independent fertility specialists, and four were rated medium quality. On standard topics like the menstrual cycle, conception basics, and common risk factors, the information was generally solid.
A more recent 2025 study in Human Fertility called "Ctrl + Alt + Conceive" compared four platforms -- ChatGPT, Gemini, Copilot, and Perplexity -- across 37 fertility prompts. On menstrual cycle and conception topics, the platforms mostly agreed with each other and with established medical guidance.
So the basics? Covered reasonably well.
Where things fall apart
The same studies found that accuracy drops when questions get specific. The "Ctrl + Alt + Conceive" researchers noted that content on assisted reproductive technologies was the least accurate category across all four platforms. And that's for mainstream ART -- the topic with the most training data.
Now think about what happens when a patient asks about NaProTechnology. Or restorative reproductive medicine. Or fertility awareness-based methods.
The chatbots don't say nothing. That would almost be better. Instead, they give a surface-level answer that typically sounds like this: "NaProTechnology is a Catholic-based approach to fertility that uses natural family planning methods as an alternative to IVF."
That's not wrong in the way a hallucination is wrong. It's wrong in the way a Wikipedia stub is wrong. It reduces a comprehensive medical protocol -- one that involves hormonal evaluation, targeted surgery, cycle-based diagnostics, and individualized pharmacological treatment -- to a religious label and a two-sentence summary. The Creighton Model charting system, the NaPro surgical approach, the published outcomes data? Absent.
The chatbot gives the answer that matches what's most available online. And right now, the loudest online voices on fertility belong to IVF clinics with massive content budgets.
The misinformation problem is bigger than AI
This isn't just a chatbot issue. A 2025 narrative review in the Journal of General Internal Medicine identified 112 misleading claims about women's reproductive health circulating online. One-third of the misleading content attributed unsubstantiated risks to evidence-based interventions. Another 23% made medical recommendations that don't align with professional guidelines.
AI chatbots learn from this environment. They're trained on web content, and they reflect whatever that content contains -- gaps, biases, and all. When the available online information about restorative approaches is thin, the AI output about restorative approaches is thin.
That's not a conspiracy. It's a content problem.
Why this matters for your practice
Here's the thing practitioners need to understand: patients trust their doctors more than they trust AI. The 2025 Philips Future Health Index found that while 79% of healthcare professionals are optimistic about AI improving outcomes, only 59% of patients share that optimism. More than half of patients worry about losing the human element in care.
Your patients aren't replacing you with ChatGPT. But they are arriving at your office with information from ChatGPT already in their heads. And if that information frames NaProTechnology as a niche religious alternative rather than a medical discipline with clinical evidence, you're starting every new patient conversation with a correction instead of a foundation.
The content you publish shapes the answer they get
AI chatbots don't invent their knowledge of NaPro, RRM, or fertility awareness-based methods. They assemble it from whatever exists online. Published outcomes data, structured clinical content, properly marked-up practitioner pages, educational resources with clear evidence citations -- this is what AI pulls from.
When a FertilityCare center has a website with detailed, well-structured content about how the Creighton Model works, what a NaPro evaluation involves, and what the published success rates show, that content becomes available for AI to reference. When that content doesn't exist online, the chatbot fills the gap with whatever is available. Usually a surface-level description from a general health site.
The most practical thing an RRM practitioner can do about AI chatbot accuracy isn't to fight the technology. It's to feed it better source material.
That means publishing clinical content. Structuring it with proper schema markup so AI systems can parse it. Including citations to published research. Making sure your practice's website says more about what you do than a two-sentence summary on a directory listing.
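As a concrete illustration, schema markup is typically embedded as a JSON-LD block in a page's HTML. The sketch below uses the schema.org MedicalClinic and MedicalProcedure types; the clinic name, URL, and descriptions are hypothetical placeholders, not a prescribed template:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalClinic",
  "name": "Example FertilityCare Center",
  "url": "https://example-fertilitycare.org",
  "medicalSpecialty": "Gynecologic",
  "description": "Restorative reproductive medicine practice offering NaProTechnology evaluation and Creighton Model charting instruction.",
  "availableService": {
    "@type": "MedicalProcedure",
    "name": "NaProTechnology evaluation",
    "description": "Cycle-based hormonal evaluation and targeted diagnostic workup, grounded in published outcomes research."
  }
}
```

Markup like this doesn't change what patients see on the page; it gives AI systems and search engines a machine-readable description of what the practice actually does, rather than leaving them to infer it from prose.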
The chatbots are going to answer the question whether your content is there or not. The only variable is what content they have to work with.
Frequently asked questions
How accurate are AI chatbots on fertility questions?
Studies show around 80% accuracy when ChatGPT answers common fertility questions using its training data. But accuracy varies by topic. Questions about mainstream IVF protocols score higher than questions about fertility awareness methods, NaProTechnology, or restorative approaches where training data is thinner.
What do AI chatbots say about NaProTechnology?
Most chatbots describe NaProTechnology in one or two sentences, usually framing it as a faith-based or Catholic alternative to IVF. They rarely mention the clinical protocol, the diagnostic workup, or the published outcomes data. The responses reflect whatever content is most prominent online, which is currently dominated by mainstream fertility clinic websites.
Can practitioners influence what AI chatbots tell patients?
Yes. AI chatbots build responses from publicly available web content. Practices and organizations that publish structured, evidence-based clinical content with proper schema markup are more likely to be cited by AI systems. The content that exists online shapes the answers patients receive.
Should I be worried about patients getting fertility advice from AI?
Concern is reasonable, but panic is not productive. Patients are already using these tools -- 32% of U.S. adults according to a 2026 KFF poll. The practical response is ensuring accurate, well-structured content about your approach exists online so chatbots have quality source material to draw from.