How ChatGPT selects its sources
ChatGPT operates in two modes. In its default mode, it draws on knowledge baked into the model during training -- a snapshot of the web frozen at the model's training cutoff, filtered for quality. In browse mode, it retrieves live web results, but the retrieval isn't a simple keyword search. It queries Bing, evaluates the results, and selects which sources to cite based on a hierarchy of trust signals.
Source authority ranks highest. Institutional domains -- academic medical centers, professional associations, peer-reviewed journals -- sit at the top of the trust hierarchy. Below them are established health publishers. Below those are individual practice websites. This isn't a conspiracy against small practices. It's a reflection of how the model was trained to assess reliability in health contexts, where misinformation carries real consequences.
Recency matters too. When ChatGPT retrieves sources in browse mode, it weights recently updated content more heavily. A page from 2023 competes at a disadvantage against a page updated last month, even if both say exactly the same thing. And semantic matching determines relevance -- the model is reading for meaning, not scanning for keywords. A page that clearly and specifically addresses the patient's question gets priority over one that mentions the right terms but buries the answer.
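To make "reading for meaning" concrete, here is a minimal sketch of semantic matching using the open-source sentence-transformers library. ChatGPT's retrieval stack is proprietary, so the embedding model and scoring below are illustrative assumptions, not what OpenAI actually runs -- but the principle is the same: pages are scored by how closely their meaning matches the question, not by keyword overlap.

```python
# Illustrative only: ChatGPT's retrieval stack is proprietary. This sketch
# shows the general idea of semantic matching -- scoring pages by meaning
# rather than keyword overlap -- using the sentence-transformers library.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

query = "Can endometriosis be treated without a hysterectomy?"
pages = [
    # Mentions the right keyword but buries the answer in marketing copy.
    "Our practice offers compassionate, personalized endometriosis care.",
    # Directly and specifically answers the question.
    "Laparoscopic excision surgery can treat endometriosis while preserving "
    "the uterus; hysterectomy is reserved for select cases.",
]

query_vec = model.encode(query, convert_to_tensor=True)
page_vecs = model.encode(pages, convert_to_tensor=True)

# Cosine similarity in embedding space: the specific, on-point page scores
# higher even though both pages contain the word "endometriosis".
for page, score in zip(pages, util.cos_sim(query_vec, page_vecs)[0]):
    print(f"{score.item():.2f}  {page[:60]}")
```

The same scoring dynamic explains why a page that buries its answer loses to one that states it plainly, even when both use the right terminology.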
What makes a source citable
AI systems can't call your office, verify your board certification, or assess your clinical outcomes. They can only evaluate what's on the page in front of them. The sources that get cited share a few characteristics.
Structured content with clear headings. When ChatGPT is assembling an answer, it's extracting specific claims from specific sections. A page with well-organized headings and focused paragraphs gives the model clean material to work with. A wall of text about your practice philosophy doesn't.
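Why headings matter becomes clearer if you picture the extraction step. Here is a minimal sketch, assuming a markdown-style page and a naive splitter -- an illustration of the general technique, not any vendor's actual pipeline:

```python
# A toy illustration of why headings matter: retrieval pipelines typically
# split pages into heading-anchored chunks before extracting claims. The
# splitting logic below is a simplification, not any vendor's actual code.
import re

def chunk_by_headings(markdown_page: str) -> dict[str, str]:
    """Split a markdown page into {heading: body} sections."""
    sections: dict[str, str] = {}
    current = "untitled"
    for line in markdown_page.splitlines():
        match = re.match(r"#+\s+(.*)", line)
        if match:
            current = match.group(1).strip()
            sections[current] = ""
        else:
            sections[current] = sections.get(current, "") + line + "\n"
    return sections

page = """## What is NaProTechnology?
A restorative approach that identifies underlying causes of infertility.

## Who is a candidate?
Patients with endometriosis, PCOS, or unexplained infertility.
"""

for heading, body in chunk_by_headings(page).items():
    print(f"[{heading}] -> {body.strip()[:60]}")
```

A page organized this way hands the model a labeled section for each question; a wall of text forces it to guess where one claim ends and the next begins.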
Identifiable authorship. A page that attributes its clinical content to a named, credentialed physician carries more weight than one with no author at all. This mirrors how Google's helpful content system already works -- but AI systems lean on it even harder because they're synthesizing answers, not just ranking links. They need to justify their sources.
Factual density. Pages that make specific, verifiable claims -- with conditions named, outcomes cited, and methods described -- give AI systems something concrete to reference. General marketing language ("We offer compassionate, personalized care") isn't wrong, but it's not citable. There's nothing for the model to extract and attribute.
Where practitioners appear and where they do not
Ask ChatGPT about endometriosis treatment options and you'll likely see references to Mayo Clinic, Cleveland Clinic, ACOG, and peer-reviewed studies. Ask it to name a specific NaProTechnology practitioner in your city and the answer gets much thinner.
This isn't because individual practitioners are excluded by design. It's because the citation graph is built from the sources that publish the most structured, frequently updated, widely referenced content. Large institutions do that at scale. A solo practitioner's five-page website, even if clinically excellent, generates very few of the signals that place a source inside that graph.
Directory profiles help close this gap. When a practitioner appears in a well-structured directory -- one with schema markup, consistent NAP data (name, address, phone), and specialty classifications -- that directory page becomes a proxy for the practitioner's identity in the citation graph. The practitioner may not be cited directly, but the directory that lists them can be. This is why strong profiles on Healthgrades, Doximity, and specialty-specific directories carry weight in AI visibility that they didn't carry in traditional SEO.
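What "schema markup" looks like in practice: a well-built directory page embeds a JSON-LD block describing the practitioner. Here is a minimal sketch -- schema.org's Physician type and its properties are real vocabulary, but every name, address, and URL below is hypothetical:

```python
# A sketch of the schema.org Physician markup a well-structured directory
# page might embed as JSON-LD. All names, addresses, and URLs here are
# hypothetical; schema.org/Physician is the real vocabulary type.
import json

profile = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Jane Smith, MD",                       # NAP: name
    "medicalSpecialty": "Obstetrics and Gynecology",
    "address": {                                    # NAP: address
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "telephone": "+1-555-0100",                     # NAP: phone
    "url": "https://example-directory.com/jane-smith-md",
}

# Embedded in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(profile, indent=2))
```

When the same name, specialty, and contact details appear in this machine-readable form across several independent directories, the identity signals corroborate each other.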
What this means for a small practice
The path into the citation graph isn't advertising spend. Paid ads don't influence what ChatGPT retrieves. The path runs through the same things patients already value: expertise, clarity, and a track record of publishing useful information.
Concretely, this means a practice that publishes even one well-structured clinical page per quarter -- with a named author, specific conditions addressed, and current references cited -- is building citation graph equity that a practice relying solely on a brochure site isn't. It means keeping directory profiles complete and consistent. It means writing in a way that answers the questions patients are actually asking AI systems, not in a way that sounds impressive to colleagues.
None of this requires a large budget or a marketing team. It requires the same discipline that good clinical documentation requires: be specific, be current, be attributable. The citation graph isn't a mystery. It rewards the same qualities that make a practitioner trustworthy in any context -- expertise, clarity, and credibility. The difference is that now a machine is reading for those qualities too, and it can only find what's written down.
Frequently asked questions
How does ChatGPT decide which practitioners to mention in a response?
ChatGPT weighs source trust, factual density, and the consistency of practitioner identity signals across multiple sources. A practitioner mentioned in medical directories, referenced in credible health content, and identified with credentials in structured page headings accumulates the citation signals that make a mention likely in relevant queries.
What is browse mode and how does it affect practitioner visibility?
When ChatGPT uses browse mode, it retrieves live web pages at query time and can cite current content. Pages with clear, structured headings, an identifiable named author with credentials, and specific, verifiable claims are more likely to be quoted. For NaProTechnology and RRM practitioners, this means well-structured service and about pages carry real weight.
Why do directory profiles matter for ChatGPT visibility?
Directory profiles on sites like Healthgrades, Doximity, and specialty directories act as corroborating identity signals. When ChatGPT sees a practitioner's name, specialty, and location appear consistently across multiple independent sources, it assigns higher confidence that the practitioner is a real, credentialed expert worth mentioning.
Does having more content mean ChatGPT will mention me more?
Volume alone is not the driver. Factual density matters more than length. Content that states verifiable claims -- credentials, training, clinical approach, outcomes -- is more citable than content that is descriptive or promotional. One well-structured page with named authorship outperforms ten pages of general wellness writing.
What is the single most effective change a practitioner can make to improve AI citation rates?
Ensure every page of clinical content carries an identifiable byline with credentials. A name and degree visible to a crawler is the minimum signal for authorship attribution. Combined with structured headings and at least one directory profile, this creates the citation baseline that AI search systems use to surface NaPro and RRM practitioners in relevant responses.
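For the technically inclined, a machine-readable byline usually means page-level author markup alongside the visible name. A minimal sketch with hypothetical details -- the author and honorificSuffix properties are real schema.org vocabulary; the person and dates are made up:

```python
# A sketch of page-level author markup that makes a byline machine-readable.
# The name, credentials, and date are hypothetical; "author", "dateModified",
# and "honorificSuffix" are real schema.org properties.
import json

page_meta = {
    "@context": "https://schema.org",
    "@type": "MedicalWebPage",
    "headline": "Endometriosis treatment options",
    "dateModified": "2025-01-15",          # recency signal crawlers can read
    "author": {
        "@type": "Person",
        "name": "Jane Smith",
        "honorificSuffix": "MD",           # the degree, visible to a crawler
        "jobTitle": "Obstetrician-Gynecologist",
    },
}

print(json.dumps(page_meta, indent=2))
```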