28 Comments
Axel F Sigurdsson MD, PhD

I’ve been writing about cardiology and medicine online since 2012 (docsopinion.com). More than 200 articles. All of them written long before AI was more than a science-fiction curiosity. When these tools finally arrived, of course I tried them. Who wouldn’t? They’re useful — in the same way a good editor is useful. They point out where a sentence wanders off, where a structure sags, or where I’ve repeated myself because I wrote the paragraph too late in the evening.

But there is a difference between using AI as a chisel and using it as the sculptor.

For a physician, that difference is an ethical one.

My own rule is simple:

If the idea comes from my clinical experience, my judgment, my patients, or the long arguments I’ve had with myself — then AI may help me clean the windows.

If the idea comes from AI — then it’s no longer my work.

Readers deserve to know which they’re getting. An MD after a name still carries an assumption of responsibility. That responsibility doesn’t outsource well.

The point about AI detectors is also important. I’ve run old articles of mine — written years before these models existed — through several detectors. Some were declared “70% AI.” If that is true, then I must have been astonishingly ahead of the curve. More likely, the detectors simply aren’t very good. They measure surface patterns, not authorship.

But the broader issue you raise is real: a growing amount of health-and-wellness writing now reads like it was extruded from the same machine, with the same tone, the same breathless certainty, the same lack of friction. Readers feel it, even if they can’t always name it.

So yes — thank you for bringing this into the open. Transparency matters. Not to shame anyone, but to protect the one thing that makes writing worth reading: a human mind wrestling with something that isn’t simple.

In medicine, that used to be the minimum standard. It should still be.

For clarity: I wrote this comment myself. I asked ChatGPT to help tidy a few sentences. The thinking is mine. The polishing was outsourced.

James H. Stein, MD

Really well said. It’s a very useful tool for copy editing and improving readability. Where the ideas come from and how the arguments are made is vital. I’ve also run things through detectors and been told they were 46% AI-generated when they were written well before the advent of LLMs. Sometimes it’s just clear writing. Some of my articles have been edited by medical journals before being published; they come out sounding a little stilted, like AI. Anyway, I’m meandering, so clearly AI didn’t help me this time. Thanks for your comment.

Good Medicine

Really interesting. Love Dr. Annie Fenn, too! She does amazing work in the world and here on Substack!

Annie Fenn, MD

Hey Chere! Good to see you here on Paul's Substack! Thank you for the kind comment. I appreciate you!!

Good Medicine

Dr. Annie

🤗💕🧠

Jenny Arnez

Thank you for this post. It’s very helpful. I’m sad to hear about Dr. Marbas, as I’ve appreciated her, but I did wonder how she was able to publish so much content. AI is a useful tool, but I don’t want it to replace actual writing, and especially not medical advice.

Renee Feltes

Your article made me think of how many senior citizens like me automatically assume that everything written about health comes from a bona fide, well-credentialed professional, never stopping to think it might just be AI churning out pages of information produced for the masses. You gave me a timely wake-up call. Thank you.

Lauren Petkin

Wow, Paul! Who knew? I love when you said this: “My personal antennae goes up when I observe practicing physicians posting lengthy, detailed articles daily or several times a week, when most doctors I know struggle just to see their patients and record their clinical notes.” Exactly! I see some of my family law colleagues frequently posting on social media, and I wonder how they have time to craft and email weekly newsletters when I am just trying to meet deadlines!

Paul von Zielbauer

Lauren, and now you know: it’s AI. Maybe. It’s only going to get worse, I’m afraid.

Dr. Ashori MD

Great article. Your discussion highlighted for me the importance of expressing my clinical experience in my own words. I never thought about it from the perspective of trust, but it’s a valid argument. In the end, the most important goal is to communicate something of value to patients so that they can improve their health. There are many ways of doing that, but until we have an established social custom, it’s necessary to be transparent.

Paul von Zielbauer

Dr. Ashori, I think that’s right. More transparency is better. That’s design thinking: building your product to work best for the customer. Thank you for speaking with me for this article.

MARK ABRAHAMSON

Paul, great investigative piece, which exposes a larger societal problem: medical paternalism. We see MD after someone’s name and we automatically defer to his or her opinion. “Trust me, I’m a doctor.” And it’s not just doctors but pretty much any highly educated professional. If everyone reading this took a moment to reflect on the professionals they’ve encountered in their careers, how many would they say were really good at their jobs? I’ll bet it’s a small number. Unfortunately, the advent of AI is only going to expose us to more so-called experts. Bottom line: we need to verify everything proffered on the web.

Paul von Zielbauer

Mark, I'm glad it made the point clearly. This was difficult to write, for several reasons. You want to expose the problem and create some accountability. But you also want to be fair to those whose work you're publicly reviewing, because no AI-detection tool is 100% accurate. So I tried to work from at least a preponderance of evidence and suggest what it may tell us. I hope people read and respond to it with their thoughts. Thanks for being such a close reader yourself.

MARK ABRAHAMSON

Paul, you’re always a pleasure to read. You post about very thought-provoking issues that are seldom black and white, and this one deftly made me stop and think about the subject matter. You put the information out there and allow us to draw our own conclusions. Bravo.

Mark

James H. Stein, MD

Paul, thanks for writing this and for quoting me. It is very well written, with the clarity I'd expect from an expert writer. I am going to post something on this topic from a physician's standpoint in the next day or two; thanks for inspiring me to do that. And thanks for introducing the idea of "longevity literacy." It's an important idea that has legs, long ones. I also want to introduce the ideas that (i) AI detectors are not accurate and (ii) people are already changing their writing style to not sound like AI (I do it; I no longer use em-dashes and I limit the antithesis pattern). I suspect GPTZero may not be very accurate: I fed it a paper of mine from before GPT existed and one I used GPT only to copyedit, and both came back 46% AI :) so I am skeptical of that one. But when I asked ChatGPT 5.1 and Gemini if the articles were AI, they said "absolutely not" for one and "minor use" for the other. The Substack you featured about waking up at 2 AM has many AI fingerprints. ChatGPT said: "This piece has a very high likelihood of being AI-assisted or AI-generated. The odds are well above 80 percent based on structure, tone, linguistic fingerprints, and the style of synthetic 'clinical coaching' prose that is now common in AI-augmented newsletters. It could be a lightly edited draft written by a human using an AI outline, but it does not read like naturally produced expert writing." And the ones you highlighted by Dr. Luks and Dr. Kelly? ChatGPT (FWIW) said 100% AI. Ugh, sorry I wrote so much.

L.G. O'Connor

I so agree, lol. I may not pump out a ton of posts, but the words are all mine (some of the pictures, maybe not), and they are based on research. Thanks for keeping it real, Paul!

Paul von Zielbauer

L.G., quality over quantity, for sure. And human-written over AI-prompted, most definitely. Thanks for posting your thoughts.

Mark Caley

AI writing needs to be banned from Substack. In particular, when it comes to medical information that people will inevitably make decisions on or act on, there is no place for AI. In the medical field we need truth, honesty, nuance, research, clear thought, and disclosed opinion on the subject. I don’t care if the doctor is a poet. I want information that helps me make critical decisions. Ban AI, Substack!

Your Nextdoor PCP

This is an important and frankly overdue conversation! As clinicians, we trade on a very specific kind of trust: when readers see “MD,” they reasonably assume the piece reflects lived clinical judgment, careful sourcing, and accountability, not just fluent prose. I appreciate that you’re framing this as transparency rather than “gotcha.” AI can absolutely be a legitimate tool (editing for clarity, outlining, literature triage), but there’s a bright ethical line between assistance and outsourcing the core thinking. If substantive content is generated by an LLM, readers deserve to know, just as they deserve to know about conflicts of interest or sponsorship.

Two nuance points feel especially relevant:

1. Detectors aren’t arbiters of truth. Tools like GPTZero can be suggestive, but they’re imperfect; the standard should be author disclosure, not probabilistic policing.

2. Transparency protects everyone. It helps readers calibrate confidence, and it protects good-faith physicians who use AI responsibly (e.g., for copyediting) from being lumped in with content farms.

A simple norm would go a long way: a one-line disclosure at the end (“AI used for copyediting/outline only” vs “AI-generated draft with physician review”), plus citations for key claims. That’s not anti-AI; it’s pro-integrity. In a health ecosystem already saturated with misinformation, clarity about who is speaking and how the message was made is part of patient safety.

Paul von Zielbauer

Agreed on pretty much all counts, especially the idea that, as you put it, transparency protects everyone.

Annie Fenn, MD

Hi Paul, thanks for writing this and for including my comments on AI. I agree that MDs writing about science should be held to a high standard. And I am always game for bragging about my readers! I am super proud when we get to the end of a mini-series on a complex topic and they show how much they've learned through their questions and comments.

Your topic gets at an important concept in brain health: building cognitive reserve. These are all the neural pathways we cultivate as lifelong learners that help make the brain resilient to dementia. I don't know if this has been studied, but I suspect reading AI-generated, watered-down content does nothing to build cognitive reserve. It's like the junk food of the Substack world.

Neural Foundry

Solid investigative work. The longevity literacy framing is spot-on: when readers can't tell whether a doctor struggled with an idea or just ran it through ChatGPT, the whole trust model breaks. The Esther Perel deepfakes are wild, but the more insidious problem is the boilerplate wellness advice that passes AI-detection tools and still gets engagement.

Paul von Zielbauer

I would hope that, just as with the (bottomless) dangers of unchecked social media, AI boilerplate masquerading as thoughtful wellness guidance will come to be seen as a cheap but dangerous shortcut, and that patients and readers will ask for better. For now, I think many readers aren't recognizing the boilerplate for what it is.

Brian Foley

Excellent article, thanks

Peg

Thank you!

Paul von Zielbauer

Peg, I'm glad this is useful/helpful.

Paul von Zielbauer

You should try a little harder than that, don’t you think?