I’ve been writing about cardiology and medicine online since 2012 (docsopinion.com). More than 200 articles. All of them written long before AI was more than a science-fiction curiosity. When these tools finally arrived, of course I tried them. Who wouldn’t? They’re useful — in the same way a good editor is useful. They point out where a sentence wanders off, where a structure sags, or where I’ve repeated myself because I wrote the paragraph too late in the evening.
But there is a difference between using AI as a chisel and using it as the sculptor.
For a physician, that difference is an ethical one.
My own rule is simple:
If the idea comes from my clinical experience, my judgment, my patients, or the long arguments I’ve had with myself — then AI may help me clean the windows.
If the idea comes from AI — then it’s no longer my work.
Readers deserve to know which they’re getting. An MD after a name still carries an assumption of responsibility. That responsibility doesn’t outsource well.
The point about AI detectors is also important. I’ve run old articles of mine — written years before these models existed — through several detectors. Some were declared “70% AI.” If that is true, then I must have been astonishingly ahead of the curve. More likely, the detectors simply aren’t very good. They measure surface patterns, not authorship.
But the broader issue you raise is real: a growing amount of health-and-wellness writing now reads like it was extruded from the same machine, with the same tone, the same breathless certainty, the same lack of friction. Readers feel it, even if they can’t always name it.
So yes — thank you for bringing this into the open. Transparency matters. Not to shame anyone, but to protect the one thing that makes writing worth reading: a human mind wrestling with something that isn’t simple.
In medicine, that used to be the minimum standard. It should still be.
For clarity: I wrote this comment myself. I asked ChatGPT to help tidy a few sentences. The thinking is mine. The polishing was outsourced.
Really well said. It’s a very useful tool for copy editing and improving readability. Where the ideas come from and how the arguments are made is what's vital. I’ve also run things through detectors and been told they’re 46% AI-generated when they were written well before the advent of LLMs. Sometimes it’s just clear writing. Some of my articles have been edited by medical journals before being published. They come out sounding a little stilted, like AI. Anyway, I’m meandering, so clearly AI didn’t help me this time - thanks for your comment.
Really interesting. Love Dr. Annie Fenn, too! She does amazing work in the world and here on Substack!
Thank you for this post. It’s very helpful. I’m sad to hear about Dr. Marbas, as I’ve appreciated her, but I did wonder how she was able to publish so much content. AI is a useful tool, but I don’t want it to replace actual writing, and especially not medical advice.
Your article made me think of how many senior citizens like myself automatically assume everything written about health comes from a bona fide, well-documented professional, never stopping to think it might just be AI churning out pages of information produced for the masses. You gave me a timely wake-up call. Thank you.
Wow Paul! Who knew?? I love when you said this: “My personal antennae goes up when I observe practicing physicians posting lengthy, detailed articles daily or several times a week, when most doctors I know struggle just to see their patients and record their clinical notes.” Exactly! I see some of my family law colleagues frequently posting on social media and I wonder how they have time to craft and email weekly newsletters when I am just trying to meet deadlines!
Lauren, and now you know: it's AI. Maybe. It's only going to get worse, I'm afraid.
Great article. Your discussion highlighted for me the importance of expressing my clinical experience in my own words. I never thought about it from the perspective of trust, but it's a valid argument. In the end, the most important goal is to communicate something of value to the patient so that they can improve their health. There are many ways of doing that, but until we have an established social custom, it's necessary to be transparent.
Dr. Ashori, I think that's right. More transparency is better. That's design thinking: building your product to make it work best for the customer. Thank you for speaking with me for this article.
Paul… great investigative piece which exposes a larger societal problem… medical paternalism. We see MD after someone’s name and we automatically defer to his or her opinion. “Trust me… I’m a Doctor.” And it’s not just doctors but pretty much any highly educated professional. If everyone reading this would take a moment and reflect on the professionals they’ve encountered in their careers, how many of them did you feel were really good at their job? I’ll bet it’s a really small number or percentage. Unfortunately, the advent of AI is only going to expose us to more so-called experts. Bottom line: we need to verify everything proffered on the Web.
Mark, I'm glad it made the point clearly. This was difficult to write, for several reasons. You want to expose the problem, and create some accountability. But you also want to be fair to those whose work you're publicly reviewing, because no AI text tool is 100% accurate. So I tried to work with at least a preponderance of evidence and suggest what it may tell us. I hope people read and respond to it with their thoughts. Thanks for being such a close reader yourself.
Paul… you’re always a pleasure to read. You post some very thought-provoking issues that are seldom black and white. This one deftly made me stop and think about the subject matter. You put the information out there and allow us to draw our own conclusions. Bravo.
Mark
Paul, thanks for writing this and for quoting me. It is very well written, with the clarity I'd expect from an expert writer. I am going to post something on this topic from a physician's standpoint in the next day or two - thanks for inspiring me to do that. And thanks for introducing the idea of "longevity literacy." It's an important idea that has legs - long ones. I also want to introduce the ideas that (i) AI detectors are not accurate, and (ii) people are already changing their writing style to not sound like AI (I do it - I no longer use em dashes, and I limit the antithesis pattern). I suspect GPTZero may not be very accurate. I fed it a paper of mine from pre-GPT days and another that I used GPT only to copyedit, and both came back 46% AI :) - so I am skeptical of that one. But when I asked ChatGPT 5.1 and Gemini if the articles were AI, they said "absolutely not" for one and "minor use" for the other. The Substack you featured about waking up at 2 AM has many AI fingerprints. ChatGPT said: "This piece has a very high likelihood of being AI-assisted or AI-generated. The odds are well above 80 percent based on structure, tone, linguistic fingerprints, and the style of synthetic ‘clinical coaching’ prose that is now common in AI-augmented newsletters. It could be a lightly edited draft written by a human using an AI outline, but it does not read like naturally produced expert writing." And the ones you highlighted by Dr. Luks and Dr. Kelly - ChatGPT (FWIW) said 100% AI. Ugh - sorry I wrote so much.
I so agree, lol. I may not pump out a ton of posts but the words are all mine (some of the pictures, maybe not), and they are based on research. Thanks for keeping it real, Paul!
L.G., quality over quantity, for sure. And human-written over AI-prompted, most definitely. Thanks for posting your thoughts.
Solid investigative work. The longevity literacy framing is spot-on: when readers can't tell if a doctor struggled with an idea or just ran it through ChatGPT, the whole trust model breaks. The Esther Perel deepfakes are wild, but the more insidious problem is the boilerplate wellness advice that passes AI-detection tools and still gets engagement.
I would hope that, just as with the (bottomless) dangers of unchecked social media, AI boilerplate masquerading as thoughtful wellness guidance will come to be seen as a cheap but dangerous shortcut, and patients and readers will ask for better. For now, I think many readers aren't recognizing the boilerplate for what it is.
Excellent article, thanks
Thank you!
Peg, I'm glad this is useful/helpful.