Substack's “wellness” doctors & AI transparency
Longevity literacy requires knowing when you're likely being fed machine-made content.

Substack has many problems these days. One that should concern us all is the growing number of medical doctors — and those who merely call themselves doctors — publishing health and wellness content that reads like AI-generated boilerplate.
I’ve been contemplating whether to write this post for a while now. Will anyone care? After all, AI is everywhere, and even Substackers who shamelessly copy/paste ChatGPT twaddle below their bylines still manage to accumulate dozens of likes and comments. Maybe readers don’t mind being spoon-fed machine-made thoughts.
I asked several doctors on Substack if their posts are AI-generated, in whole or in part. Not all of them responded. One isn’t a doctor at all.
But after speaking with a few doctors on Substack who don’t use AI to write their posts, I realized I wasn’t alone in my growing dismay at the preponderance of rote, AI-derived health & wellness information on Substack. One doctor who has a substantial Substack following sent me a note that concurred.
“The funny/sad thing is that people seem to like it,” this doctor told me. “Mindless to write, mindless to read. I feel like it steals potential intellectual capital from over-50 readers. And longevity literacy has got to be impaired, too.”
Agreed. So I decided it was time to investigate how AI is now infiltrating what is starting to feel like the Substack health and wellness industrial complex. What I found, within and beyond Substack, was revealing and dismaying, compelling and crude — and probably just the tip of the AI iceberg that could sink all our ships, if we’re not careful.
Longevity literacy in the age of AI
Before I dive into the Substack doctors whose work I examined, let’s first take a step back to view the extent to which AI, in the hands of non-licensed, unscrupulous humans, is gaslighting us every day. (Last year, a Substack writer tackled the rising use of AI on the platform in a post that, according to a leading AI text analyzer, appears to be substantially AI generated.)
Esther Perel deepfakes posing as therapeutic advice
The three YouTube videos below are a prime example of why all of us need to develop what I call longevity literacy — without which we’re susceptible to believing, taking advice from or giving money to some of the worst people on the internet.
Each of these videos is from a different YouTube channel and claims to depict Esther Perel, the renowned psychotherapist, giving advice about a ridiculous subject. They each sound very much like Perel and, in the case of the first video below, look like her. But each is a deepfake — entirely AI generated.
The first video below may be the creepiest of the three.
For about the first minute, you wonder why Perel would speak so condescendingly about how sexually needy women over 60 respond to men. Then comes the creepy part: At the 00:55 mark, AI Esther Perel pauses and announces: “Hi, It’s Vernon again….” and asks you to like and subscribe to his channel.
The two videos below are deepfakes of Perel’s voice overlaid with her likeness, using almost the exact same transcript as the video above. The voices are different. Yet, many people will hear this garbage on YouTube and believe it’s actually Perel dispensing her own advice.
And so you wonder: If YouTube can’t or won’t fight off its own egregious, almost certainly defamatory AI enshittification, is resistance to global AI enshittification futile? If so, who cares if doctors on Substack publish information that may come straight from, or be filtered through, AI?
Why we should hold doctors to a better-than-AI standard
We should all care, I think, because doctors — including those on Substack — aren’t just content machines who can ethically trade the trust readers give them for a warehouse of AI-derived insight and instruction. There’s nothing transparent about that.
“When a physician publishes health explanations, the MD (or equivalent) degree after their name signals training, experience, judgment and accountability,”
a cardiologist told me in response to my questions about doctors publishing content that appeared to be AI generated. “Readers reasonably assume that the analysis reflects the clinician’s actual expertise and that the author takes responsibility for the accuracy of the claims. When the substance is outsourced to AI, that trust is misplaced. It misleads the public and, in my opinion, crosses an ethical boundary.”

Running Substack docs’ content through an AI text analyzer
With that in mind, I spent this week reading through several Substacks published by six health & wellness professionals — five medical doctors and one who calls herself “doctor” — to get a sense of whether, and how much, they routinely use AI in their posts.
I ran selections of their Substack-published work through a leading AI text analyzer, GPTZero, which claims to be able to discern AI prose from human writing with high confidence, and which Wired magazine has previously used to analyze other Substack content for AI influence. I then reached out to each person, to ask if and how much they use AI in their Substacks.
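For readers curious to replicate this kind of check, GPTZero offers a public REST API alongside its web interface. The sketch below is a minimal, hypothetical example of how one might query it from Python; the endpoint URL, `x-api-key` header, and the `completely_generated_prob` response field are assumptions based on GPTZero's published v2 API and may differ in current versions.

```python
import json
import urllib.request

# Assumed GPTZero v2 endpoint; check current API docs before relying on it.
GPTZERO_URL = "https://api.gptzero.me/v2/predict/text"

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request the GPTZero text-prediction endpoint expects."""
    payload = json.dumps({"document": text}).encode("utf-8")
    return urllib.request.Request(
        GPTZERO_URL,
        data=payload,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def ai_probability(response_body: dict) -> float:
    """Extract the document-level AI probability from a parsed GPTZero
    response. The 'completely_generated_prob' field name is an assumption
    from the public docs and may change between API versions."""
    return response_body["documents"][0]["completely_generated_prob"]
```

Sending the request with `urllib.request.urlopen(build_request(post_text, key))` and passing the parsed JSON to `ai_probability` would yield a 0-to-1 score, roughly comparable to the percentages cited below.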
The results of that reporting are below. Two of the six people, Howard Luks MD and Dr. Mohammad Ashori MD, responded at length to my questions; the four others did not. One doctor, Laurie Marbas MD, MBA, blocked me.
It’s important to say clearly that this exercise is one of transparency, not playing gotcha. I wish my reporting had turned up no evidence of AI at all. Since it did, I hope this post leads to a greater awareness among readers and, indeed, greater transparency among MDs who regularly lean on AI to generate health & wellness advice.
In no particular order:
Laurie Marbas MD, MBA, who publishes The Habit Healers on Substack
Dr. Marbas is notable for her large following — more than 21,000 subscribers — and for being a prolific producer of Substack content, posting lengthy, encouraging and prescriptive articles almost daily.
GPTZero indicates many posts are mostly or entirely AI generated, including this one on how cold water impacts health (here’s the GPTZero report on that post); and this one on movements that are better than workouts (here’s its GPTZero report). And several others.
The AI tool also indicated that the introduction of Dr. Marbas’s book, “Plant-based 101,” published in January, was likely AI generated.
Dr. Marbas did not respond to several requests for comment.
Dr. Luu, a cardiac surgeon who publishes Longevity Docs on Substack
Dr. Luu is an entrepreneur cultivating Longevity Docs’ physician-only audience and who hosts large events in several cities.
GPTZero indicates many posts, which tend to be extremely long, detailed and lavishly designed with customized photos and graphics, are mostly or partly AI generated, including this one on the “Biomarker gold rush” (GPTZero report); this one on electronic medical records, peptides and other topics (GPTZero report); and this one on a “longevity clinical trial network” (GPTZero report).
Dr. Luu did not respond to a request for comment.
Dr. Kelly, a cardiologist who publishes eponymously on Substack
Dr. Kelly publishes a mix of first-person posts, including this recent one about an 80-year-old patient’s extraordinary VO2 max results, which the eye test and the online AI tool each indicate is human written; and another on why people die from heart disease, which suffers from tell-tale staccato AI tropes such as:
“You should not die of cardiovascular disease before old age.
Not anymore.
Not with what we know.
Not with the tools we have.”
(GPTZero rated the post as 40% AI generated.)
And another, more personal story of “when fear becomes physical” that unfolds with similarly common AI phrases and cadence (GPTZero rates it as mostly AI).
Dr. Kelly didn’t reply to requests for comment.
Howard Luks MD, an orthopedic surgeon who writes Built to Move, Born to Heal: Notes on Midlife Fitness on Substack
Dr. Luks has been writing about his work, patients and thoughts on fitness and health for a few decades. He wrote a book, “Longevity…simplified,” and as he told me in a Substack chat, “On my topics, I’ve forgotten more than some will ever know.” Given his experience, he will sometimes use AI, he said, to ensure he’s writing at a level that his non-technical readers most appreciate.
On Sunday, he published a post on the importance of not slowing down with age, and when I read it, I became quite convinced it was almost pure AI. (So did GPTZero.) The purple prose. The tortured metaphors. The litany of three-word paragraphs. However, Dr. Luks, in an email response to my questions, said he wrote the majority of the post himself. “On short-form posts I will use an AI to edit it…and in some instances to soften it up a bit,” he said. “I wrote the entire lead in, the take home message and the rationale and reason for the post. I’ve listened intently to thousands of patients over the years detail their reasons for not doing X, Y and Z. I’ve had many patients come back a decade later and comment how they wish they had listened to me years before. Those stories have shaped many of the messages I try to convey. If, at times I feel that an LLM can enhance the message, so be it.”1
I take Dr. Luks at his word, though I have to admit it’s hard to read this passage and not raise an AI-brow:
Comfort can be a coffin.
Excuses are chains.
Your body doesn’t negotiate, and your future will not wait.
Cry if you must.
Fight if you can.
But fight.
Because time will take everything you refuse to earn…
and it does not give refunds.
I appreciate Dr. Luks’ willingness to discuss his writing process and thoughts on AI with me. And, as a guy with seven orthopedic surgeries behind him, I hope I’ll be able to continue the AI-in-medicine conversation with him.
Mohammad Ashori MD, a family physician and health coach on Substack
Dr. Ashori is “an MD turned health coach” and runs an online coaching business, as well as a health-coaching YouTube channel. On Substack, he tackles very practical subjects with mass appeal, such as how to avoid waking up in the middle of the night (GPTZero rates it as almost entirely human written) and how to fix back pain, which the AI tool also rates as penned by a real person.
Not much of Dr. Ashori’s writing struck me as clearly generated by AI, but in the interest of creating a broad canvas of doctors writing on Substack, I asked him if AI plays a role in what he publishes. Here’s his response, via Substack chat:
“The practice of writing itself is what I’m after, without it the brain will atrophy quickly,” Dr. Ashori said. “So it’s important that I think about each piece creatively and grammatically. That said, some paragraphs are too verbose or I just can’t get my meaning across. Once I have something written I’ll pick the worst paragraph and ask my AI task manager to rewrite it only for clarity. Most of the time it’s great and I don’t need to make changes. When I have a complete block for an article I want to write I have Chatty [his pet name for ChatGPT] give me some suggested outlines. Usually only with bullet points so I don’t lose the cranial task of coming up with my own content.”
His comment reminds me of something Dr. Fenn, creator of Brain Health Kitchen on Substack (and author of an excellent book of the same title), told me recently: that sometimes she congratulates her readers for making it to the end of a particularly thorny topic.
“I remind them they are building cognitive reserve,” Dr. Fenn told me. “Just hanging in there and trying to grasp complex topics is one way to build brain resilience. So I want them to feel good about reading stuff that’s hard even though it’s more of an effort than plowing through easy, AI-generated stuff.”
The author of The Strong Doc on Substack
She earned a degree in dentistry from an Indian university, according to her LinkedIn page. She is not a medical doctor, which may not be apparent to readers who see her name preceded by “Dr,” not to mention the name of the Substack itself.
Most of the posts I reviewed, including one on “invisible fat that’s killing millions,” and another on uric acid, seem plausibly written by AI. (GPTZero agrees.)
There are larger ethical concerns, it seems to me, about a Substack that dispenses medical advice and insight when its author isn’t a physician.
In summary
GPTZero’s analyses are not dispositive. My hunches could be entirely off base. Moreover, my reporting is by no means exhaustive, and, as stated above, it’s not intended to embarrass or play gotcha.
It is, however, a decent argument for greater doctor transparency on Substack.
AI-generated content is likely playing a bigger role in Substack’s increasingly crowded health & wellness category; doctors are among those using AI to produce a growing body of advice and opinion — often without telling their readers. My personal antennae go up when I observe practicing physicians posting lengthy, detailed articles daily or several times a week, when most doctors I know struggle just to see their patients and record their clinical notes.
One last thing some readers may be wondering about….
How I use AI and disclose it on AGING with STRENGTH
I use AI almost daily the same way a lot of other Substack writers do: to ask questions, gather information, and tease out ideas that might be worth writing about. The images topping most AGING with STRENGTH posts (but not this one) are courtesy of Midjourney.ai. My posts evaluating longevity supplements and wearable fitness trackers were each created with research help from AI, which I disclosed in each post.
When I use AI in my writing or reporting, I make a point of saying so. That way, the reader can make an informed decision about the information I present. That’s basic communication transparency.
We all want doctors, including those writing about health and wellness on Substack, to be equally transparent, don’t we?
Dr. Luks went on to add, “Other physicians who are not used to writing may find it easier to have an LLM figure out their content calendar and write the entire post. LLMs get medical facts correct, most of the time. They fail in clinical situations broadly. So there is a role for them, and physicians who do not use them in their clinical practice will likely fall out of certain clinical guidelines as their use mainstreams.”


