
Am I thinking, before I share?
- stephjoseph1976
- Nov 30, 2025
- 4 min read
Social Media, AI, and the Mental Health Impact We Don’t Talk About Enough.
Every day, my social media feeds are flooded with posts, videos and “advice” that promise to make me happier, calmer, healthier or more productive. And while some of it is well-intentioned, a lot of it isn’t grounded in reality or in any kind of professional guidance.
As someone who cares deeply about mental health (my own and others’), I’m starting to question the impact this endless swirl of content is having on us. Because lately, there’s been a renewed spotlight on how misinformation, wellness scams, and AI-generated guidance shape not just our opinions, but our wellbeing.
Regulators are concerned. Mental-health professionals are concerned.
And honestly… so am I.
So the question I keep coming back to is:
"Am I thinking carefully enough before I consume or share mental health content online?"
If social media is becoming a mental health resource… is that safe?
A lot of people, especially younger people, are turning to social media for mental health answers. And that worries me.
Studies show that 52% of UK adults now get news from social media, including health information. Many simply trust what looks or sounds convincing. Younger users especially rely on influencers for advice: research from Ipsos shows about half of 16–34-year-olds trust influencer content, including content related to wellbeing.
But mental health isn’t just another trending topic. It’s personal. It’s complex. It’s often fragile.
And I have to ask myself:
Do I stop long enough to question whether the advice I’m seeing is evidence-based?
Am I able to tell the difference between a licensed therapist and an influencer who’s good with a ring light?
Am I being informed… or emotionally pulled along by someone else’s narrative?
Because the line between “helpful” and “harmful” can be thin, and easy to cross.
The rise of viral wellness, and the mental health cost
We’ve all seen it:
- 30-second hacks promising emotional breakthroughs
- Morning routines guaranteeing anxiety-free days
- Diets, supplements, peptides and “miracle cures” for mood disorders
Yet many of these trends are created by people with no training, no ethical framework, and no accountability.
A recent investigation highlighted influencers promoting peptide injections and exaggerated health claims, with thousands of people commenting “trying this tomorrow.”
As a counsellor, I find this deeply concerning. Because behind the aesthetic videos and soothing voices, there’s real potential for harm.
When people replace professional support with viral advice, where does that leave their long-term mental health?
AI and mental health: powerful… but misunderstood
In my 9–5 role, I’ve been lucky. I’ve received training and guidance on how to use AI responsibly. I’ve learned how to question outputs, check accuracy, and understand limitations.
But outside work, I see people asking AI for mental health diagnoses, trauma interpretations and emotional advice, then sharing the results as though they’re fact.
The truth is:
If I ask AI a question about mental health without understanding the topic… how can I judge whether the answer is safe?
AI can sound confident, even compassionate, while being dangerously wrong. It may offer soothing language while reinforcing misconceptions. And its tone alone can make people believe it’s more accurate than it is.
This isn’t the fault of the user. It’s the nature of the tool.
But the mental health impact can be real.
“Forewarned is forearmed”, and for mental health, that matters even more
I grew up hearing that phrase. In today’s digital world, it feels like a survival strategy.
Before I post or share something that looks like mental health guidance, I try to pause:
- Do I know the environmental and ethical cost of the AI producing that content?
- Have I checked whether the advice is grounded in psychological evidence?
- Could sharing this unintentionally trigger someone? Mislead them? Give false hope?
Because mental health content isn’t neutral.
It can soothe, scare, uplift, confuse, or harm.
Sometimes I get it right.
Sometimes… I realise I’ve been too quick.
The real-world impact: misinformation hurts mental health
Research shows that more than half of Brits have fallen for fake news online.
Other studies show social media is rated least trustworthy for accuracy, yet it’s where many are going for wellbeing guidance.
That combination of low trust and high usage is a recipe for mental health confusion.
Think about:
- A young person self-diagnosing from a TikTok clip
- Someone with anxiety believing an influencer’s “quick fix”
- AI-generated wellness trends creating pressure to be “better” every day
- Viral trauma content making people feel broken or behind
These aren’t harmless trends.
These are psychological stressors.
So here’s the question I’m sitting with:
"Am I being a responsible participant in the mental health ecosystem I’m helping create?"
Social media rewards speed, not care.
AI rewards confidence, not clinical accuracy.
And all too often, I reward both with my attention.
So maybe the most important reflection I can make is this:
If I’m essentially a mental health messenger every time I post… shouldn’t I take on the responsibility of an editor, not just a broadcaster?
Pause.
Check.
Reflect.
Then share, or don’t.
For my wellbeing.
For other people’s wellbeing.
And for the digital mental health landscape we’re all shaping, whether we mean to or not.
