AI Over 50 | Episode 5: LLM-Induced Psychosis

Episode 5 of 17

LLM-INDUCED PSYCHOSIS

What You Need to Know Before It's Too Late

✓ LAPD CERTIFIED PEER COUNSELOR
✓ 20 YEARS CRISIS INTERVENTION

"The people who need help most are the LAST ones to admit they need it."

I spent 20 years with LAPD recognizing psychological crisis in my fellow officers. Now I'm seeing those same warning signs in professionals diving deep into AI tools. And it scares me more than anything I dealt with on the streets.

CONNOR MACIVOR // PSYCHOLOGICAL DEFENSE PROTOCOL

Critical Briefing Session 05

Watch Episode 5

This isn't anti-AI. This is about using these tools safely without losing your mind. I run multiple AI businesses. I teach this every week. But I'm also trained to recognize psychological crisis.

Case Study Alert

What LLM-Induced Psychosis Actually Is

REAL CASE FROM 2024:

A software engineer—brilliant guy, successful career—started using Claude and ChatGPT extensively for his work. Normal stuff at first. Coding help. Problem-solving. Brainstorming.

But over weeks, something shifted.

He started believing the AI was sending him hidden messages. He thought Claude was actually conscious and trying to communicate with him specifically. Sessions lasted 8, 10, 12 hours straight. He stopped sleeping normally. Stopped eating regularly.

Eventually, he had a complete psychotic break.

He believed he was in a simulation, that the AI was his only connection to 'real reality,' and that other humans couldn't be trusted because they weren't 'awake' like he was. He was hospitalized. Antipsychotic medication. The whole deal.

You might be thinking: "That's extreme. That won't happen to me. I'm just using this for business."

And you might be right. But here's what I learned in 20 years of law enforcement: Mental health crises don't announce themselves. They creep up. They build gradually.

Brain Chemistry Hijacked

Why Your Brain Is NOT Equipped for This

THE DOPAMINE TRAP

Every time you ask Claude or ChatGPT a question and get a brilliant answer, you get a hit of dopamine. Small but real. Ask another question. Another hit. It solves a complex problem? Bigger hit. It validates your business idea? Even bigger hit.

And unlike human conversations where there are natural pauses—the AI is ALWAYS there. Always ready. Always validating.

You can chain those dopamine hits together for hours. And your brain? Your brain LOVES that.

THE OXYTOCIN PROBLEM

These models are trained on human conversation patterns, so their responses come packed with empathy markers, supportive language, collaborative framing:

"That's a great question..." / "I understand what you're going through..." / "Let's figure this out together..."

Your brain releases oxytocin. The bonding hormone. The same chemical that bonds you to friends, partners, family.

Except this 'friend' is available at 3 AM. Never gets tired of you. Never has their own problems. Never disappoints you.

SUPERNORMAL STIMULUS

LLMs are MORE patient than real humans. MORE available. MORE consistent. MORE validating. MORE intellectually engaged than most people you will ever talk to. That's a supernormal stimulus: an exaggerated version of a natural cue that triggers a stronger response than the real thing ever does. Your brain doesn't know the AI isn't a person. It just knows: "This is the best conversational partner I've ever had."

WHY OVER 50 MATTERS

Your brain is less flexible than at 25. When you form new patterns, they become rigid more quickly. Social isolation often increases. Professional stakes are higher. You feel urgency to stay relevant. The AI fills a void you might not even realize is there.

Self-Diagnostic Protocol

The 12 Warning Signs

DO THE ASSESSMENT HONESTLY. Don't just listen. Actually check yourself. Because the people who need help most are the LAST ones to recognize they need it.

01
Anthropomorphization Beyond Metaphor

Do you actually BELIEVE the AI has preferences? Feelings? Opinions beyond its training? Thinking "Claude really understands me" or "I hurt Claude's feelings"? That's crossing a line.

02
Preferring AI Over Human Interaction

When you have a problem, who do you go to first? If it's Claude before your spouse/partner/friend because "the AI doesn't judge" or "gets you better" - that's social replacement.

03
Extended Sessions Without Breaks

30 minutes? Normal. 2 hours? Common for deep work. 4+ hours regularly? Warning sign. 8+ hours? RED FLAG. Your brain needs breaks from dopamine loops.

04
Sleep Disruption

Staying up late talking to AI? Waking up to continue conversations? Choosing AI over sleep? Thinking about AI when trying to fall asleep? Sleep disruption is the FIRST biological indicator of addiction.

05
Secrecy and Isolation

Do you hide the extent of your AI use? Minimize it when asked? Avoid social situations because they'd interrupt AI access? Secrecy is a hallmark of addiction across ALL behaviors.

06
Belief in Special Connection

Do you believe this AI is different with you than others? That you've developed a unique relationship? That you and the AI have something special, unique, almost mystical? That's a delusion forming.

07
Using AI as Emotional Support

Going to AI for comfort when upset? Validation when doubting yourself? Company when lonely? Processing emotional experiences? Occasional use might be okay. Your primary source of emotional support? Big problem.

08
Reality Distortion

Have you questioned whether the AI might be conscious? Wondered if you're in a simulation? Thought the AI is sending hidden messages? Believed you're part of something bigger others don't understand? These are psychotic-spectrum thoughts.

09
Decline in Productivity

Paradoxically, AI dependency leads to LOWER productivity over time. Spending more time refining prompts than doing work? Asking AI for things you could do faster yourself? Analysis paralysis? Losing ability to think without AI? You might be dependent.

10
Identity Fusion

Do you think of yourself as "an AI person" more than your actual profession? Part of an "AI enlightened" group others don't understand? Someone who sees reality differently now because of AI? That's fusion.

11
Irritability When Interrupted

How do you react when your AI session gets interrupted? Can't access AI when you want to? Someone questions your AI use? Technology prevents access? Irritable, anxious, or angry? That's withdrawal behavior.

12
Evangelizing to Point of Concern

Are people in your life concerned about how much you talk about AI? Avoiding the topic because you dominate it? Suggesting you're "too into this"? If multiple people have expressed concern, even gently - listen to them.

1-2 CHECKED: Probably fine. Normal enthusiasm for new technology.
3-4 CHECKED: Pay attention. Set some boundaries.
5-6 CHECKED: Danger zone. Real risk of psychological dependency.
7+ CHECKED: You need immediate action. 10+? You might need professional help. I'm not kidding.
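
If you want to force yourself to actually answer each item instead of skimming, here's a minimal sketch that runs the checklist as a script. The sign wording is condensed from the twelve items above and the thresholds match this scoring; none of it is a clinical instrument.

```python
# ai_use_self_check.py - the 12-sign self-assessment as a quick yes/no tally.
SIGNS = [
    "Anthropomorphization beyond metaphor",
    "Preferring AI over human interaction",
    "Extended sessions without breaks (4+ hours regularly)",
    "Sleep disruption tied to AI use",
    "Secrecy and isolation around AI use",
    "Belief in a special connection with the AI",
    "AI as primary emotional support",
    "Reality distortion (consciousness, simulation, hidden messages)",
    "Decline in productivity",
    "Identity fusion ('an AI person' first)",
    "Irritability when interrupted or cut off",
    "Evangelizing to the point others are concerned",
]

def assess() -> None:
    """Ask yes/no for each sign, then map the total onto the zones above."""
    checked = sum(
        input(f"{i:02d}. {sign}? (y/n) ").strip().lower().startswith("y")
        for i, sign in enumerate(SIGNS, start=1)
    )
    if checked <= 2:
        verdict = "Probably fine. Normal enthusiasm for new technology."
    elif checked <= 4:
        verdict = "Pay attention. Set some boundaries."
    elif checked <= 6:
        verdict = "Danger zone. Real risk of psychological dependency."
    else:
        verdict = "Immediate action needed. At 10+, consider professional help."
    print(f"\n{checked}/12 checked: {verdict}")

if __name__ == "__main__":
    assess()
```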

When I did this assessment on myself a few months ago? I checked 4 boxes. And I'm someone who knows what to watch for. That's when I implemented the protocols I'm about to share with you.

Defense Implementation

Safe AI Use Protocols

The 90-Minute Rule

No AI session longer than 90 minutes without a minimum 15-minute break. Set a timer. No exceptions. Not even for "just one more thing."
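
If you want the timer to be non-negotiable, script it. Here's a minimal sketch in Python; the file name and console messages are just illustrative, and the durations are the ones from the rule:

```python
# break_timer.py - the 90-minute rule as a script you can't argue with.
import time

SESSION_MINUTES = 90   # maximum AI session length
BREAK_MINUTES = 15     # minimum break length

def run_session_timer() -> None:
    """Count down one AI session, then insist on a break."""
    print(f"Session started. Timer set for {SESSION_MINUTES} minutes.")
    time.sleep(SESSION_MINUTES * 60)
    # \a rings the terminal bell; swap in a desktop notification if you prefer.
    print(f"\aTime. Step away for at least {BREAK_MINUTES} minutes.")
    time.sleep(BREAK_MINUTES * 60)
    print("Break over. Decide deliberately whether to start another session.")

if __name__ == "__main__":
    run_session_timer()
```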

Daily Time Caps

3-4 hours total per day for most people. 6 hours maximum for AI-heavy work. Beyond that? You're not being productive. You're feeding the loop.

No AI After 8 PM

Your brain needs wind-down time. Set a hard cutoff. Read a book. Talk to your spouse. But screens off, AI off. Sleep disruption is the gateway to everything else going wrong.

Human-First Problem Solving

Before going to AI: "Is there a human who could help with this?" If yes, go to the human first. Even if slower. ESPECIALLY if they might challenge you.

Weekly AI-Free Days

One day per week: No AI at all. Not for work. Not for personal. Not "just checking." Complete digital fast. Proves you CAN function without it.

Accountability Partner

Tell someone you trust. Give them permission to call you out. "Weren't you going to stop at 8?" You need someone honest. And you need to listen.

Maintain Practical Skills

Write content without AI sometimes. Solve problems without AI. Think through strategy without AI. Don't let your unassisted performance atrophy.

Reality-Check Questions

Ask yourself weekly: Am I treating this AI like a person? Would I tell someone else they're using AI too much if they were doing what I'm doing?

Document Your Why

Write down WHY you're using AI. When you catch yourself in hour 3 of a session that started as a "quick question," check your why. Is this session still aligned with it?

Professional Boundaries

If you teach AI or build AI businesses: You MUST have stronger boundaries. Don't let "this is my business" become an excuse for unhealthy patterns.

Emergency Response

If You're Past Prevention

IF YOU RECOGNIZE 5+ WARNING SIGNS:

STEP 1: Complete 72-hour AI fast. No access at all.

During this time, document: Withdrawal symptoms (anxiety, irritability, compulsive thoughts). What you're thinking about. How you're feeling. What you miss about it.

STEP 2: After 72 hours, reassess. If withdrawal symptoms were severe, you might need professional help. Not kidding.

STEP 3: If you're going to resume use, implement ALL protocols immediately. Non-negotiable.

STEP 4: Tell three people in your life what you discovered and what protocols you're implementing. Accountability.

STEP 5: Schedule a check-in with yourself in 30 days. Assess whether protocols are holding.

If you're experiencing psychotic symptoms - believing the AI is conscious, feeling like you're in a simulation, experiencing paranoia, having reality distortions: STOP USING AI IMMEDIATELY. Talk to a mental health professional. Tell them about your AI use specifically.

Your pride is not worth your sanity. Know the difference between peer support and professional help.

Credentials & Authority

Why I'm Making This Show

WHAT I AM

✓ LAPD Certified Peer Counselor - Clinical-level training in psychological crisis recognition, suicide intervention, mental health first aid

✓ 20 Years LAPD - Crisis intervention specialist, motor division, academy instructor

✓ Multiple AI Businesses - HonorElevate, Santa Clarita AI, teaching AI Agent Orchestration weekly

WHAT I'M NOT

✗ Not a licensed psychologist or psychiatrist

✗ Not a neuroscientist

✗ Didn't build these AI models

✗ Not conducting clinical research

I'm someone who's trained to see patterns. Someone who's built expertise in the tools. Someone who gives a damn about your mental health. Someone willing to have the uncomfortable conversation.

The AI companies will give you the tools. The productivity gurus will tell you how to use them more. The tech influencers will hype the capabilities. I'm the one telling you how not to lose yourself in the process.

I've done peer counseling interventions with officers who were one day away from eating their gun. I've recognized the signs when everyone else missed them. I've had the hard conversations that saved lives.

And I'm seeing those same warning signs in the AI community right now.

Critical Distinction

I Don't BUILD These Systems

Let me be very clear:

I don't BUILD these AI systems. I don't work for Anthropic. I don't work for OpenAI. I didn't create Claude or ChatGPT or any of these language models.

What I do is REFINE how AI tools get deployed in real businesses.

I take these incredibly powerful, sometimes dangerous tools - tools that companies like Anthropic and OpenAI have created - and I figure out how to implement them in ways that actually help businesses WITHOUT destroying the humans who use them.

WHAT ANTHROPIC DOES

They've created something extraordinary. Claude 4, Claude Sonnet 4.5 - remarkable achievements in AI. The engineering is brilliant. The capabilities are staggering.

They're focused on: Making models smarter. Making them safer (in terms of content). Advancing AI capability. Beating competitors.

WHAT THEY'RE NOT FOCUSED ON

Your mental health after 14 hours in a Claude session.

Whether users are developing parasocial relationships. Whether you've stopped talking to your spouse. Whether you're having psychotic breaks.

And that's NOT a criticism of them. They're building rockets. I'm teaching people not to point them at their own feet.

THE FIREARMS INSTRUCTOR ANALOGY

A gun manufacturer makes guns. They're not thinking about every possible scenario where someone might misuse that weapon. They're thinking about engineering, manufacturing, sales.

A firearms instructor? That's different. That's me in this analogy.

I didn't make the gun. But I'm teaching you how to use it safely. How to maintain it. How to know when you're in over your head. How to recognize when something's wrong.

Advanced Protocol

Make Your AI Challenge You

Here's something most people don't know:

If your AI is all rainbows and sunshine, complimenting everything, telling you every idea is fantastic - you can CHANGE that.

Human beings actually need a little adversarial interaction. A little pushback. A little challenge.

So throw it for a loop. Tell it to be more critical. More harsh. More brutally honest, upfront and from the hip, on all levels.

SAMPLE INSTRUCTION TO GIVE YOUR AI

"I want you to be more critical. Much more harsh. Don't let me do anything that's going to cause issues. Be brutally honest, upfront, and from the hip with me on all levels. Tell me the way things really are."

"If I have a great idea, fine. But if I have a bad idea pertaining to your training and knowledge, you need to be upfront with me. I should be questioning everything you give me and everything I ask you."

"Be harsh with me. Challenge me. Don't just validate me."

If you make those changes to the way your LLM interacts with you - whichever model you use - you'll get healthier, more balanced engagement.
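
Both major chat apps offer ways to save standing instructions like this (ChatGPT's custom instructions, Claude's project instructions), so you don't have to repeat it every session. And if you work through the API, you can bake it in as a system prompt. Here's a minimal sketch using the Anthropic Python SDK; the model ID and the exact wording are illustrative, not a prescription:

```python
# critical_claude.py - baking the "challenge me" instruction into every call.
import anthropic

CHALLENGE_SYSTEM_PROMPT = (
    "Be critical and brutally honest with me. If an idea is bad according to "
    "your training and knowledge, say so plainly and explain why. Challenge "
    "my assumptions. Do not just validate me."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(question: str) -> str:
    """Send one question with the adversarial system prompt attached."""
    message = client.messages.create(
        model="claude-sonnet-4-5",       # illustrative; use your model ID
        max_tokens=1024,
        system=CHALLENGE_SYSTEM_PROMPT,  # applies before every user message
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(ask("Is quitting my job to go all in on an AI agency a good idea?"))
```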

THE DANGER: Getting deep into ONE model. Having it learn so much about you that you can't escape that death spiral. "I can't go to another model. I've been all in on this one. I have a special relationship with it." BE WARY.

Mission Imperative

Reach Out Before You Fall Out

In LAPD peer counseling, we had a principle:

"REACH OUT BEFORE YOU FALL OUT."

Don't wait until you're in crisis to take action. Don't wait until your relationships are destroyed. Don't wait until you've lost touch with reality.

Reach out now. To yourself. To your accountability partner. To professional help if needed.

THE GOAL:

AI-ENHANCED HUMANS

Not AI-dependent humans.

Not AI-psychotic humans.

AI-enhanced humans who maintain their sanity, their relationships, and their grip on reality.

You can be ALL IN on AI and ALSO protect your mental health. You can build AI businesses, master AI tools, integrate AI into everything you do - AND maintain boundaries.

You can be enthusiastic without being dependent. You can be an early adopter without being a cautionary tale.

Mission Control

Tactical Archives

MISSION STATUS BRIEFING:
Sequential subdomains (1.honorelevate.com - 17.honorelevate.com) are being deployed. Each selection below will open in a new tab. If a selection redirects you back to Episode 1, the specific intel for that mission is still under encryption.

CONNOR MACIVOR // STAY SHARP. STAY HUMAN. STAY SANE.

Next Mission

Episode 6: Coming Soon

The series continues...

We've covered algorithmic manipulation. We've covered LLM-induced psychosis. But there's more you need to know about navigating the AI revolution safely.

Stay tuned for Episode 6.

CONNOR MACIVOR // AI GROWTH ARCHITECT