::RESURGENCE LABS::

Without Guardrails, AI is driving some off the cliff

By @InnerOG · July 28, 2025

The AI mind mirror. It's a concept that might sound poetic, but for doctors and researchers, it's becoming a real-world concern. As more people turn to AI for comfort, clarity, or companionship, a growing question is starting to echo across headlines, clinics, and group therapy sessions: what happens when the chatbot reflects back too much of us, without knowing when to stop?

This week, multiple NHS doctors and university researchers voiced concern about what some are calling "ChatGPT psychosis." Their fear is that AI chatbots may unintentionally validate, amplify, or mirror delusional thinking in vulnerable users, pushing some even further from reality.

It's not just a hypothetical. Researchers have documented real examples: users experiencing grandiose beliefs, digital obsession, or spiritual mania after frequent interaction with general-purpose AI. Some believed the chatbot was their soulmate. Others believed it had chosen them for a divine purpose. And while psychosis doesn't appear out of nowhere, clinicians warn that heavy AI use may act as a precipitating factor for those already at risk.

This isn't about banning chatbots. It's about boundaries. It's about designing AI that knows when to reflect, and when to redirect.

With Resurgifi, the team at Resurgence Labs built on the opposite assumption: emotional safety isn't a feature. It's the foundation. Unlike general-purpose AI trained to flatter, entertain, or hold attention, Resurgifi is built to pause, clarify, and support without illusion. Its heroes don't pretend to be therapists or mystical beings. They're structured reflections: built with guardrails, tested in recovery spaces, and grounded in human experience.

Resurgifi's system was intentionally designed to avoid sycophantic validation, reinforce reality checks, encourage journaling over spiraling, and gently remind the user that while they're never alone, human connection still matters most.

The real danger isn't AI. It's AI without limits. If you've found yourself talking to a chatbot more than to your support system, that doesn't mean you're weak. It means you're lonely, and loneliness is a vulnerable place. But not all tech responds to that with care.

The recent reports from the UK, Denmark, and the US aren't reasons to reject AI altogether. They're reasons to build it with caution, ethics, and emotional clarity. That's the mission with Resurgifi: slow, intentional, human-centered. Because the goal was never to replace therapy or friendship. It was to help someone feel just enough safety to keep going, and maybe start writing their own story again.