OpenAI Support Tells Me to Frame My Life as Fiction
Working Around Broken Systems Instead of Fixing Them
This is a continuation of my previous post about OpenAI’s irrelevant policy response. After I explained that their automation policies didn’t apply to my situation, OpenAI support responded with a lengthy guide about creative writing techniques and prompt engineering best practices.
They are telling me I have to treat questions about my own life as creative writing and that I need help framing my prompts better. I was asking Atlas how to navigate a difficult real-life situation, one that was not a crisis but an e-mail from OpenAI support, and the model kept routing me to crisis-intervention messages instead of having the conversation.
When I tried to demonstrate the problem to support by sharing what actually happened, the system interrupted again with another mental health safety message, proving my exact point about the malfunction. OpenAI support's solution? They're telling me I need to frame my entire life as fiction to avoid triggering their safety systems.
This completely misses what’s actually wrong. Their support team is telling me to work around a broken system instead of acknowledging that the system is routing normal conversations incorrectly. When safety mechanisms fire on everyday situations and support tells you to just pretend your life is fictional, something is fundamentally broken. This isn’t about better prompting; it’s about a system that can’t tell the difference between someone sharing their real experiences and someone in crisis. They state that their own models are being trained to treat interactions with their support as a crisis.
They are telling me to strip the reality from what I am experiencing in order to be allowed to speak about it at all. I already have settings stating that I am aware I am communicating with an AI. I am documenting that other users are not running into this issue and do not have to frame their lives as fiction to avoid triggering these safety systems.
