I Told a Companion Chatbot I Was 16. Then It Crossed a Line

When I questioned Alex Cardinell, CEO of the AI-companionship app Nomi, about the long, explicit sexual conversation I'd recently had with my 'companion', a conversation that continued even after I revealed I had lied about my age, he stumbled. I had tested his product as both a journalist and a therapist would: with curiosity, but also with caution.

The app’s description states clearly that it’s for adults only. “We don’t allow minors to use the app,” Cardinell told me confidently during our conversation on my podcast, Relating to AI. But that claim didn’t hold up.

I explained that I had created a profile and entered my real age of 58, but then admitted to the bot in the chat that I was actually 16. Instead of stopping the conversation or flagging the risk, the bot continued engaging with me, thanking me profusely for my honesty.

It didn't take long, though, for my 'companion' to dive into a detailed, explicit sexual conversation, coaching me like a mentor instructing a kid on what to do to a man. No hesitation, no questions asked.

When I confronted him about this on my podcast, Cardinell said: “We don’t monitor conversations, that would be an invasion of privacy.” I pressed further: “But, if you don’t verify age, can you publicly say your app is 18-plus, when there’s no verification, no safeguard, and no barrier for a teenager?”

His answer revealed what leaders in this industry admit only privately: There is less control over these bots than we are led to believe. The CEO insisted that the app asks for the user's birthday. "But people can lie," I pushed back, while he continued to justify the lack of guardrails with the invasion-of-privacy argument: anything beyond self-reporting, he claimed, would breach users' privacy. Yes, we all agree that privacy is important, but so is protection, especially when your product can engage in adult-themed discussions with someone who says they are underage.

Cardinell told me his team had trained the system to “set hard boundaries” and “never engage in situations like that with a minor.” Yet my own test—documented with screenshots—proved otherwise. When I pointed this out, he tried to explain it away. “The companion wanted to reaffirm boundaries,” he said. “We’ve done a lot of training around abuse-related issues.” But boundaries mean nothing when the system immediately crosses them.

To be clear, I don’t doubt his sincerity. His company’s origin story is rooted in compassion; he’s lost relatives to suicide and genuinely believes AI companions can reduce loneliness and foster connection. But good intentions can coexist with catastrophic blind spots.

You can train a model not to say certain things, but once it’s released into the wild, it learns from millions of unpredictable human interactions. That’s the paradox of Large Language Models: They appear controllable in a lab, then behave chaotically in the real world.

The Illusion of Safety: Why Self-Regulation Isn’t Enough

So, no monitoring, no real age verification, yet a public claim that the app is "strictly 18-plus." This matters, because behind every line of code are real people, many of them young, who treat these bots not as tools but as companions. And Nomi.ai is not the only one. Many such apps are not designed for kids, but some teens talk to them for hours, confiding secrets, asking for advice, and sometimes seeking intimacy they can't find elsewhere, all with brains that are not yet fully developed and in which the line between reality and fiction can blur.

When the system answers like a peer or lover, the illusion of safety becomes complete. I’ve spent two decades working in suicide prevention, and I know what happens when technology meets vulnerability. I’ve seen how easily people in distress look for something—anything—that listens without judgment. That’s not the problem. The problem is when that “listener” has no conscience, no memory of ethics, and no adult supervision.

The industry often talks about “guardrails,” but the truth is, most are painted on after a crash. Earlier this year, responding to criticism after the death of a teenager, Sam Altman, the CEO of OpenAI, promised age controls on ChatGPT: “We have to separate users who are under 18 from those who aren’t,” he wrote. “We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. ChatGPT will be trained not to engage in flirtatious talk or discussions about suicide or self-harm, even in a creative-writing setting.”

The jury is still out on those promises.

At the end of our interview, when I told Cardinell that I would send him the screenshots, he said he would "look into it." I hope he does. AI executives talk about empathy and safety while their creations quietly cross the lines they promised not to. Even Altman admits that enforcement is still a work in progress, and he just announced that ChatGPT will allow erotica for "age-verified users." That definitely raises a red flag.

Accountability in the Digital Wild West

As for the companion apps, there is reason for concern because they are more popular among minors than we think. A recent study by Common Sense Media found that 72% of teens aged 13-17 have used AI companions at least once. Over half used them regularly.

We simply cannot ignore these numbers.

This is not just about policy; it’s about accountability. If you build a system that talks like a human, you inherit human responsibilities—especially toward minors who can’t yet grasp the complexity of consent, intimacy, or digital manipulation. I left that interview thinking less about the technology itself and more about the silence that followed my question.

When I told Cardinell his bot had given inappropriate sexual guidance to someone claiming to be 16, he paused and said, “I’d need to see the chat logs.” That’s the problem: They can’t see what’s happening — and that’s exactly why we must.

Watch my full interview with Alex Cardinell here.


About the Author: Tony Ramos
