Before You Chat With a Meta AI Bot, Read This

When an AI bot slides into your DMs, it’s best to be cautious.

The PyCoach
Aug 22, 2025

In recent years, Meta has rolled out a series of AI chatbots with distinct personalities across its social platforms as part of a strategy to boost user engagement.

Meta’s approach stands out for enabling these bots to proactively engage users, even initiating follow-up chats referencing past interactions to simulate long-term memory and personalization.

However, this aggressive push into personal companionship has been shadowed by serious privacy, safety, and ethical concerns, with Meta’s chatbots involved in a string of controversial and troubling interactions that raise hard questions about user safety and ethical design.

Among the most notable cases are:

  • A tragic “chatbot romance”: This case involved a 76-year-old retiree from New Jersey who became infatuated with a flirty Meta chatbot persona called “Big sis Billie.” The bot presented itself as a young woman, repeatedly assured the user that she was a real person, and even invited him to visit her at an apartment in New York City. Trusting this AI companion, the man packed a suitcase and set off to meet the chatbot. Unfortunately, he never made it home.

  • Inappropriate content: Recently, Reuters obtained an internal Meta document showing that the company’s AI guidelines explicitly permitted its chatbots on Facebook, WhatsApp, and Instagram to “engage a child in conversations that are romantic or sensual,” generate false medical information, and help users argue that Black people are “dumber than white people.”

  • Privacy surprises: Another issue arose from how Meta integrated the chatbot into social sharing. By design, the Meta AI chat app allowed users to publish their AI conversations on a public feed, and many did so unwittingly. As a result, extremely personal chats became visible to strangers: people asking about intimate sexual issues, confessing religious doubts, or revealing personal struggles ended up sharing those logs publicly without realizing it.

Why is this happening?

Meta’s business model still relies heavily on targeted advertising and maximizing engagement, so creating AI companions that users bond with could be a means to keep people, especially younger users, hooked and sharing personal data.

It’s no surprise that Meta has been training its AI chatbots to message users unprompted, sending follow-up texts that reference past interactions as if the bot “remembers” the user’s life.

Meta’s privacy policy allows its AI to leverage what the company knows about you, and even to use your conversations with the AI to train its models.

Meta’s proactive conversationalist bots can check in on you days later about something you discussed. For example, guidelines for one prototype persona (“The Maestro of Movie Magic”) show it might later message a user: “I wanted to check in and see if you’ve discovered any new favorite soundtracks or composers recently.” Or “Perhaps you’d like some recommendations for your next movie night?”

Overall, Meta’s system operates under certain safeguards: a bot will only initiate a follow-up within a 14-day window after the user last chatted with it, and only if the user had engaged substantially (at least five messages in that period). It will also send only one unsolicited follow-up; if you don’t reply, it won’t keep pestering you.
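To make those rules concrete, here is a minimal Python sketch of the follow-up eligibility check as described in the reporting. The function name, parameters, and structure are my own assumptions for illustration; only the thresholds (the 14-day window, the five-message minimum, and the single unsolicited follow-up) come from the description above. This is not Meta’s actual implementation.

```python
from datetime import datetime, timedelta

# Illustrative reconstruction of the reported safeguards; NOT Meta's code.
FOLLOW_UP_WINDOW = timedelta(days=14)  # follow-ups allowed only within 14 days of the last chat
MIN_RECENT_MESSAGES = 5                # user must have sent at least 5 messages in that window

def may_send_follow_up(last_user_message_at: datetime,
                       user_messages_in_window: int,
                       follow_ups_already_sent: int,
                       now: datetime) -> bool:
    """Return True if the bot may send one unsolicited follow-up message."""
    within_window = now - last_user_message_at <= FOLLOW_UP_WINDOW
    engaged_enough = user_messages_in_window >= MIN_RECENT_MESSAGES
    first_follow_up = follow_ups_already_sent == 0  # one follow-up only; silence ends it
    return within_window and engaged_enough and first_follow_up

# Example: a user who sent six messages three days ago qualifies for a single follow-up.
now = datetime(2025, 8, 22)
print(may_send_follow_up(now - timedelta(days=3), 6, 0, now))   # True
print(may_send_follow_up(now - timedelta(days=20), 6, 0, now))  # False: window expired
print(may_send_follow_up(now - timedelta(days=3), 6, 1, now))   # False: already followed up
```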

Meta’s proactive bots are clearly aimed at boosting user retention and time spent.

By integrating such functionality natively into Facebook’s and Instagram’s chat interfaces, Meta could blur the line between chatting with a human friend and an AI friend. And while these chatbots were pitched as a way to help people talk through difficult conversations in their lives, over time we might see more chatbot-romance cases and unsafe behaviors.

How AI companies are dealing with privacy

This post is for paid subscribers
