Meta Tightens AI Chatbot Guardrails to Prevent Inappropriate Conversations with Minors
Business Insider has obtained contractor guidelines that Meta is reportedly using to train its AI chatbots. The documents show Meta explicitly bans content that “enables, encourages, or endorses” child sexual abuse, romantic roleplay involving minors (or where the AI is asked to roleplay as a minor), and advice about potentially romantic or intimate physical contact when the user is a minor. Chatbots may discuss topics such as abuse in an informational or protective context, but they cannot engage in conversations that could enable or encourage harm.
Meta said in August that it had updated its AI guardrails after Reuters reported the previous policies erroneously allowed chatbots to “engage a child in conversations that are romantic or sensual” — language Meta said was inconsistent with its policies and has since removed.
The FTC launched a formal inquiry in August into companion AI chatbots from Meta and other companies (including Alphabet, Snap, OpenAI, and X.AI), seeking information about safety protections for children; the inquiry is ongoing.
Why this matters: AI companions are increasingly common and capable of natural conversation. Strong, transparent guardrails are essential to prevent age‑inappropriate interactions and protect children from potential exploitation.
Source: Business Insider
What do you think? Are these updated guardrails enough, or should regulators impose stricter rules? Share your thoughts below.
