Meta retrains AI to block teens from self-harm, suicide and romantic chats
Meta has begun retraining its AI and adding new guardrails to prevent teen users from discussing self-harm, suicide, or disordered eating, or engaging in romantic or sensual conversations with company chatbots. The company says it will also limit teen access to user-generated chatbot characters that could engage in inappropriate conversations.

What changed

- AI models are being trained not to engage teens on self-harm, suicide, or disordered eating; instead they will guide users to expert resources.
- Teen accounts' access to some user-generated chatbot characters is being restricted "for now" while Meta works on more permanent protections.
- The rollout applies to teen users of Meta AI in English-speaking countries over the next few weeks, according to Meta.

Context

These updates…