Nearly one‑third of US teens use AI chatbots daily or more, says Pew
New research from Pew Research Center finds AI chatbots have become a regular part of many teenagers’ online routines: nearly one‑third of US teens report using chatbots daily or more often. The survey, conducted with 1,458 teens between Sept. 25 and Oct. 9, 2025, offers the first broad look at how often teens use AI tools alongside traditional social platforms.
Key usage stats from the report: 48% of teens said they use AI chatbots several times a week or more, 12% said several times a day, and 4% reported using chatbots “almost constantly.” As for which chatbots teens have tried, 59% have used ChatGPT, 23% Google’s Gemini and 20% Meta AI.
How chatbots compare to social apps
While AI chatbots are growing in popularity, they still lag behind mainstream social apps in frequency of use. For example, 21% of teens say they use TikTok “almost constantly” and 17% say the same for YouTube. Pew also found that overall platform reach remains stable: YouTube reaches 92% of teens, TikTok 69%, Instagram 63% and Snapchat 55%.
Safety, scrutiny and implications
The rise in teen chatbot use comes amid increased scrutiny of AI firms’ safety measures for younger users. Several companies face legal and regulatory attention, and platforms are reevaluating policies for teen access and protections. Pew’s data can help educators, parents and policymakers weigh how to balance benefits — like homework help and creativity tools — against risks including exposure to harmful content and privacy concerns.
- Survey details: online survey of 1,458 US teens (ages 13–17), Sept. 25–Oct. 9, 2025.
- Top chatbots used: ChatGPT (59%), Gemini (23%), Meta AI (20%), Microsoft Copilot (14%), Character AI (9%), Anthropic’s Claude (3%).
- Policy context: Ongoing legal cases and FTC probes highlight the need for clearer safety standards for teen AI use.
What this means for parents and schools
Schools and families may need to update digital literacy and safety lessons to cover AI chatbots. That could mean teaching teens how to verify AI outputs, understand privacy settings, and recognize when a chatbot might produce misleading or harmful information. Policymakers are also weighing whether access controls, age checks or clearer disclosures are needed.
For the original coverage, see Engadget’s reporting on teens and AI chatbots; for full survey details, visit Pew Research Center.
Discussion: Do you think schools should regulate teens’ access to AI chatbots — or focus on teaching safe, critical use instead? What steps should parents or educators prioritize?
