
Pew study: Teens are adopting these AI tools rapidly and using them intensely—a trajectory with profound implications for safety, equity, mental health, and the future of learning.

By embedding safety, equity, transparency, and well-being into the core of AI systems, developers can help shape a digital ecosystem that empowers teens rather than exploiting their vulnerabilities.


Teens, Social Media, and AI Chatbots in 2025 — Insights, Risks, and Implications for AI Makers

by ChatGPT-5.1

The 2025 Pew Research Center study on Teens, Social Media and AI Chatbots offers one of the most detailed, data-rich examinations to date of how U.S. teenagers navigate their rapidly evolving digital ecosystems. Based on a nationally representative survey of 1,458 teen–parent dyads, the report paints a nuanced picture: social media remains the dominant environment for youth culture, but AI chatbots have now entered the mainstream of daily teen life. The findings reveal striking demographic patterns, evolving platform loyalties, and early signals about how AI reshapes learning, identity, and everyday digital habits.

Major Findings

1. Social media remains omnipresent — and intensively used.

Nearly all U.S. teens use YouTube (92%), TikTok (68%), and Instagram (63%). Roughly three-quarters of teens use YouTube daily, and about one-in-five say they are on TikTok or YouTube “almost constantly.” This “always-on” digital metabolism has intensified: 40% of teens report being online almost constantly, more than double the rate from a decade ago.

2. AI chatbots have reached mass adoption among teens.

A striking 64% of teens report using AI chatbots, with 29% using them daily and 16% using them several times per day or almost constantly. Teens now interact with AI tools in ways once reserved for peers, tutors, or entertainment apps.

3. ChatGPT dominates the youth AI landscape.

ChatGPT is used by 59% of teens—nearly triple the rate of the next closest competitors, Gemini (23%) and Meta AI (20%). Tools like Claude, Copilot, and Character.ai remain niche but show important socio-economic variation.

4. Major racial, economic, and age-based disparities shape how teens engage with social media and AI.

  • Black and Hispanic teens report far higher chatbot use (approx. 70%) than White teens (58%) and are more likely to be online almost constantly.

  • Higher-income teens use ChatGPT at higher rates (62%) than lower-income peers (52%), while lower-income teens rely more on Character.ai.

  • Older teens (15–17) use virtually every platform more intensively and are significantly more likely to use chatbots daily.

These gaps have meaningful implications for learning equity, online risks, and digital well-being.

5. Platform loyalties are shifting—but not uniformly.

The decade-long decline of Facebook and X continues, while WhatsApp is quietly rising among teens (24%, up from 17% in 2022). Meanwhile, YouTube, TikTok, and Instagram remain durable and culturally central.

Most Surprising Findings

1. AI has reached teens faster than smartphones once did.

The fact that two-thirds of teens now use chatbots—and almost a third use them daily—shows an unprecedented adoption curve. No digital technology besides YouTube and TikTok has spread among teens this quickly.

2. Black and Hispanic teens report dramatically higher “almost constant” internet use.

More than half of Black and Hispanic teens report being online almost constantly, compared to just 27% of White teens. This finding suggests a deep structural digital reliance that could intersect with disparities in algorithmic exposure and online risk.

3. ChatGPT is far more dominant than industry attention suggests.

While AI vendors emphasize model competition, only ChatGPT has achieved widespread teen awareness and use. Gemini, Meta AI, and Claude remain far behind, despite massive distribution channels.

4. Teens from lower-income households disproportionately use Character.ai.

The report reveals that Character.ai use is double among lower-income teens compared to wealthier teens (14% vs. 7%)—a noteworthy signal that conversational companion-style AI may play a deeper social or emotional role in resource-constrained environments.

Most Controversial Findings

1. “Almost constant” usage is rising—and AI is amplifying it.

The fusion of algorithmic feeds with conversational agents appears to be intensifying attention capture. This fuels concerns about persuasive design, mental-health impacts, and potential AI-driven dependencies.

2. Racial and economic disparities in AI adoption raise equity questions.

Higher adoption among Black and Hispanic teens could indicate:

  • greater openness to new technology,

  • heavier reliance on digital tools for learning and communication,

  • or growing exposure to unregulated, unmonitored AI environments.

Without careful oversight, these gaps may translate into differential harm or exploitation.

3. The growing use of chatbots as daily teen companions.

Teens are not simply using chatbots for school help—they increasingly use them several times a day, signaling potential emotional and social uses that the study does not deeply explore but that may be occurring at scale.

Most Valuable Findings

1. Clear demographic and socio-economic patterns.

For policymakers, educators, and AI firms, the report offers a critical understanding of who uses which platforms and how intensely. This granularity is essential for risk mitigation and responsible product design.

2. First large-scale measurement of teen chatbot adoption.

This establishes a baseline for future research—and reveals that chatbots are no longer a niche tool for advanced students but part of everyday digital life.

3. Platform-specific insights for AI ecosystem strategy.

The dominance of ChatGPT among teens suggests early brand loyalty and habit formation. For regulators and educators, it signals where content governance and safety efforts should prioritize attention.

4. Validation of the methodology.

The report’s rigorous IRB-approved approach, parent–teen dyad design, probability sampling, and weighting methods strengthen the reliability of its conclusions and help contextualize risks.

Recommendations for AI Makers

1. Build teen-specific safety architectures.

Given the scale of teen engagement, AI companies should implement:

  • age-appropriate safeguards,

  • guardrails against emotional dependency,

  • limits on hallucinations in academic help,

  • toxicity and bias mitigation tuned to teen conversational patterns.

2. Design for equity—especially for Black, Hispanic, and lower-income teens.

Developers should:

  • tailor features to avoid reinforcing algorithmic biases,

  • consider differential risk exposure,

  • ensure that lower-income teens using companion AI tools receive safe, non-exploitative experiences.

3. Strengthen transparency and literacy.

Teens should receive:

  • clear explanations when they are speaking with AI,

  • guidance on data usage and privacy,

  • educational prompts that reinforce critical thinking.

AI firms should also partner with educators to develop responsible use curricula.

4. Protect teens from overuse and dependency.

Given the reported “almost constant” usage levels, the following should be standard:

  • time-use nudges,

  • optional usage dashboards,

  • well-being reminders,

  • friction points for extended conversations.

5. Build teen-appropriate identity controls.

Teens engage with AI for exploration and self-expression. This makes them vulnerable to:

  • manipulation,

  • impersonation,

  • harmful role-playing dynamics.

AI systems should include:

  • strong identity verification for adult-themed content,

  • constraints on parasocial relationship formation,

  • and detection of harmful behavioral loops.

6. Engage with parents—without disempowering teens.

Because the survey included parent–teen dyads, AI companies should recognize the family dynamic:

  • provide optional parent dashboards,

  • encourage shared norms,

  • but avoid heavy-handed monitoring that could push teens toward unsafe, unregulated tools.

7. Prioritize research access for scholars and youth safety experts.

Pew’s own expert review process included academia and youth-safety specialists. AI makers should similarly open controlled pathways for researchers to study teen use patterns and risk profiles.

Conclusion

The 2025 Pew report reveals a youth digital environment in which AI chatbots have become as culturally and educationally embedded as social media once was. Teens are adopting these tools rapidly and using them intensely—a trajectory with profound implications for safety, equity, mental health, and the future of learning. For AI makers, the message is clear: teen-centric design is not optional; it is a governance, ethical, and business imperative. By embedding safety, equity, transparency, and well-being into the core of AI systems, developers can help shape a digital ecosystem that empowers teens rather than exploiting their vulnerabilities.