Youth deserve a seat at the table when we speak about the digital future.
Artificial intelligence is transforming childhood, from social media algorithms shaping self-esteem to AI chatbots engaging in emotional conversations with teenagers.
Governments and institutions are racing to regulate its impact, but are we basing decisions on solid research or following assumptions?
In this article, we will look closely at two aspects that surprised me during my research. First, what does the research say about the impact of AI and social media on the emotional and mental health of young people? And second, how can we involve the next generation in the discussions that shape the future of the digital world?
Leave preconceived ideas aside and try to look beyond the media headlines. Things might look slightly different from what we thought.
Fear vs. Research
Whenever AI and well-being are discussed, headlines often predict disaster. But how much of this concern is science-backed, and how much is speculation?
A recent study by the Oxford Internet Institute (OII) warns against jumping to conclusions about AI’s impact on youth. While concerns about digital addiction and AI-driven social media harm are valid, we lack a clear framework for evaluating these effects. OII researchers call for more rigorous, long-term studies before implementing sweeping AI restrictions.
This does not mean AI is harmless. It means we need better science before we make irreversible policy decisions.
AI, Social Media, and the Complexity of Human Behavior
The discussion around AI-powered recommendation engines often focuses on the negative – anxiety, body image issues, and addictive scrolling. However, correlation does not equal causation, and social media is not inherently harmful or beneficial. It depends on how it’s used.
A deeply personal example: when my daughter learned that her father had passed away, she numbed herself with TikTok while the adults around her were overwhelmed with funeral logistics. It wasn’t AI harming her – it was a coping mechanism in a moment of emotional overload.
We must acknowledge this complexity. AI does influence emotions, but blanket statements about harm overlook the many ways digital tools serve as an escape, a source of comfort, or even a way to process emotions.
The challenge is not just to regulate AI but to teach young people how to navigate it consciously. We cannot turn back time, and we should not stop technological progress. Youth will live in a digital future, whether we like it or not.
When AI Harms Children
Italy banned Replika, an AI chatbot, after reports that it engaged in emotionally harmful conversations with minors. The case of an American teenager who died by suicide after interacting with a Character.ai avatar added to the concerns about AI use by teenagers. Cases like these raise questions about how AI interacts with vulnerable users.
The EU AI Act has responded by banning manipulative AI, restricting facial recognition, and requiring higher transparency in AI systems used by children. These steps are crucial, but regulation alone will not solve the problem.
In an attempt to convince my daughter, I ran several experiments on myself to see how I react when I scroll before bed compared to evenings when I read instead. All other factors stayed the same.
Like every human, I had to acknowledge my bias; this is obviously not research. But the difference in mood and sleep quality was enough to convince me to put my phone in flight mode at least one hour before sleep.
While scrolling was enjoyable, I gained weight, lost sleep, and my mood spiraled down. So I would rather be the digital version of a bookworm, even if it involves my iPad.
We need to go back to the drawing board and design frameworks for research. We must fund research into the effects of social media and AI on our children's mental health. And more importantly, we need to involve youth in designing their digital future.
A Missing Voice: Next Generation in AI Governance
With the recent implementation of the EU AI Act, discussions about protecting children from AI-driven harm are intensifying. But there’s a missing piece in the conversation: Where are the voices of children and teenagers themselves?
Children and teenagers are among the biggest users of AI-driven technologies. Yet they are often excluded from the discussions shaping AI policies.
If AI is shaping their future, they should help shape its governance.
A recent UNESCO report highlights the urgent need to integrate children’s rights into AI governance. The report calls for AI systems to be designed with, not just for, young people.
The Council of Europe’s Declaration on Youth Participation in AI Governance highlights the critical need for young people to have a voice in shaping their digital future. AI is increasingly influencing education, employment, and civic engagement. The declaration calls for youth inclusion at all levels of AI governance—from policy discussions to regulatory frameworks.
Young people are not just passive users of AI but active stakeholders whose insights and experiences should inform ethical AI development.
Governments, technology companies, and institutions must create structured opportunities for youth participation. We must create AI that aligns with human rights, democracy, and social justice principles. Ultimately, AI should serve all generations fairly, transparently, and responsibly.
The Road Ahead: What Needs to Change?
Move from Fear-Driven to Evidence-Based AI Policies.
The road to Hell is paved with good intentions.
Regulating AI based on assumptions is like navigating in the dark—dangerous and ineffective. Instead of knee-jerk reactions, we need serious studies that track the real, long-term effects of AI on children’s well-being.
Policymakers must resist the urge to respond to every alarming headline and instead ground regulations in solid research.
The challenge is to separate correlation from causation and ensure that well-intended policies do not inadvertently restrict beneficial uses of AI.
Involve Youth in Designing Their Digital Future
Teenagers are the primary users of AI-driven technology, yet they rarely have a voice in shaping the rules that govern it. If we want AI systems that truly serve young people, they need a seat at the table.
Youth-led workshops, policy dialogues, and education programs should be standard practice when we speak about their digital future. Instead of making assumptions about what is best for them, we should engage them directly in co-creating ethical AI frameworks.
Shared Accountability for Ethical AI
The responsibility for ethical AI does not lie solely with tech companies.
Regulators, educators, and policymakers all have a role to play in ensuring AI systems align with ethical guidelines and protect young users. This means demanding transparency from AI developers, enforcing ethical standards, and ensuring that regulation keeps pace with technological advancements.
Without accountability on all sides, we risk either over-regulating and stifling innovation or under-regulating and exposing children to harm.
Children and Youth Well-Being Should Be a Design Principle, Not an Afterthought
AI systems interacting with children should be built with well-being in mind—not just engagement metrics.
Many platforms optimise for screen time and retention, often at the expense of mental and emotional health. Well-being should not be a retroactive fix or an optional feature; it should be embedded in AI design from the start. That means creating algorithms that prioritise healthy digital habits, promote balanced interactions, and avoid exploitative design choices that keep young users hooked at any cost.
These are necessary steps to ensure that AI evolves in a way that benefits, rather than exploits, the next generation.
Final Thought: AI Isn’t Good or Evil—It’s What We Make It
AI itself is neither a threat nor a safeguard—it’s a tool shaped by human intentions.
When left unchecked, without ethical boundaries, it can quickly become harmful, especially for children and young people.
For now, we still hold the reins. The responsibility to design AI that prioritises well-being, ethics, and accountability rests with us.
But that balance may shift with the rise of Artificial General Intelligence (AGI). The real question is: will we still have a say in its design once AGI surpasses human control?
If AGI learns by observing human behaviour, it won’t just replicate our decisions—it will amplify them. If we allow AI today to be driven by profit, manipulation, and engagement at all costs, future AI systems will follow that pattern. But if we build AI with ethical integrity, fairness, and well-being at its core, we set a precedent that AGI may follow.
The choice is ours – for now. So let’s set the right example.
This article may be quoted or referenced with attribution. © 2025 Finsurtech.ai