In recent landmark decisions, US courts have held major technology platforms accountable for failing to protect children from harm, signaling a critical shift in how digital child safety is approached. Meta and Google, two of the largest social media and video service providers, were ordered to pay substantial fines for failing to adequately safeguard young users. These rulings emphasize that protecting children's digital well-being is not only a matter of filtering harmful content but also of the fundamental architecture and design of online platforms. Leading experts, including Professors Sandra Cortesi and Urs Gasser from the Technical University of Munich (TUM), argue in the journal Science that more nuanced strategies are needed, ones that extend beyond bans and focus on empowering children aged 13 and older through technology, policy, and education.
At the heart of these developments lies the realization that the risk environment for young users is inextricably linked to platform design choices. Features that seem innocuous can foster addictive behaviors or expose children to exploitation. The courts' focus on platform design reflects an evolving understanding of responsibility in digital spaces, as well as a call for technology providers to build in safeguards that prioritize safety and autonomy from the ground up. According to Prof. Urs Gasser, punitive fines are a starting point but insufficient on their own; platforms must also remove inherently addictive features and strengthen protections against abuse by adults to create safer experiences.
While several countries have reacted by implementing age-based bans on social media access, this approach is met with skepticism by the international expert group. They contend that outright prohibitions oversimplify the problem and risk alienating youths who are keen to engage with contemporary digital culture. Instead, the experts advocate for a regulatory framework that mandates child-friendly platform design, enabling younger users to cultivate digital literacy and autonomy. This multi-dimensional approach, they argue, stands a better chance of fostering lasting positive digital habits, rather than merely erecting barriers to access.
Central to their vision is the use of artificial intelligence as a tool for enhancing digital safety. Appropriately designed AI can serve as a real-time assistant tailored to individual users' behaviors and needs, especially for adolescents. Prof. Sandra Cortesi highlights that AI systems can monitor patterns such as repeated engagement with potentially harmful content or risky interactions, and then intervene with context-sensitive prompts or alternative suggestions. For instance, an AI could nudge a teenager toward diverse viewpoints when it detects narrowly focused content consumption, or issue a caution before a revealing image is shared, thus functioning as both a protective and an educational agent.
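The pattern-based nudge described above can be illustrated with a minimal sketch. The class name, thresholds, and topic labels below are hypothetical assumptions, not part of any platform's actual system; the idea is simply to track a sliding window of recently viewed topics and suggest broader content once a single topic dominates.

```python
from collections import Counter, deque

class ViewingNudger:
    """Toy sketch of a pattern-based nudge (all names and thresholds
    are illustrative assumptions, not a real platform API)."""

    def __init__(self, window=20, dominance=0.7):
        self.window = deque(maxlen=window)  # sliding window of recent topics
        self.dominance = dominance          # share of one topic that triggers a nudge

    def record(self, topic):
        """Record a viewed topic and return a nudge message, or None."""
        self.window.append(topic)
        if len(self.window) < self.window.maxlen:
            return None  # not enough history yet
        topic, count = Counter(self.window).most_common(1)[0]
        if count / len(self.window) >= self.dominance:
            return f"You've mostly viewed '{topic}' lately. Explore other perspectives?"
        return None
```

A real system would classify content into topics with a model rather than receive labels directly, but the intervention logic, watch for dominance and respond with a gentle prompt rather than a block, would be similar in spirit.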
An important consideration raised by the experts pertains to privacy and data security. AI interventions must be executed locally on devices without transmitting personal information to external operators, thereby ensuring confidentiality and trust. This privacy-preserving approach is paramount for balancing protective measures with respect for user autonomy and data rights. Moreover, the experts stress the value of involving families in these digital safety strategies by encouraging discussions about media consumption preferences—akin to defining a “digital diet plan.” This collaborative approach boosts trust and self-efficacy, equipping young users to make informed decisions rather than merely complying with top-down controls.
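The on-device principle the experts describe can be sketched in a few lines. The function names and the blocklist-style check below are illustrative assumptions, not a real safety API; the point is the data flow: raw content is processed locally, and only an aggregate verdict, never the content itself, would leave the device.

```python
def check_locally(message: str, risky_terms: set[str]) -> bool:
    # Hypothetical on-device check: the raw message is read only here
    # and never transmitted; the caller sees just a boolean verdict.
    words = message.lower().split()
    return any(term in words for term in risky_terms)

def weekly_summary(verdicts: list[bool]) -> dict:
    # Only aggregate counts would be surfaced, e.g. in a family
    # "digital diet" conversation, keeping the content itself private.
    return {"flagged": sum(verdicts), "total": len(verdicts)}
```

In practice the local check would be a model rather than a term list, but the architectural constraint is the same: inference happens on the device, and only coarse, non-identifying summaries cross the network boundary.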
Inevitably, no regulatory or technological solution can completely eliminate exposure to disturbing content or digital violence, especially as young people navigate complex social dynamics online. The group underscores the importance of accessible, anonymous reporting mechanisms coupled with immediate support for affected youths. Swift responses from empathetic peers or professionals are crucial to prevent feelings of shame and isolation from compounding harm. Some countries have pioneered peer-led support networks, where trained young volunteers provide empathetic guidance with professional oversight. The panel advocates expanding these services globally as a fundamental layer of the safety ecosystem.
Beyond platform- and family-level interventions, educational institutions play a pivotal role in digitally empowering children and youths. Schools have the potential to reshape digital literacy by actively involving learners in the design of curricula that address the realities of the modern internet. This participatory model reimagines education not as a unidirectional transfer of knowledge but as a collaborative process in which students contribute expertise on technology use while educators offer social values and critical perspectives. As Prof. Sandra Cortesi explains, fostering these partnerships can transform the digital experience from a source of anxiety to one of agency and resilience.
This interdisciplinary approach leans heavily on bridging social science insights, technological innovation, and legal frameworks. The frontiers in digital child safety demand evidence-based policies that respect child rights and developmental science, while harnessing the potential of emerging technologies. The collective expertise from diverse fields converges to offer a vision of digital spaces that are not inherently hostile but deliberately designed to nurture well-being and ethical engagement among young people.
The initiative behind these recommendations, the Frontiers in Digital Child Safety project, mobilized over 40 international experts spanning social sciences, design, law, and psychology. Funded by Apple Inc. and coordinated by the TUM Think Tank along with Harvard University and the University of Zurich, this groundbreaking collaboration represents a comprehensive effort to translate scholarly evidence into actionable policy and technical standards. The goal is to catalyze systemic change in how digital environments are created, regulated, and experienced by children worldwide.
As the debate over digital child safety intensifies, the insights from this group offer a nuanced pathway forward. Respecting the complexity of young people’s digital lives, they emphasize solutions that empower rather than restrict, that blend human values with machine intelligence, and that involve all stakeholders from legislators to families to schools. This holistic framework aims to create a digital future where children can explore, learn, and connect safely, responsibly, and autonomously.
Ultimately, ensuring the digital safety of the next generation requires moving beyond simplistic bans or reactive measures. It calls for reimagining the very design principles that govern digital platforms, leveraging AI ethically, enhancing participatory education, and embedding accessible support mechanisms. This vision aligns with a child-centric ethos that protects rights while fostering agency, a balance essential for thriving in an increasingly digital world.
Subject of Research: People
Article Title: Digital child safety at the frontier: From evidence to action
News Publication Date: 2-Apr-2026
Web References: 10.1126/science.aec7804
Image Credits: Andreas Heddergott / TUM
Keywords: Digital child safety, social media regulation, platform design, artificial intelligence, digital literacy, child autonomy, privacy-preserving AI, adolescent well-being, peer support, participatory education, digital rights, technology policy

