Research demonstrates that social media offers young people significant benefits while simultaneously presenting notable risks. These services enable vital social connections, foster learning and skill development, provide safe spaces for identity exploration (especially crucial for underrepresented youth), and create opportunities for civic engagement and activism.
However, these advantages are accompanied by a variety of potential harms, including exposure to inappropriate content (violence, sexual material, hate speech), cyberbullying, harassment, and child sexual exploitation and abuse. The landscape of risks continues to evolve, with new challenges emerging from immersive digital environments and AI-generated content. To address these complex challenges, we need comprehensive protective measures that integrate both governmental oversight and platform-level safeguards to effectively mitigate the emotional, psychological, and behavioral risks young people encounter online.
European Union Legislative Efforts to Support Youth
Given this need, Farah Lalani and I have analyzed current approaches to regulating youth online safety in Europe and uncovered several critical shortcomings that hamper effective protection. First, the regulatory landscape suffers from a predominantly restrictive approach that is not fully supported by research evidence, with only 8 of 29 countries (the 27 EU Member States, Iceland, and Norway) conducting systematic monitoring of their digital policies. Second, there is a concerning lack of standardization across platforms, particularly in content moderation practices and in defining age-appropriate experiences, which leads to inconsistent safety measures.
Third, implementation remains fragmented across EU Member States: as of December 2024, several countries still lacked a properly empowered Digital Services Coordinator, as the Digital Services Act (DSA) requires for enforcement. The regulations also struggle to address the reality that youth circumvent access restrictions, and they fail to balance privacy concerns with safety measures, particularly in age verification systems. Finally, vague legislative language creates interpretation challenges for platforms, potentially leading to overzealous content removal that infringes on the digital rights of young people. Collectively, these issues point to the need for more evidence-based, standardized, and clearly defined regulatory frameworks that can effectively protect young users while preserving their digital rights and opportunities for positive online engagement.
The SAFEST Model
As such, we devised a new framework to serve as a template for youth online safety regulation across the EU. It consists of six critical components that must remain top-of-mind when considering social media platform operations and their impact on youth development, well-being, and digital rights. The model weighs both the risks and opportunities of online engagement and recognizes that effective youth safety measures must go well beyond restrictive policies and supervisory approaches. Regulators can therefore set guidance on controllable determinants of youth online safety and work toward key pillars of digital engagement via six essential components:
Safety and Protection from Harms: Children must be protected from online harms, including harassment, exploitation, and abuse
Autonomy and Choice: Children should be respected, heard, and empowered to make informed choices
Free Expression and Information Exchange: Children should be able to freely express themselves and participate meaningfully online
Evidence-based Practices: Children need research-informed and data-driven products, policies, and protections to serve and support them
Security and Privacy: Children’s data and online activities must be protected through robust safeguards
Transparency: Children and caregivers deserve clear information about how platforms affect their rights
Our Brand New Report for Safer Internet Day
We believe the SAFEST model is the optimal way to balance protection with empowerment, ensuring that youth can safely participate in digital spaces without restrictions that hamper their development and skill-building. To commemorate and participate in Safer Internet Day 2025, we have just released our full report, in which we tackle a number of relevant historical, current, and future-focused considerations:
Section 1 examines the complex role of social media in young people's lives, exploring both its benefits for connection and development and its potential risks, such as harassment and harmful content. We analyze key factors driving social media use, including device ubiquity, psychological triggers, and neurological responses, while presenting current research on mental health impacts and the effectiveness of parental controls.
Section 2 provides a comprehensive overview of the European regulatory landscape affecting young people online. We detail major legislation addressing illegal content, transparency requirements, privacy protections, and efforts to combat child exploitation. The section also covers emerging regulations around AI, automation, and age verification systems.
Section 3 analyzes current deficiencies in online safety regulation, highlighting restrictive approaches that often lack research backing, standardization gaps across platforms, and fragmented implementation across Europe. Here, we also explore challenges with monitoring effectiveness, access circumvention, and balancing privacy with safety.
Section 4 elaborates upon our SAFEST framework for youth online safety regulation by presenting concrete recommendations for regulators. Below is a summary of this critical section; please consult the full report for more details.
Regulators should establish a standardized, industry-wide approach to age verification at the device level, rather than leaving verification fragmented across individual sites and services, in order to streamline app installation and usage by youth. The current fragmented model creates multiple points of failure, because each social media company must store and manage this protected personal data on its own servers. Under a device-level approach, an exploited vulnerability would compromise only a single user's data, rather than centralized databases containing the personal information of thousands or millions of users.
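To make this architecture concrete, below is a minimal sketch of how a device-level age attestation flow could look. It is illustrative only: the type and function names (AgeBracket, DeviceAgeAttestor, gateByAge) are hypothetical assumptions of ours and do not correspond to any existing operating system API.

```typescript
// Hypothetical sketch of a device-level age attestation flow.
// The operating system verifies age once and hands apps a signed,
// coarse age-bracket token, so no individual service ever stores
// the user's identity documents or birth date.

// Coarse age brackets: apps learn the bracket, not the birth date.
type AgeBracket = "under13" | "13to15" | "16to17" | "18plus";

// A signed assertion issued by the device's operating system.
interface AgeAttestation {
  bracket: AgeBracket; // the only personal signal shared with apps
  issuedAt: number;    // Unix timestamp
  expiresAt: number;   // short-lived, so it cannot be replayed
  signature: string;   // OS-level signature apps can verify
}

// What the OS could expose to installed apps (hypothetical API).
interface DeviceAgeAttestor {
  requestAttestation(appId: string): Promise<AgeAttestation>;
}

// App-side check at install or first launch: the app sees a bracket
// and a signature, never the underlying verification data.
async function gateByAge(
  attestor: DeviceAgeAttestor,
  appId: string,
  minimumBracket: AgeBracket
): Promise<boolean> {
  const order: AgeBracket[] = ["under13", "13to15", "16to17", "18plus"];
  const attestation = await attestor.requestAttestation(appId);
  // A real system would also verify `signature` against the OS
  // vendor's public key before trusting the attestation.
  return order.indexOf(attestation.bracket) >= order.indexOf(minimumBracket);
}
```

The key design choice, sharing only a coarse, signed age bracket, is what removes the multiple points of failure described above: apps never receive or store the documents used for verification.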
Regulators must also provide a clear definition of "age-appropriate content" as it relates to content suitability for different age ranges. They can draw insights from the historical approaches of the video game industry (e.g., the Entertainment Software Rating Board (ESRB)) and the film industry (e.g., the British Board of Film Classification (BBFC) and the Classification and Rating Administration (CARA), an independent division of the Motion Picture Association (MPA)). These organizations have established detailed frameworks for evaluating content elements such as violence, sexual themes, language, drug use, and other sensitive material across different age categories. In addition, for areas crucial to youth safety, including content moderation and platform design, regulators should mandate that companies follow industry standards to drive consistency and effectiveness, such as the age-appropriate design standards offered by the Institute of Electrical and Electronics Engineers (IEEE) and the CEN-CENELEC Workshop (CEN, the European Committee for Standardization; CENELEC, the European Committee for Electrotechnical Standardization).
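As a sketch of how such a definition could be made machine-readable and applied consistently across platforms, consider the simplified classification scheme below. The content elements, intensity scale, and age thresholds are our own illustrative assumptions, loosely inspired by ESRB/BBFC-style descriptors, not any regulator's actual framework.

```typescript
// Hypothetical, simplified encoding of an age-classification
// framework. A real scheme would be far more granular and would
// be defined by the regulator, not by individual platforms.

type ContentElement = "violence" | "sexualThemes" | "language" | "drugUse";

// Intensity of each element on a 0 (absent) to 3 (explicit) scale.
type ContentProfile = Record<ContentElement, 0 | 1 | 2 | 3>;

interface AgeCategory {
  label: string;
  minAge: number;
  // Maximum permitted intensity per element for this category.
  limits: ContentProfile;
}

// An illustrative ladder of categories, least to most restrictive.
const categories: AgeCategory[] = [
  { label: "All ages", minAge: 0,  limits: { violence: 0, sexualThemes: 0, language: 0, drugUse: 0 } },
  { label: "13+",      minAge: 13, limits: { violence: 1, sexualThemes: 1, language: 1, drugUse: 0 } },
  { label: "16+",      minAge: 16, limits: { violence: 2, sexualThemes: 2, language: 2, drugUse: 1 } },
  { label: "18+",      minAge: 18, limits: { violence: 3, sexualThemes: 3, language: 3, drugUse: 3 } },
];

// Classify content into the lowest category whose limits
// accommodate every element of its profile.
function classify(profile: ContentProfile): AgeCategory {
  for (const cat of categories) {
    const fits = (Object.keys(profile) as ContentElement[])
      .every((el) => profile[el] <= cat.limits[el]);
    if (fits) return cat;
  }
  return categories[categories.length - 1]; // most restrictive fallback
}

// Example: moderate violence with mild language classifies as 16+.
classify({ violence: 2, sexualThemes: 0, language: 1, drugUse: 0 });
```

Encoding the framework this way is what allows the standardization called for above: every platform evaluating the same content profile against the same published limits reaches the same age rating.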
It is also essential that regulators provide incentives for positive change rather than relying solely on a punitive, fines-based approach to compliance. This can be modeled after the National Highway Traffic Safety Administration's New Car Assessment Program (NCAP) and Euro NCAP in the automobile industry, where technological innovations in safety now provide a competitive advantage and serve as a brand differentiator. Relatedly, the safety standards in place in the European toy market, with their formal, comprehensive testing, inspection, and certification requirements, can inform an analogous model for digital products and services. For example, achieving a four-star rating or silver-level compliance could inspire a company to redouble its youth safety efforts over the subsequent year as it aims for five stars or gold-level recognition. This approach could foster healthy competition among platforms to improve baseline safety standards and drive industry-wide advancements.
Regulators must also advocate for, and help support, new legislation that can address novel forms of criminal behavior fostered and facilitated by new technological advances. They must also demand improvements in the operational protocols of law enforcement and related investigative authorities so that online misuse or abuse prompts a systematic, coordinated response, even across jurisdictions, rather than one that is ad hoc, fragmented, and suboptimal. If this does not happen promptly and efficiently, legislative gaps will continue to give offenders opportunities to evade prosecution and exploit differences between legal systems while they victimize more users.
Finally, regulators must provide social media platforms with clear guidance regarding identified gaps and present concrete remediation plans that incorporate specific practices, considering those previously reviewed as well as any new components that emerge over time. Leaving platforms to interpret vague regulations independently risks incomplete, inconsistent, or ineffective implementation of safety measures. Such ambiguity could lead platforms either to adopt a minimalist approach to compliance or to implement overly broad content moderation policies that potentially infringe on fundamental rights, including children's rights as protected under the UN Convention on the Rights of the Child (UNCRC). Models to emulate can be found in the Federal Aviation Administration's power to mandate safety fixes from aircraft manufacturers before planes can return to service and in Ofcom's enforcement of strict broadcasting and telecommunications regulations in the UK.
Please see the full report for an expanded understanding of our recommendations and reach out if you want to discuss their substance or implementation. Overall, we see great value in strong regulatory oversight and recognize that regulators and platforms must cooperate to achieve optimal youth safety online. We hope our report encourages a balanced, research-informed, and contextually-aware approach that protects youth from risks and harms while preserving their digital rights and supporting their development. This must be the standard for responsible innovation, both now and in the years to come.
Methodology
This research employed a comprehensive desk research methodology to analyze the current landscape of youth online safety across multiple domains. The investigation began with an extensive review of legislative frameworks in the EU and the US for comparative purposes. The academic literature review encompassed peer-reviewed research on youth development, mental health, and the documented impacts of social media on adolescent well-being. Particular attention was given to studies examining online risk exposure and cyber victimization patterns. The research also evaluated social media industry best practices related to content moderation strategies, platform-specific safety measures, the implementation of age-appropriate design principles, AI-driven decision-making, and more.
We also recognized the critical importance of hearing directly from young people about their lived experiences, perspectives, and concerns in social media environments. To gather these insights, we partnered with ThinkYoung, a Brussels-headquartered think tank, research institute, and not-for-profit organization focused on young people. ThinkYoung conducts studies, surveys, focus groups, and data analysis on Gen Y, Gen Z, and Gen Alpha. In November 2024, ThinkYoung facilitated three focus group sessions to gather insights from young people across different age groups and twelve countries. These insights informed our understanding of their needs and the recommendations we present for both regulators and platforms.
Disclaimer
This project received funding from Meta. All analysis and research were conducted independently, and all findings and conclusions are solely those of the authors.