In April 2025, Prince Harry and Meghan, the Duke and Duchess of Sussex, spotlighted the issue of youth online safety by unveiling the Lost Screen Memorial in New York City. This temporary installation, organized by their Archewell Foundation, featured 50 illuminated lightboxes shaped like smartphones. Each displayed the lock screen photo of a child whose life was cut short by some type of online harm (please visit the digital version here). These deeply personal images were provided by bereaved parents from the Parents’ Network, a community established by the Foundation to support families affected by social media-related tragedies and to channel their grief into advocacy and reform. I have had the privilege of getting to know some of the moms and dads of these families, in person or through virtual conversations, and have learned their stories. They simply want a better future for youth, and they do not want anyone to experience the pain they know.
The purpose of the Lost Screen Memorial, from my perspective, was to remind technology companies and lawmakers that child safety must be prioritized in substantive ways. Against this backdrop, I am convinced we are at an inflection point. For the last couple of years, I have been neck-deep in law and policy issues in this space, and wanted to take some time to discuss an important approach that continues to gain traction: approaching trust and safety online through a children’s rights framework. But what do we actually mean by “children’s rights”? It’s not just about letting kids do whatever they want online. As a parent and an educator, I don’t want that, and you likely do not either. Rather, it’s about recognizing that children are individuals with their own needs and entitlements, including the right to access information, to learn from online resources, and to express themselves freely. At the same time, these rights come with the understanding that the healthy development of children must take priority. That means children should have the right to feel safe online and to interact in digital spaces that are designed with their safety in mind.
Translating this into practice means building systems of protection that are informed by the best available research and that actually work to reduce online risks and harms. Ideally, these systems do not just prevent negative outcomes, but also actively support children’s mental health and well-being. And importantly, protections for kids should not be imposed in a vacuum. Rather, they need to be informed by the real experiences and perspectives of children themselves. When we listen to what children say about their online lives, we get a clearer picture of what works, what doesn’t work, and what they need to thrive online. This is why more platforms are starting to assess the impact of their products and policies on children’s rights as a way to build better, safer, and more empowering online experiences for young people (e.g., see the Wikimedia Foundation’s Human Rights Impact Assessment).
The Digital Rights of a Child That Deserve Attention
If you’re familiar with organizational or corporate security, you’ve likely encountered traditional risk-based assessments focused on mitigating specific dangers through alarms, surveillance, ID checks, and other physical or technical safeguards. These typically prioritize preventing legal liability, reputational harm, or data breaches. A Child Rights Impact Assessment (CRIA), however, operates differently: it starts from the premise that children are rights-holders entitled to protections and agency under international frameworks. While the United States signed the UN Convention on the Rights of the Child (UNCRC) in 1995, it remains the only UN member state not to ratify the treaty, meaning it isn’t legally bound by its provisions. Despite this, Trust and Safety professionals globally – including many in the U.S. – still consider principles from the UNCRC and its General Comment No. 25, which interprets children’s rights in digital contexts. These frameworks address critical issues such as those below, based on the CO:RE 4Cs framework:
1. Content Exposure Risks
Exposure to violent or sexual content
Disinformation and hate speech
Promotion of self-harm or suicide
Radicalization/indoctrination content
Age-inappropriate advertising
2. Interpersonal Interaction Risks
Cyberbullying and harassment
Sexual exploitation or grooming
Online predators
Gender-based violence
3. Behavioral and Developmental Risks
Compulsive use patterns
Loss of autonomy
Body dysmorphia
Imitation of risky or harmful behaviors
Sleep disruption
4. Data and Commercial Exploitation Risks
Privacy violations and data misuse
Profiling/predictive analytics on minors
Manipulative marketing and loot boxes
Commercialization of childhood data
Unfair terms in user agreements
5. Systemic and Structural Risks
Digital divide and access barriers
Algorithmic discrimination and bias
Over-censorship of legitimate expression
Inadequate accessibility for disabilities
Lack of child participation in design
6. Design and Platform Practice Risks
Deceptive interface patterns
Ineffective age verification systems
Poor parental control implementations
Real-time location sharing defaults
Ephemeral content risks
7. Well-being and Mental Health Risks
Anxiety
Depression
Eating disorders
Fear of missing out (FOMO)
Emotional desensitization to violence
8. Legal and Redress Gaps
Inaccessible reporting mechanisms
Slow content moderation response
Insufficient digital citizenship and digital literacy resources
Insufficient legal remedies for harms
Cross-jurisdictional enforcement challenges
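For teams that want to operationalize a taxonomy like this, one practical step is encoding it as a shared vocabulary that moderation queues and analytics jobs can reference consistently. The following is a minimal Python sketch under that assumption; the ChildRiskCategory names mirror the eight groupings above, while the Incident record and its fields are hypothetical illustrations rather than any platform’s actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class ChildRiskCategory(Enum):
    """Risk categories mirroring the eight groupings listed above."""
    CONTENT_EXPOSURE = "content_exposure"
    INTERPERSONAL_INTERACTION = "interpersonal_interaction"
    BEHAVIORAL_DEVELOPMENTAL = "behavioral_developmental"
    DATA_COMMERCIAL_EXPLOITATION = "data_commercial_exploitation"
    SYSTEMIC_STRUCTURAL = "systemic_structural"
    DESIGN_PLATFORM_PRACTICE = "design_platform_practice"
    WELLBEING_MENTAL_HEALTH = "wellbeing_mental_health"
    LEGAL_REDRESS_GAPS = "legal_redress_gaps"


@dataclass
class Incident:
    """A hypothetical incident record tagged against the shared taxonomy."""
    report_id: str
    description: str
    categories: list[ChildRiskCategory] = field(default_factory=list)


# Example: a grooming report implicates both interpersonal-interaction
# risks and (if reporting flows fail) legal and redress gaps.
incident = Incident(
    report_id="r-1042",
    description="Adult account repeatedly messaging a 12-year-old user",
    categories=[
        ChildRiskCategory.INTERPERSONAL_INTERACTION,
        ChildRiskCategory.LEGAL_REDRESS_GAPS,
    ],
)
```

A shared enum like this also makes downstream reporting more honest, since every team counts the same harms the same way.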
Also, the principles in the UNCRC and General Comment No. 25 intersect with broader expectations for businesses. Across all United Nations (UN) member states, the UN Guiding Principles on Business and Human Rights (UNGPs) and the Children’s Rights and Business Principles (CRBPs) establish a clear responsibility for all companies – including U.S.-based social media platforms – to identify and mitigate human rights impacts tied to their operations. The unique vulnerabilities of children make this duty particularly critical, as rights violations during childhood can have lifelong developmental consequences. CRIAs bridge these frameworks and serve as practical tools to evaluate how the products, policies, and services of platforms align with both child rights protections and corporate human rights responsibilities.
How Online Platforms Can Conduct CRIAs
By conducting a CRIA, platforms can gauge the extent to which each identified child rights concern is adequately managed within their existing processes. Furthermore, these assessments facilitate the development of a more integrated and holistic strategy for safeguarding child rights throughout the organization’s structure and activities. Ultimately, a CRIA not only measures and highlights areas of concern but also provides a foundation for implementing robust child protection initiatives. Moreover, it reinforces a corporate ethos of responsibility towards children’s well-being across all organizational contexts. Some platforms are beginning to see the value in this (as explained in this paper from colleagues Sonia Livingstone and Kruakae Pothong), but it needs to become an industry-wide practice.
There are a number of templates online that can help guide the implementation of CRIAs. I appreciate one from the European Network of Ombudspersons for Children and have customized it for social media and gaming platforms below with 13 specific steps across 6 stages.
A Template for Platform CRIAs
Stage 1. Screening and Scoping
Explain which platform feature, algorithm, or service update requires assessment
Explain which child rights and principles could be impacted
Explain which age groups (e.g., under 13 vs. 13-17) and marginalized communities need special consideration
Stage 2. Evidence Collection
Explain which platform metrics (abuse reports, usage patterns) reveal child-specific risks
Explain how input will be gathered
Anonymous surveys with parental consent protocols
Focus groups using age-appropriate platforms
Co-design workshops with youth advisory boards
Other methodologies
Explain the independent research that validates your child safety claims, and detail any limitations
Stage 3. Child Participatory Design
Explain how consultation methods comply with regulatory requirements for:
Informed assent
Accessibility accommodations
Privacy protections
Parental involvement
Inclusivity
Risk prevention
Stage 4. Impact Analysis
Explain the impact on children’s rights and health
Child autonomy online
Child safety, for both general and high-priority issues
Child development (physiological, emotional, and psychological health)
Explain what the design safeguards are intended to prevent
Commercial exploitation through targeted ads
Unintended data collection from minors
Exclusion of low-bandwidth users
Other concerns
Stage 5. Recommendations and Implementation
Explain how the specific changes address identified risks
Prioritization of child well-being over engagement metrics
Balance of parental controls with the capacities of the platform
Consideration of algorithmic bias
Stage 6. Transparency and Ongoing Review
Explain the child-friendly formats that will be used to communicate assessment outcomes
Explain tracking of longitudinal impacts through usage metrics
Detail how assessments will be updated with emerging developments and technologies
Of course, feel free to modify the CRIA template above as you see fit; you know your company better than I do.
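Relatedly, some teams may find it useful to make the template auditable by encoding each “Explain…” prompt as a trackable item with an owner, a status, and links to evidence. Here is a minimal sketch of that idea in Python, using Stage 1 as an example; the field names and status values are assumptions for illustration, not part of the original template.

```python
from dataclasses import dataclass, field


@dataclass
class CRIAStep:
    """One 'Explain...' prompt from the template, tracked to completion."""
    prompt: str
    owner: str = "unassigned"
    status: str = "not_started"  # assumed values: not_started / in_progress / complete
    evidence: list[str] = field(default_factory=list)  # links to docs, data, research


# Stage 1 (Screening and Scoping) encoded as trackable steps.
stage_1 = [
    CRIAStep("Which platform feature, algorithm, or service update requires assessment?"),
    CRIAStep("Which child rights and principles could be impacted?"),
    CRIAStep("Which age groups and marginalized communities need special consideration?"),
]


def stage_progress(steps: list[CRIAStep]) -> float:
    """Return the fraction of steps in a stage marked complete."""
    return sum(step.status == "complete" for step in steps) / len(steps)


print(stage_progress(stage_1))  # 0.0 until steps are completed
```

Even a lightweight structure like this helps a CRIA survive personnel changes, since each prompt carries its own paper trail.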
First Understand the Risks and Harms
At the start of a CRIA, platforms must first understand the risks, harms, interventions, laws, and research at the intersection of youth online safety and child rights. Of course, the digital landscape, legislative backdrop, and the context of youth development are continually changing. As such, care must be taken to stay informed about the latest developments in these areas. Platform trust and safety personnel must review all relevant findings to determine which risks and harms are most significant in terms of severity, scope of impact across the user base, potential and velocity of growth or virality, frequency of occurrence, and potential for long-term psychological or social consequences.
The severity of risks and harms should be assessed based on their potential impact on individual users and the broader community. The scope of impact across the user base helps identify issues affecting many users or specific vulnerable groups. The potential for virality is crucial in understanding how quickly a risk or harm can spread and escalate on the platform. Assessing the frequency of occurrence helps trust and safety personnel allocate resources accordingly. Finally, the potential for long-term psychological or social consequences is a critical consideration, given that youth professionals are increasingly examining interpersonal harm (offline and online) through a trauma-informed lens (check out our new paper!). That is, they are rightfully viewing and taking into account the lasting negative effects of online risks and harms on young users.
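To make these five criteria comparable across very different risk types, one option is a simple weighted rubric. The Python sketch below is a hypothetical illustration, not a method drawn from the paper cited above: the criterion weights and example scores are invented placeholders that a real team would calibrate against its own data and expert review.

```python
# Assumed weights for the five prioritization criteria discussed above;
# these are illustrative placeholders, not empirically derived values.
WEIGHTS = {
    "severity": 0.30,
    "scope": 0.20,
    "virality": 0.20,
    "frequency": 0.15,
    "long_term_consequences": 0.15,
}


def priority_score(scores: dict[str, int]) -> float:
    """Combine per-criterion reviewer scores (1-5) into one weighted priority."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)


# Example: hypothetical reviewer scores for two risk types.
grooming_outreach = {"severity": 5, "scope": 2, "virality": 2,
                     "frequency": 3, "long_term_consequences": 5}
age_inappropriate_ads = {"severity": 2, "scope": 4, "virality": 3,
                         "frequency": 4, "long_term_consequences": 2}

print(priority_score(grooming_outreach))       # higher score, assess first
print(priority_score(age_inappropriate_ads))
```

The point of a rubric like this is not false precision; it is forcing explicit, reviewable judgments about why one harm is triaged ahead of another.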
Additionally, platforms have data points related to safety-related actions by users, such as blocking, muting, reporting content or users, and privacy setting choices. Analyzing these data should provide insights into the types and rates of harms reported to the platform over time, and how they are distributed across different demographic groupings. This analysis can help identify trends in online risks and challenges faced by youth users, allowing platforms to develop more targeted and effective safety measures. The data can also shed light on the usage rates of existing safety controls offered by the platform, which may suggest areas for improvement or the development of new safety features.
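To illustrate the kind of analysis just described, here is a minimal sketch assuming a hypothetical event log of safety actions with invented column names (week, user_age_band, action); a real pipeline would query this from the platform’s data warehouse and include many more dimensions.

```python
import pandas as pd

# Hypothetical event log of safety actions; in practice this would be
# queried from the platform's data warehouse with many more fields.
events = pd.DataFrame({
    "week": ["2025-W14", "2025-W14", "2025-W15", "2025-W15", "2025-W15"],
    "user_age_band": ["under_13", "13_17", "under_13", "13_17", "13_17"],
    "action": ["report", "block", "report", "report", "mute"],
})

# Count each safety action per age band and week to surface trends,
# e.g., a spike in reports from under-13 users after a feature launch.
trend = (
    events
    .groupby(["week", "user_age_band", "action"])
    .size()
    .rename("count")
    .reset_index()
)
print(trend)
```

Normalizing these counts by active users in each age band (not shown here) would help distinguish genuine increases in harm from simple growth in usage.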
Center the Voice of Youth!
As referenced earlier, a truly effective CRIA must prioritize the perspectives and experiences of young people themselves. Platforms should move beyond adult assumptions and aggregate data by directly engaging with youth through mechanisms such as youth advisory councils, participatory workshops, and regular consultations. This engagement should be ongoing and meaningful, ensuring that young people have a genuine voice in shaping the policies and practices that affect them. Recent research in Europe shows that children value online safety tools that empower them and respect their agency, rather than simply restricting access or imposing controls. By listening to youth, platforms can design features and safeguards that are both effective and respectful of children’s rights to participation, privacy, and self-expression.
To accurately capture the diversity of youth experiences, platforms should draw on both quantitative data (e.g., usage patterns, incident reports, and behavioral analytics) as well as qualitative insights (e.g., interviews, focus groups, and open consultations with young users). Of course, it is essential to ensure that the voices of underrepresented groups – including children with disabilities, those from minority backgrounds, and those facing socio-economic barriers – are actively sought out and included.
Furthermore, platforms should provide transparent feedback to youth participants, demonstrating how their input has influenced decision-making and product design. This not only builds a better relationship between the platform and this important segment of its user base, but also fosters a sense of ownership and responsibility among young users.
There is a perception among some that social media companies treat children’s rights as an afterthought, or a box to tick. What we are seeing now is that society increasingly expects companies to demonstrate a genuine, sustained commitment to making children’s well-being a core, foundational element of their business practices. Even if this is being done to some extent, a CRIA can make those efforts much more intentional and visible. Not only will this better safeguard and support our youth, but it may also help foster a more collaborative relationship between platforms, governments, and regulators. Publishing CRIA findings and involving all stakeholders – including children, parents, educators, and civil society – in reviewing and refining policies should build trust and ensure that decisions are informed by those most affected.
If platforms use this process to proactively identify and address risks and harms, they can 1) genuinely improve the well-being of young users and set themselves apart as platforms where kids and teens can have safer, more positive experiences online, and 2) steer clear of public relations crises and costly regulatory penalties in the future. My hope is that CRIAs aren’t simply an exercise in compliance, but rather a powerful tool for rebuilding trust with parents, educators, and lawmakers – all of whom are increasingly looking for real evidence that platforms are prioritizing the needs and rights of young people.