A couple of years ago, I explored the concerning link between bullying and trauma, and summarized existing research showing that repeated harassment by peers at school should be considered an Adverse Childhood Experience with the potential for long-term traumatic impacts on healthy youth development. At the time, I suggested that cyberbullying – with all of its unique characteristics – likely mirrored what was being seen when studying school-based bullying. My call to action then was for specific research to answer this question, and I encouraged youth-serving practitioners to approach certain forms of targeted online aggression with the same urgency as suicidal ideation: proactively, compassionately, and with an understanding of their potential to cause lasting harm.
Today, I’m proud to share findings from our new study published in BMC Public Health, which empirically confirm that online aggression is strongly linked to PTSD symptoms in teens. We also uncovered a critical insight for educators, social media platforms, and policymakers: forms perceived as “minor” – such as exclusion and gossip – can actually inflict trauma on youth in ways comparable to direct threats or hate speech. In other words, traumatic outcomes from online aggression do not depend on the type of harm. Any behavior – whether mild, moderate, or severe – can cause significant psychological consequences depending on the person, the context, and the absence of an appropriate and supportive response.
What the Data Showed among US Youth Victimized Online
Our just-published study surveyed 2,697 U.S. adolescents between the ages of 13 and 17, all of whom had experienced cyberbullying in the prior month. Among these youth, we examined 18 distinct cyberbullying behaviors and found prevalence rates that were higher than I personally expected. Threats emerged as a significant issue, with 38% of teens reporting they’d received threatening texts or direct messages and 34% encountering online threats. Identity-based attacks also occurred, as 29% were targeted with sexually explicit comments or gestures, 26% were harassed because of their race, and 16% were attacked because of their religion. Privacy violations proved fairly common as well: 42% of teens dealt with unwanted contact after asking someone to stop, while stalking (26%), having personal information shared without consent (24%), and impersonation (23%) rounded out this category.
Additionally, forms of indirect harassment dominated the data: 56% endured mean online comments, 53% had rumors spread about them, and 50% were publicly humiliated. Smaller, but still concerning, numbers of youth in our sample faced hurtful photos (28%), videos (19%), or dedicated hate pages (13%) targeting them. Exclusion also occurred frequently, as 53% were intentionally left out of group chats while 35% saw peers rally others to gang up on them online.
As referenced above, the most surprising finding was that our analysis revealed equivalent trauma impacts across distinct cyberbullying subtypes. Social exclusion demonstrated comparable harm to direct threats. Privacy-based attacks affected youth as much as identity-based harm. Each type of online aggression was significantly associated with scores on a validated 9-item trauma measure, and explained between 18% and 25% of the variation in traumatic outcomes.
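To make that last statistic concrete, the short sketch below (synthetic data only, not our actual dataset or model) shows how “variance explained” is computed for a single predictor: the squared correlation between exposure to one aggression subtype and a trauma score.

```python
# Illustrative only: synthetic data, not the study's dataset or analysis.
# Demonstrates what "explaining ~20% of the variance" in trauma scores means
# for a single binary predictor (experienced a given subtype or not).
import numpy as np

rng = np.random.default_rng(42)
n = 2697                                   # sample size matching the study

exposure = rng.integers(0, 2, size=n)      # hypothetical: 1 = experienced the subtype
trauma = 1.0 * exposure + rng.normal(0, 1, size=n)   # hypothetical standardized trauma score

# For a one-predictor linear model, R^2 equals the squared correlation
r = np.corrcoef(exposure, trauma)[0, 1]
print(f"Variance explained (R^2): {r**2:.2f}")       # roughly 0.20 with these settings
```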
Implications for Social Media Companies
Working with a number of social media companies over the years, I can confirm that many organizations adopt structured severity-based frameworks as an operational necessity. These systems enable efficient triage and prioritization of user reports, and help their Trust and Safety teams manage high-volume caseloads while focusing resources on the most serious online harms first. For example, TikTok has detailed that severity of harm and expected reach strongly inform its decision making. Snap has a Severe Harm category, and such content prompts an immediate disabling of the account and possible referral to law enforcement. Meta indicates that it prioritizes high-severity content with the potential for offline harm, as well as viral content that is spreading quickly.
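For readers less familiar with how such frameworks operate in practice, the sketch below shows one simplified way a severity-and-reach triage queue might order incoming reports. The category names, weights, and scoring are hypothetical assumptions for illustration; they do not reflect any platform’s actual policies or systems.

```python
# A minimal, hypothetical sketch of severity-and-reach triage for user reports.
# Category names, weights, and scoring are illustrative assumptions only,
# not any platform's actual policy or code.
import heapq
from dataclasses import dataclass, field

SEVERITY_WEIGHT = {"severe_harm": 3, "harassment": 2, "exclusion_or_gossip": 1}

@dataclass(order=True)
class Report:
    priority: float                        # lower value = reviewed sooner (min-heap)
    report_id: str = field(compare=False)
    category: str = field(compare=False)

def enqueue(queue: list, report_id: str, category: str, expected_reach: int) -> None:
    # Higher severity and wider expected reach push a report toward the front.
    score = SEVERITY_WEIGHT.get(category, 1) * (1 + expected_reach / 10_000)
    heapq.heappush(queue, Report(priority=-score, report_id=report_id, category=category))

queue: list = []
enqueue(queue, "r1", "exclusion_or_gossip", expected_reach=50)
enqueue(queue, "r2", "severe_harm", expected_reach=200)

while queue:
    report = heapq.heappop(queue)
    print(report.report_id, report.category)   # the "severe_harm" report surfaces first
```

The ordering itself is reasonable; the caution from our findings is that whatever a schema like this ranks last still deserves a meaningful response, because “low-severity” does not mean low-impact.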
This is a good thing; there is nothing wrong with this approach, and their actioning of problematic content should be influenced by dimensions such as urgency, scale, public versus private nature, and the unique vulnerability of the victim. One concern, though, comes through loud and clear in our new findings: the human brain does not categorize pain by “severity.” Instead, it treats social and physical threats as a continuous spectrum of distress rather than as categorically distinct tiers. Being rejected from one’s peer group online, or being threatened with bodily injury by a stranger online, can affect an individual in similarly profound ways because they activate the same neural pathways.
Platforms must resist deprioritizing moderation for content they algorithmically deem “low-severity,” and recognize that the psychological impact on youth is sometimes difficult to fit into historically embedded hierarchies of harm.
Another concern worth mentioning is the lack of consensus among social media users about what constitutes severe harm. For example, research has identified significant variation in what people perceived as more or less severe across eight different countries. Platforms must therefore continue to do their work with the understanding that harm severity emerges at the intersection of content, context, and consequence – rather than through static classification alone. Considering this holistically can help ensure that no child’s painful experience online is trivialized or deprioritized because the evaluation schema used to assess harm is flawed.
Implications for Schools
In our paper, we assert that schools must fundamentally shift how they conceptualize cyberbullying and other online harms by centering trauma-informed practices in what they do. Drawing on the data, we emphasize that even behaviors such as exclusion from social media group chats or passive-aggressive meme sharing are linked to trauma-specific outcomes. This means training educators to recognize subtle trauma indicators – such as a student’s sudden academic withdrawal or abnormal phone use – rather than waiting for overt threats to manifest.
We also advise schools to view all types of online aggression experienced by students as potentially trauma-inducing, and to have emotional safety protocols and mindfulness interventions in place to mitigate long-term psychological damage. This won’t be relevant for every incident, but understanding the potential harms at the outset will allow schools to cover their bases and ensure they are doing what they can for victimized youth.
Furthermore, our research challenges schools to place less weight on severity continuums when evaluating and responding to problematic student behavior. While such continuums can serve as a shortcut to certain remedies, they may not be the best way to frame every harm that occurs. We also argue for multidisciplinary Crisis Intervention teams that bring together key administrators, mental health associates or liaisons, and other relevant staff members. This is especially important when dealing with vulnerable youth who are navigating risks in both their online and offline (e.g., at school and in their community) environments.
Final Thoughts
In the main, our research serves to remind all youth-serving adults that they must not get caught up in personal perceptions of what leads to trauma and what does not. It is not possible to fully know how a certain online event might affect a child within a certain context. Picture a teen who spends hours refreshing a group chat, only to realize their friends created a new one without them. Compare that situation to another teen who receives a violent threat via DM. While the threat is, of course, universally recognized as problematic, both scenarios can trigger PTSD symptoms in young people. As such, stakeholders must focus on the young person’s lived experience before determining how best to respond.