Bright phone screens illuminate doomscrollers taking one last look at social media before bed. These moments are meant to decompress, but they can turn stressful depending on what shows up on the screen.
Over the past few months, college students have been shocked by the quantity of objectionable or violent videos across the platforms they frequent.
Objectionable posts typically contain explicit material, while violent content shows graphic imagery such as blood, but neither is something users would want to see without some form of content warning.
Sang Jung Kim, an assistant professor at the University of Iowa, notes that how noticeable this upward trend is depends on the user.
Within the last two months, following the murder of Iryna Zarutska, online discussion has shifted the definition of this type of content from “discomforting” to outright “violent.”
Zarutska was a refugee from Ukraine who was stabbed to death while on board a Charlotte, North Carolina, light rail train on Aug. 22.
Not long after her death, the train’s security footage from the time of the incident was released on X, formerly known as Twitter.
It showed the gruesome attack and the shock Zarutska felt in her final moments.
Nearly three weeks later, Chandra Mouli Nagamallaiah, a 50-year-old motel manager, was murdered at his motel in Dallas.
The footage was also captured by security cameras and quickly made its way to both X and Instagram. The video showed Nagamallaiah’s brutal death in front of his wife and son.
On the same day as Nagamallaiah’s death, Sept. 10, Charlie Kirk was killed.
Due to the large crowd gathered at Utah Valley University for his tour stop, plenty of phones captured what happened in detail.
The videos quickly spread to X without any content warning, meaning users were not prepared for the violence they would see while casually scrolling the app.
The rapid release of these videos, especially Kirk’s, which is still circulating, caused discomfort among casual scrollers.
Users, such as UI journalism professor Brian Ekdale, became uncomfortable with what they might see and took a break from social media.
“I’ll be honest, when Charlie Kirk was killed, I made a conscious decision to take a pause from social media. I did hear a story of someone who said they went on whatever social platform without the intention of watching the video, and it just showed up on their feed,” Ekdale said.
UI graduate student Akachukwu Ikefuama was also disturbed by his social media feed the day of Kirk’s death.
He decided to abstain from scrolling for the rest of the week after seeing different angles of the moment Kirk was shot.
“All of those things were really very traumatic; I don’t think I’ve ever had that experience before of going on Twitter [X] and seeing horrific videos of someone being shot,” Ikefuama said.
Javie Ssozi, a UI graduate student, discussed how most social media apps try to get any potentially triggering content either removed or at least covered with a warning.
But due to the amount of publicity around each of these three events, footage of all three deaths quickly circulated, exposing large numbers of people to the content unwillingly.
“In terms of the audience side, though, there has been research [showing] content warnings should not be too specific but also should not be too general. It should be just perfect,” Kim said. “There have been interesting findings that whenever the graphic content is blurred, then people would have more interest in watching that because they’re curious about that content.”
It turns out the content warning can be a tightrope walk. In a 2022 Psychology Today article, Renee Engeln discussed how content warnings on sensitive or graphic videos may entice viewers rather than ward them off.
A lot goes into the process of content moderation, as both human labor and artificial intelligence are involved, Ekdale said.
Social media platforms use algorithms to process information, scanning and categorizing routine posts.
However, according to Ekdale, these platforms are also looking for “objectionable” content.
There are also sometimes human content moderators who review content the algorithm considers objectionable.
A human can then bring their “human eyes” to make final decisions on whether to publish content or not, Ekdale said.
Although these systems have been around for some time now, there are still a few kinks in them, and content slips through the cracks.
“Sometimes when we think about AI, we imagine it’s an autonomous system right? It sort of is, but AI is trained by individuals,” Ssozi said. “If we as a culture enjoy all kinds of gross content, then we are teaching the systems we’re developing this content is okay.”
Content slips through the scan more frequently when there is an influx of the same video or shortly after the content has been posted.
This is why some platforms, like X, offer a system called “Community Notes.” Users on the app can offer insight into posts, reducing the spread of misinformation or adding context to content that might require it.
“There are not necessarily hired experts, but these folks are a knowledgeable user base, and they go through and identify content they think is problematic, false, misleading,” Ekdale said. “They can essentially put a label on it, and then there’s a system with which a consensus emerges around that label.”
Community Notes has been around since 2021 but was recently introduced on Instagram. In this process, users must apply before they can add their thoughts to others’ posts.
Aside from content moderation, simply seeing violent content can be an issue for most users to begin with. It can affect one’s mental health, and it also reflects society.
“It sort of reconfigures your mind to think about the gross world we live in. The implications of today’s politics, of division, and, for me as an international student, [this content] reminds me the world is not safe,” Ssozi said. “When you see events like this happening, it takes an emotional toll.”
Ikefuama also noted how these types of videos show that the public might not care for others when the content is used simply for clicks and likes.
“I think it does reflect our society — it’s a society driven by sensation and virality,” Ikefuama said. “People want to get viral at all costs, even if it means putting someone in bad situations.”
