New research creates methods to cut off hateful rhetoric on Reddit at the source
Three University of Iowa researchers have developed technology that flags subreddits, the discussion boards that make up the social site, at risk of spreading hateful rhetoric, allowing moderators to ban them before they turn hateful.
September 5, 2019
Several University of Iowa researchers are trying to combat hate speech as it continues to permeate the online platform Reddit, which has been accused in recent years of promoting rhetoric deemed harmful to discourse.
UI researchers Hussam Habib, Maaz Bin Musa, and Rishab Nithyanand have collaborated with Fareed Zaffar of the Lahore University of Management Sciences to develop a proactive-moderation strategy. The method is meant to help administrators moderate hate speech in undersupervised communities on online platforms such as Reddit.
“Essentially, we’ve been studying online communities and how they evolve as well as how we can improve moderation in those communities,” Musa said.
Content on Reddit is only partially moderated, the researchers said. Specific channels called subreddits have moderators who are often site users not employed by the company. Reddit does have community guidelines, the researchers said, but moderators are often unable to enforce them effectively, allowing hateful rhetoric to develop within a community.
In the academic paper “To Act or React?” the researchers said more than 138,000 active subreddits are currently online. According to the paper, Reddit has moved to ban discussion boards such as r/EuropeanNationalism, r/MillionDollarExtreme, and r/CringeAnarchy, and the company quarantined r/The_Donald earlier this year. Users of such forums had posted racist comments in the wake of mass shootings, as well as threats of violence toward underrepresented communities.
The researchers found that a subreddit often changes drastically between the time of its inception and the time a moderator takes action.
“We looked at what a subreddit is at the time of its birth and how it evolves over time,” Musa said. “We captured two aspects of a subreddit: the content that it talks about, and the users that get involved in it. Subreddits do evolve; they do not tend towards stability.”
The research team found trend predictors that can anticipate whether a subreddit will turn hateful.
“We saw that more of the benign subreddits can tend to move towards hateful speech and bigotry,” Musa said. “They start as totally benign subreddits that the moderators say are fine, but after a period of time they start evolving towards the content and user-base that already hateful subreddits have.”
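One way to picture the kind of user-base drift Musa describes is sketched below. This is a purely illustrative example, not the researchers’ method or data: the subreddit snapshots, the usernames, and the choice of Jaccard overlap as a similarity measure are all assumptions made for the sketch.

```python
# Hypothetical illustration: tracking how much a subreddit's active users
# overlap with the user pool of communities already known to be hateful.
# All names and sets below are invented for this sketch.

def jaccard_overlap(users_a: set, users_b: set) -> float:
    """Fraction of shared users between two communities (0 = disjoint, 1 = identical)."""
    if not users_a and not users_b:
        return 0.0
    return len(users_a & users_b) / len(users_a | users_b)

# Active commenters in a hypothetical subreddit at two points in its life.
users_at_launch = {"alice", "bob", "carol", "dan"}
users_one_year_later = {"bob", "dan", "eve", "mallory", "trent"}

# Combined user pool of subreddits that moderators already consider hateful.
hateful_community_users = {"eve", "mallory", "trent", "oscar"}

print(jaccard_overlap(users_at_launch, hateful_community_users))       # 0.0: no shared users yet
print(jaccard_overlap(users_one_year_later, hateful_community_users))  # 0.5: growing overlap
```

A rising overlap score across successive snapshots would be one sign that a once-benign community is evolving toward the user base of hateful ones.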
In response, the researchers built a machine-learning model to identify these predictors. They intend to share the program with moderators and Reddit administrators so they can eliminate a subreddit when it shows signs of trending toward hatred, before it reaches that point.
“We give it a set of features to assess, like number of comments, number of posts, number of users of that subreddit that contribute to other hateful subreddits,” Habib said.
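The snippet below is a minimal, hypothetical sketch of how features like the ones Habib lists could feed a risk classifier. The toy numbers, the labels, and the choice of a scikit-learn random forest are assumptions made for illustration; they are not the researchers’ actual model, features, or data.

```python
# Hypothetical sketch of a feature-based risk classifier; not the study's code.
from sklearn.ensemble import RandomForestClassifier

# Each row describes one young subreddit:
# [number of posts, number of comments, number of its users who also
#  contribute to subreddits already known to be hateful]
training_features = [
    [120, 950, 2],     # stayed benign
    [300, 4100, 1],    # stayed benign
    [80, 600, 35],     # later turned hateful
    [150, 2200, 60],   # later turned hateful
]
training_labels = [0, 0, 1, 1]  # 0 = remained benign, 1 = eventually turned hateful

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training_features, training_labels)

# Score a new, young subreddit; a high probability could flag it for
# moderator attention before hateful content takes hold.
new_subreddit = [[95, 700, 28]]
risk = model.predict_proba(new_subreddit)[0][1]
print(f"Estimated risk of turning hateful: {risk:.2f}")
```

A real system would be trained on far more subreddits and a richer feature set than this toy example.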
However, the researchers said banning a subreddit is not always as effective as users believe it to be.
“We found out that when a subreddit gets banned, the users of that subreddit will often just migrate to a different subreddit,” Habib said. “The language doesn’t actually change; it’s the same hateful rhetoric from the first subreddit on the new one.”
Reddit users are also familiar with this hateful content, even if they do not actively follow hateful boards, said UI junior Liv Harter, who frequents the site.
“I just notice that people take it from zero to 100 really fast,” Harter said. “A lot of people bring up politics randomly on there. I saw some horrifically racist comments at the bottom of a picture of an African American father and daughter.”
The researchers said their work aims to cut off bigoted speech on Reddit at the source rather than ban hatred after the fact.
“Moderators and administration act after a subreddit becomes hateful,” Musa said. “Banning a subreddit after it becomes hateful is not as effective as mitigating it at its start. That is what proactive moderation would do.”
Editor’s Note: In a previous version of this story, the last name of Liv Harter was spelled incorrectly. The DI regrets this error.