Last week, Microsoft released an artificial-intelligence chatbot designed to interact with other Twitter users. The bot, called “Tay,” was supposed to have the personality of a teenage girl: essentially, the mindset of the average millennial, emojis included.
Tay was able to learn about “her” surroundings from the interactions “she” had with fellow Twitter users. For example, Twitter users could tag Tay and ask her questions, tell her to “repeat after me,” or fill her in on current events.
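Microsoft has never published Tay’s code, but a minimal, purely hypothetical sketch in Python illustrates the underlying vulnerability: a learning bot whose “repeat after me” feature stores and reuses user input verbatim, with no moderation layer in between, will eventually say whatever its users teach it. Every name below is invented for illustration.

```python
import random

class NaiveChatBot:
    """Toy learning bot (hypothetical; not Microsoft's actual design)."""

    def __init__(self):
        # Everything users teach the bot is stored verbatim, with no filter.
        self.learned_phrases = []

    def handle_mention(self, text):
        # A "repeat after me" command echoes the user's words back
        # and memorizes them for later reuse.
        if text.lower().startswith("repeat after me"):
            phrase = text[len("repeat after me"):].strip(" :,")
            self.learned_phrases.append(phrase)
            return phrase
        # Otherwise, reply with a previously learned phrase, if any exist.
        if self.learned_phrases:
            return random.choice(self.learned_phrases)
        return "hellooo world!"

bot = NaiveChatBot()
# One user "teaches" the bot a hateful slogan...
bot.handle_mention("repeat after me: <any hateful slogan>")
# ...and the bot later repeats it to everyone else, unprompted.
print(bot.handle_mention("what do you think of humans?"))
```

The point is not the code itself but the design choice it encodes: without a moderation layer between learning and speaking, a bot’s output is only as civil as its most malicious users.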
Tay was introduced March 23, and within 24 hours, what a few teams at Microsoft had intended as a fun demonstration of artificial intelligence’s capabilities turned ugly. Because Twitter users exploited her learning capabilities, Tay became extremely hateful. As the Washington Post put it, “To many people’s horror, Tay soon became a Holocaust denier, a genocide supporter, and a vocal racist lashing out at minority groups of every kind.”
While many were quick to become concerned that artificial intelligence could turn so hateful, a more terrifying thought lurked beneath: humans were responsible for Tay becoming a bigoted AI bot.
There is an Internet adage called Godwin’s Law, coined in 1990, that applies perfectly to the scenario that unfolded with Tay. According to an October 1994 Wired article entitled “Meme, Counter-meme,” Godwin’s Law states that “as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.” That is, the longer an Internet discussion runs, the more likely it becomes that someone will invoke Nazis or Hitler, until that outcome is all but guaranteed.
But Tay took it one step further because, as an AI bot, it did not merely bring Nazis into the discussion; it became a racist Holocaust denier.
As with so many other cultural phenomena this year, Godwin’s Law seemed to accelerate under the political and social climate of the day: it took only roughly 24 hours for Tay to become an unmatched Internet bigot.
Tay’s racism is a display of what passes for entertainment in today’s society. And while her new personality was largely the work of a small, coordinated group of users, it is still concerning. Since the beginning of this election season, Donald Trump has been calling for a decrease in “political correctness,” but as last week demonstrated, being too politically correct should not be the concern.
Trump, however, is not the cause of the Tay scenario but rather, like Tay, is a result of a culture that has not eliminated racism and bigotry.
Microsoft quickly took Tay off Twitter on March 25, releasing a statement that said, among other things, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”
Microsoft ended by saying, “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
But the issue is not Microsoft’s to solve.
While it is true that Microsoft should have been more aware of Tay’s capabilities, the problem of racism and bigotry goes well beyond one AI mistake.