Last weekend, Facebook released the results of a psychological experiment it performed and came under heavy criticism for crossing what many people felt was an ethical line.
In 2012, Facebook conducted research to determine whether it could affect users’ emotions by censoring content in their news feeds, prompting them to post more positive or negative material themselves. It applied an algorithm to the accounts of more than 689,000 users that omitted posts containing words associated with either positive or negative emotions from their news feeds.
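The published study describes this mechanism only in broad strokes: posts were reportedly classified as positive or negative using word lists from the LIWC text-analysis software, and each qualifying post had some chance of being omitted from a given feed load. As a rough illustration, here is a minimal Python sketch of that kind of word-list filtering; the word lists, the names contains_any and filter_feed, and the omit_prob parameter are illustrative assumptions on my part, not Facebook’s actual code.

```python
import random

# Illustrative word lists. The actual study reportedly classified posts
# using the LIWC2007 dictionaries, which contain thousands of entries.
POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful", "lonely"}

def contains_any(post, words):
    """True if the post contains at least one word from the given list."""
    tokens = {t.strip(".,!?").lower() for t in post.split()}
    return not tokens.isdisjoint(words)

def filter_feed(posts, suppress, omit_prob=0.5, rng=None):
    """Return the feed with each post containing a suppressed emotion word
    dropped with probability omit_prob ('suppress' is 'positive' or
    'negative')."""
    rng = rng or random.Random()
    words = POSITIVE_WORDS if suppress == "positive" else NEGATIVE_WORDS
    return [p for p in posts
            if not (contains_any(p, words) and rng.random() < omit_prob)]

feed = [
    "Feeling great about the new job!",
    "Such a sad, lonely day.",
    "Grocery run later.",
]
# With this fixed seed, the negative post is omitted; the rest pass through.
print(filter_feed(feed, suppress="negative", omit_prob=0.9,
                  rng=random.Random(1)))
```

Even a toy version makes the stakes concrete: a handful of lines of code quietly decides which of your friends’ emotions ever reach you.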
The common complaint that prompted the study, at least according to analysis by the Huffington Post, was that going on Facebook and seeing all the great and wonderful things other people are doing makes users feel worse about their own lives. The possibility certainly seemed worth investigating for Facebook, which feared it could lose users driven away by their friends’ negativity.
Adam Kramer, the data scientist who led the study, believes its findings disproved this. His report shows that users posted more positively when negative content was censored out and more negatively when positive content was; exposure to friends’ positivity made people more positive, not less.
Here’s the problem, though: those users were selected without their knowledge or consent. Is it worth manipulating users’ emotions simply to collect data?
The masses on social media didn’t believe so. Many argued that Facebook’s practices reeked of moral ambiguity and betrayed an indifference to the well-being of its users. Kramer assured concerned parties that the effect on users was minimal.
Behind the ethical problem of Facebook’s right to toy with our emotions loom two even more important questions. What is Facebook trying to gain, and how?
The answer to the first is likely financial stability. How it intends to achieve it is the greater mystery. Facebook’s policies and business ventures have changed dramatically over the years.
An online interactive graphic created by IBM developer Matt McKeon shows changes in the social-media giant’s privacy policy and, correspondingly, how many people can easily view your information online. It’s interesting; a slider lets you view changes from year to year, and I encourage you to take a look. Early in 2005, only a user’s profile picture, name, gender, and networks were visible to all Facebook users. By April 2010, the only information not shared with the entire Internet by default was a user’s contact info and birthday. Now, in 2014, as stated on page three of Facebook’s privacy policy, features such as your friends list, the pages you like, and the networks you belong to are automatically public. These defaults are non-negotiable and cannot be overridden with privacy settings.
Facebook is getting people to accept dramatic change with minimal opposition the cunning way. Gradually.
Furthermore, according to an online Wall Street Journal article, Facebook spent $2.8 million in the first quarter of 2014 lobbying Congress on issues including privacy legislation, data and cyber security, and federal technology policy. Just like dozens of businesses before it, Facebook spends large sums of money to bend the rules of privacy. Its new experiment marks an interesting growth in its ability to handle information: not just selling information but manipulating it, or at the very least attempting to on a massive scale.
If you change the people and change the laws, you change the country. Facebook is actively, and aggressively, shaping the United States to suit its preferred conditions. Because most of the change happens in the fine print, it takes active and alert people to ensure the country isn’t manipulated by overly bold enterprises. Though, as millions of people continue to join Facebook and this writer continues to use it, that seems like quite the task.
Change should not be blindly feared, but it should never be blindly accepted.