Are Facebook and Twitter Combating the Problem?
British consulting firm Cambridge Analytica made waves by interfering in the 2016 presidential election, harvesting data from Facebook users under the guise of a “personality quiz”—you know, those silly quizzes that tell you whether you’re a Gryffindor or a Slytherin. Cambridge Analytica used that data to build “psychographic profiles” of users and their friends, then targeted them with “fake news,” directing “Defeat Crooked Hillary” advertisements at their profiles in an attempt to discourage Hillary Clinton supporters. According to Facebook COO Sheryl Sandberg, Facebook knew about this foreign interference in the election and the misuse of user information two and a half years ago—and did nothing.
The term “fake news” is now commonplace. In the age of social media, people are left wondering where—and how—to find factual, informed news. David Mikkelson, co-owner of Snopes.com, the major fact-checking site, told The Verge, “It kind of took weeks for things to go viral [in the past]—gave us plenty of time to look into them, write them up. Now somebody posts something outrageous on Facebook, and 20 minutes later it’s a headline on the New York Post. So obviously we have to be faster at what we do.” But the reality is that fact-checking takes time, and with high-speed internet and an average attention span of only eight seconds, championing the truth is becoming next to impossible.
These platforms have moved from “new and exciting” to harsh reality: access to all this instantaneous information has created the perfect storm in which misinformation can flourish. Fabricated stories are routinely shared as “news” with no checks or balances in place to stop them. Ev Williams, co-founder of Twitter and CEO at Medium, told The Verge, “When we built Twitter, we weren’t thinking about these things. We laid down fundamental architectures that had assumptions that didn’t account for bad behavior. And now we’re catching on to that.”
Margaret Gould Stewart, vice president of product design at Facebook, spoke with NPR to address these issues: “We’ve learned that we need to spend a lot of time thinking about what I call misuse cases—when people take tools meant to be used for good and do bad things with them. We need to spend more time thinking about what we should build, even though we might not be required to, and what we shouldn’t build, even if we’re technically allowed to.” That’s all well and good, but we know the difference between what big companies should do and what they actually do is often night and day.
One way Facebook has attempted to minimize viral misinformation is by relying on Facebook users themselves to flag posts suspected of being false or abusive. According to a 2016 Facebook post by Mark Zuckerberg, CEO of Facebook, the company uses these flags to help identify fake information. By the end of last year, however, Facebook admitted that the red flag icons for potentially fake stories weren’t working. Now the company is hoping that a “related articles” section will help separate fake news posts from the truth by giving readers access to more information so they can decide for themselves whether it’s accurate. Although we understand the idea, the possibility remains that someone will read one article, see three “related articles” that back the original idea, and assume the original article must be factual. Even so, Alex Hardiman, head of news products at Facebook, is optimistic: “I don’t think it’s a losing battle, but I think it’s a really hard one.”
Facebook’s stance is that users should have the freedom to express themselves and seek out information on their own. As Zuckerberg asserted in his personal Facebook post, “The problems here are complex, both technically and philosophically. We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.”
Facebook is also testing other ideas, like a “More Info” button that would allow users to see more information about a content publisher. Beyond this, the side effects of millions of unchecked, unrestricted accounts run deeper than the fake news epidemic. People dedicated to promoting hatred against a specific race, ethnicity, national origin, religious affiliation, sexual orientation, gender or gender identity, disability or disease have built whole followings through pages promoting that hatred.
A simple search in Facebook’s “Groups” tab using terms like “alt-right” or “white power” quickly reveals just how vast the problem is. There are even more alt-right pages operating under the guise of “Christianity,” like the “Christian Action Network,” which not only has over 30,000 followers—it’s also a verified account. Facebook has promised to double the size of its safety and security team to 20,000 employees this year, including content reviewers, but with over two billion Facebook users, that seems like a negligible attempt at fixing the problem.
Twitter, on the other hand, is taking a vastly different approach. While Facebook is concerned with giving users the freedom to search for the truth and sift through highly biased, often hateful content, censoring or deleting only what it deems offensive, Twitter has begun removing content completely and deleting accounts. In December of last year, Twitter broadened its “hateful conduct policy,” and well-known alt-right organizations and users were deleted from the platform entirely. Unlike Facebook, Twitter also covers hateful imagery under its “sensitive media policy,” which includes “logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin.” Twitter has likewise begun de-verifying white nationalist and far-right accounts in response to user outrage.
The First Amendment protects our freedom of speech and expression, allowing us to speak our truth without fear of government retaliation or censorship—but how far is too far? This is where opinions differ. Steve Huffman, CEO of Reddit, stated at a SXSW panel that “Reddit’s role is to be a platform for debate. To let the ideas, and let these conversations, emerge and play out. That’s a really important part of the process in any political conversation.” Eddy Cue, Apple’s senior vice president of Internet software and services, disagrees: “People draw lines, and you’ve got to decide where you draw the line. We do think free speech is important, but we don’t think white supremacist or hate speech is important speech that ought to be out there. Free speech is important, but that doesn’t mean it’s everything.”
It may all come down to accountability. When Twitter users spoke up, the guidelines changed. Facebook is finally feeling the heat and says it intends to make changes, but the World Wide Web is simply too vast for any one entity to police alone. Each user must do their part: seek out the sources behind news stories before reposting them, report hateful and offensive content, and take as much responsibility for the spread of viral fake news and hate speech as they expect from large media outlets. Like every other meaningful movement happening right now, it takes action to create change.
The Social Network Numbers
It seems like everyone is on social media these days—even our grandparents! Here’s a breakdown of the numbers.
87 million: Users whose data was misused by Cambridge Analytica
2.2 billion: Facebook users as of 2018
328 million: Monthly active Twitter users, per the platform’s 2017 estimate
800 million: Instagram users as of 2018
80: Percentage of people who believe fake news is “a problem for democracy”
Editor’s note: As of the publication of this article, these facts are up-to-date; however, as with all things, that is subject to change. We appreciate your patience.