Is Facebook fulfilling its mission of bringing people together or is it splitting them apart?

Facebook’s mission is to “Give people the power to build community and bring the world closer together”. Yet the way Facebook employs machine learning algorithms narrows its users’ view of the world. How can Facebook improve?

Facebook’s mission is to “Give people the power to build community and bring the world closer together” [1]. To achieve this mission and connect more than 2 billion people around the world, Facebook employs machine learning algorithms to drive every aspect of its user experience, from ranking posts in the News Feed and deciding which ads to show to which users, to classifying photos and videos [2][3].

In particular, Facebook learns about its users’ interests and preferences by analyzing how they interact with content, other users, pages, and businesses on the platform. It then ranks the content in each News Feed, and it generates revenue by allowing advertisers to target users with the most relevant ads through an auction-based system. To do so, Facebook must make sense of vast quantities of mostly unstructured data. This is where deep learning (a subfield of machine learning) comes into play, with techniques that learn to classify raw text, images, and video without hand-crafted features. Textual analysis, facial recognition, and granularly targeted advertising are all examples of how deep learning supports Facebook’s mission [4].
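To make “textual analysis” concrete, below is a minimal sketch of a deep learning text classifier in PyTorch. It illustrates only the general technique: the vocabulary size, topic labels, and toy posts are hypothetical, and Facebook’s production models are of course far larger and trained on real engagement data.

```python
# A minimal sketch of deep-learning text classification, in the spirit of the
# "textual analysis" described above. NOT Facebook's actual system: the
# vocabulary, topic count, and toy posts below are all hypothetical.
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000   # hypothetical vocabulary of token ids
NUM_TOPICS = 5        # hypothetical interest categories (sports, politics, ...)

class TextClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # EmbeddingBag averages the token embeddings of a post into one vector
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, 64)
        self.fc = nn.Linear(64, NUM_TOPICS)

    def forward(self, token_ids, offsets):
        return self.fc(self.embed(token_ids, offsets))

model = TextClassifier()
# Two toy "posts", flattened into one tensor; offsets mark post boundaries
tokens = torch.tensor([1, 42, 7, 99, 3, 8])
offsets = torch.tensor([0, 3])        # post 1 = tokens[0:3], post 2 = tokens[3:]
logits = model(tokens, offsets)
print(logits.softmax(dim=1))          # per-post topic probabilities
```

Once trained on labeled posts, a model of this shape can tag every new piece of content with predicted topics, which then feed ranking and ad-targeting decisions.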

As of 2018, 1.5 billion users log into the platform each day. When Facebook launched in 2004, most of its content was text. With the introduction of images, audio, video, and other rich media such as 360-degree photos and videos, analyzing all this data has become an increasingly complex task. The quantity, variety, and complexity of the data require Facebook not only to master machine learning and artificial intelligence as internal core capabilities today, but also to invest heavily in advancing research in the field over the longer term. The Facebook Artificial Intelligence Research group (FAIR), led by neural-network pioneer Yann LeCun, is tasked with the longer-term academic problems surrounding AI, while the Applied Machine Learning group (AML), led by Joaquin Quiñonero Candela, is charged with integrating that research into Facebook’s products. As Mark Zuckerberg once put it, “One of our goals for the next five to 10 years, is to basically get better than human level at all of the primary human senses: vision, hearing, language, general cognition.” [5][6]

But is Facebook really getting better than humans, or is it absorbing and amplifying all of our biases?

Social media platforms such as Facebook have become one of the primary sources of news and information. A Pew Research Center survey found that as of 2017, 67% of Americans get at least some of their news on social media, up five percentage points from the year before and driven largely by increases among the older and less educated segments of the population. Specifically, 45% of Americans consume news on Facebook. [7]

Moreover, the way social media allows users to interact with content has turned these platforms into the new “public sphere”: a place where people freely discuss key social and political issues. Facebook, however, is a company, not a public space where all content carries equal weight. To provide relevant content and ads to its users and to maximize their engagement, Facebook’s deep learning algorithms dictate what each user sees by serving billions of personalized News Feeds. More specifically, Facebook built its recommender system on collaborative filtering, serving content based on the preferences of like-minded people with similar tastes and socio-demographic backgrounds. For news, this means that each user is more likely to see the content that best aligns with his or her own views. Exacerbated by the psychological tendency of human beings to seek out and interpret information in ways that confirm pre-existing beliefs (known as “confirmation bias”), this effectively turns Facebook into the largest “echo chamber” in the world. [8][9]
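To illustrate the intuition, here is a toy, self-contained sketch of collaborative filtering over a hypothetical user-item interaction matrix. Facebook’s production recommender [9] uses distributed matrix factorization over billions of users and items; this sketch shows only the core idea of scoring unseen items by the engagement of similar users.

```python
# A toy illustration of collaborative filtering: recommend items to a user
# based on what similar users engaged with. The interaction matrix below is
# hypothetical; Facebook's real system [9] operates at vastly larger scale.
import numpy as np

# Rows = users, columns = content items; 1 = user engaged with the item
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 0, 0, 0],   # user 1 (similar tastes to user 0)
    [0, 0, 1, 1, 0],   # user 2 (different tastes)
])

def recommend(user, k=1):
    # Cosine similarity between this user and every other user
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                            # ignore self-similarity
    # Score items by similarity-weighted engagement of the other users
    scores = sims @ interactions.astype(float)
    scores[interactions[user] > 0] = -np.inf    # exclude items already seen
    return np.argsort(scores)[::-1][:k]

print(recommend(user=1))  # -> [4]: user 0, who is similar, engaged with item 4
```

The feedback loop is visible even in this toy: user 1 is recommended exactly what their most similar neighbor already engaged with, so similar users converge on similar feeds.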

In this way, Facebook’s use of machine learning algorithms narrows its users’ view of the world. And when the plurality of opinion that underpins democracy erodes, society polarizes toward extreme points of view, making common ground harder to reach.

Finally, when a for-profit company, rather than a regulated news organization, becomes a primary channel for distributing third-party information with little or no oversight, the possibilities for abuse are nearly endless. Indeed, recent events suggest that social media, and Facebook especially, may have played a role in key political events such as the 2016 US presidential election by expanding the reach of fake news. [10]

Looking ahead, should Facebook take responsibility for detecting and eliminating fake news, and for exposing its users to dissenting views of the world, thereby promoting democracy? Can artificial intelligence help? If so, how?

(790 words)

References

[1] Facebook. 2018. “Facebook – Resources.” https://investor.fb.com/resources/default.aspx [Accessed Nov. 2018].

[2] Statista. 2018. “Number of Monthly Active Facebook Users Worldwide.” https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/ [Accessed 11 Nov. 2018].

[3] Facebook Research. 2018. “Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective.” https://research.fb.com/publications/applied-machine-learning-at-facebook-a-datacenter-infrastructure-perspective/ [Accessed Nov. 2018].

[4] Marr, Bernard. 2016. “4 Mind-Blowing Ways Facebook Uses Artificial Intelligence.” Forbes. https://www.forbes.com/sites/bernardmarr/2016/12/29/4-amazing-ways-facebook-uses-deep-learning-to-learn-everything-about-you/#3ff2e8acccbf.

[5] WIRED. 2017. “Inside Facebook’s AI Machine.” https://www.wired.com/2017/02/inside-facebooks-ai-machine/.

[6] Fast Company. 2015. “Inside Mark Zuckerberg’s Bold Plan for the Future of Facebook.” https://www.fastcompany.com/3052885/mark-zuckerberg-facebook.

[7] Pew Research Center’s Journalism Project. 2017. “News Use Across Social Media Platforms 2017.” http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/.

[8] The Conversation. 2018. “Explainer: How Facebook Has Become the World’s Largest Echo Chamber.” https://theconversation.com/explainer-how-facebook-has-become-the-worlds-largest-echo-chamber-91024.

[9] Facebook Code. 2015. “Recommending Items to More Than a Billion People.” https://code.fb.com/core-data/recommending-items-to-more-than-a-billion-people/.

[10] Guess, Andrew, Brendan Nyhan, and Jason Reifler. 2018. “Selective Exposure to Misinformation: Evidence from the Consumption of Fake News during the 2016 U.S. Presidential Campaign.” http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf.


Student comments on “Is Facebook fulfilling its mission of bringing people together or is it splitting them apart?”

  1. Tom,

    Great piece on a very relevant and controversial topic. The issues that Facebook and other social media companies face are essentially uncharted territory, so it is very difficult to provide an airtight argument backed by historical lessons or academic research. Instead, we are faced with a more “learn as we go” process, especially when it comes to the do’s and don’ts of machine learning and its influence on social media.
    Your last question about managing dissenting views is intriguing, and I also read the article “Explainer: How Facebook Has Become The World’s Largest Echo Chamber” to get a sense of how and why one might feel that Facebook has more of a social responsibility. One part that struck me was that many of Facebook’s algorithms run off the individual’s personal decision making. For instance, if I don’t engage with someone who holds an opinion different from mine, that person’s contributions get removed from my newsfeed. If I engage with someone I do agree with, I contribute to building my own echo chamber. I agree this could be problematic, but the algorithm simply responded to my own actions. If I am a person who enjoys debate, my newsfeed might be quite the opposite of an echo chamber. In other words, we create the echo chamber, not necessarily the algorithms. This leads me to my final point: having a private company try to manage the direction of discourse, even if it’s for a “supposedly” good purpose, is a little concerning to me for a number of reasons. There is just too much room for error and subjectivity on behalf of the private company. Instead, I think it comes down more to personal accountability and choice. If we are concerned about echo chambers, then perhaps we need to change our own habits rather than having a private company try to change them for us.

  2. I imagine it would be very challenging to use artificial intelligence and machine learning to detect when content qualifies as fake news. To effectively identify fake news, Facebook’s algorithm would need to determine the veracity of claims posted on Facebook and then make a judgment call about whether the inaccuracy warranted deleting the post. This would be especially challenging in discussions where the facts are not mutually agreed upon, and I fear a policy calling for the elimination of “false information” would lead to stifled conversation.

    I do, however, think it would be possible for Facebook to regulate the creation of fake accounts using machine learning. Robust machine learning algorithms could verify the identity of each individual creating a Facebook account, reducing the likelihood that one individual creates multiple accounts for the explicit purpose of widely spreading false information.

  3. To give my opinion on the question you pose at the end, I do think Facebook should try to detect and eliminate fake news. From a business perspective, Facebook will lose the trust of its users if it builds a reputation for promoting fake news, being divisive, and showing only extreme points of view. In addition, advertisers (the source of Facebook’s revenue) may not want to be associated with a platform that promotes extreme points of view and is seen as a detriment to society. While Facebook hasn’t experienced these consequences yet, the risk will increase as Facebook competes with other websites (e.g., Google Search, Amazon) and social media companies. I do think machine learning will play an important role in policing content on Facebook because of its advantages over humans in pattern recognition.
