CASE Forum

Facebook: How to Manage Data Responsibly?

Cambridge Analytica, fake news, unauthorized data sharing: the crises are piling up for Facebook. Is it time for the social media giant to rethink its responsibilities for managing user data?

Note: Forum open until September 21, 2018


In January 2018, Facebook CEO Mark Zuckerberg announced a major change to the social media platform he founded. His stated purpose? To help users "have more meaningful social interactions."

"The first changes you'll see will be in News Feed," he wrote, "where you can expect to see more from your friends, family and groups (and) less public content like posts from businesses, brands and media."

The explosion of public content can be traced back to a 2009 algorithm change, refined over subsequent years, that gave more weight to "popular" content -- posts with the most interactions and "engagement." The more popular the content, the higher it appeared in a user's News Feed. Many saw this as a move aimed at enticing advertisers.

Noting that "we built Facebook to help people stay connected and bring us closer together with the people that matter to us," Zuckerberg now said he was "changing the goal I give our product teams" to help "put friends and family at the core of the experience" because "research shows that strengthening our relationships improves our well-being and happiness."

Despite these heartfelt words, others had a less lofty interpretation. Many saw it as damage control in the face of mounting criticism. The rise of fake news, which spread like wildfire via the social media network, was increasingly suspected of influencing the 2016 U.S. presidential election. Though Zuckerberg had initially dismissed the idea as "crazy," by the time of his announcement he had admitted that he should have taken such claims seriously. By 2018, Facebook was making headlines for all the wrong reasons, coming under fire from users and regulators alike for its seemingly cavalier attitude toward data privacy concerns and its (mis)handling of controversial content.

In the Hot Seat
Data privacy concerns have been present since Facebook's inception in 2004. Its acquisitions of multiple tech companies over the years -- notably Instagram in 2012 and WhatsApp in 2014 -- have only fueled suspicion that user data was being cross-shared. Although Facebook insisted this wasn't possible, the European Commission fined the company $122 million for "providing incorrect and misleading information" about "the technical possibility of automatically matching Facebook and WhatsApp users' identities."

Users' lack of control over their personal information has long worried privacy watchdogs. Dutch and French regulators have each fined Facebook for failing to protect user data and for not adequately notifying users or obtaining their explicit consent regarding how their personal data would be used.

The straw that broke the camel's back was the news that the British consulting firm Cambridge Analytica had improperly harvested the personal data of up to 87 million Facebook users for political purposes. Exploiting a loophole that existed prior to 2014, a third-party app developer had gathered the data not only of Facebook users who had participated in its quiz, but also of those users' contacts. The app developer then gave that data to Cambridge Analytica, which used it to build psychographic profiles of voters.

When Facebook got wind of this in 2015, it suspended the developer and Cambridge Analytica from its platform and accepted their assurances that the data had been destroyed. Considering the case closed, Facebook didn't bother notifying users or regulators about the incident.

And then the story broke in March 2018. After a long silence, Zuckerberg finally made a public statement, apologizing for "a breach of trust between Facebook and the people who share their data with us" and promising to "make sure this doesn't happen again."

Facebook subsequently updated its third-party data-sharing policies and announced it would audit all the apps that had accessed data prior to 2014. Though welcome steps, there was also a feeling it was too little, too late -- and only after getting caught.

Were Facebook's priorities misplaced? The company was pushing into ever deeper data analysis and artificial intelligence, aiming to build algorithms that could better predict what users wanted to see or experience, all in the name of personalizing its service. But were duties being neglected along the way?

Certainly, with the world's largest database of human activity in its possession, Facebook has incredible micro-segmentation abilities. But what are its responsibilities regarding the control and use of that data? What changes should Facebook make to safeguard privacy and answer the critics? Is it time to regulate the social media network?

The case study "Facebook's Data Debacle in 2018: How to Move on?" by IESE professors Sandra Sieber and Robert W. Gregory will be available soon from IESE Publishing.
