Just over a year ago the worry that Facebook had substantially helped Donald Trump become president by allowing data miners to target voters’ emotions with fake news and fantastical ads was but a guess. If you voiced it the response was likely to be a shrug. Facebook ruled the world, period. Even muckraking media was torn. Pleasing Facebook’s algorithms now determined the survival of digital news enterprises, went common wisdom. To attack Mark Zuckerberg’s creation was to tilt at windmills.
So much has changed since. Zuckerberg just finished two days answering (and not answering) questions before a U.S. congressional hearing into the harm clearly done by his firm’s failures to protect users’ private data.
And even as Zuckerberg played apologetic before his interrogators, we learned more about his callow ruthlessness. “Companies over countries” has long been Zuckerberg’s mantra, revealed one repentant insider.
Wanting to make sense of Zuckerberg’s testimony and what it portends for the future of Facebook and our society, The Tyee turned to Taylor Owen, assistant professor of Digital Media and Global Affairs at the University of British Columbia. Owen authored the highly praised book Disruptive Power: The Crisis of the State in the Digital Age, founded and publishes the international affairs website OpenCanada, and in December gave the 2017 Dalton Camp Lecture on the topic of “How Internet Monopolies Threaten Democracy.” We conducted this interview by email.
Tyee: What is the scope of the damage done to democracy by Facebook?
Owen: The business model of Facebook is to collect as much data as possible about the online and offline behaviour of its users (and even of those who don’t have a Facebook account), to use these data to create detailed profiles of each of their users, and then to sell access to micro-targeted audiences to anyone who may want to influence them. This can be a brand, a political campaign, a foreign government or a hate group.
The problem this poses for our democracy is that at the same time Facebook has become a de-facto public space. It is where many people get their news, where they engage with their community, and even how they interact with governments. But this public space is mediated by a commercial transaction. One that is incredibly susceptible to abuse. And the result has been a toxic civic discourse that I believe is harming our democracy.
In a more particular sense, my concern is that the laws governing our elections are very difficult to enforce on platforms. For example, we have limits on who can advertise during elections. On Facebook there is no way to know how many people, from where, are purchasing access to which Canadian audiences. This leaves us incredibly vulnerable. Facebook’s new ad transparency initiative is a step in the right direction, but doesn’t go nearly far enough. It is also a positive that Zuckerberg now supports the Honest Ads Act, which Facebook has long lobbied against. Public pressure is having an impact.
What is different this time?
Some things are different and meaningful.
First, Zuckerberg’s testimony was covered around the world, and we are now talking about the structure of the platform economy and about data rights. This is new.
Second, Facebook now publicly supports legislation it has lobbied hard against (the Honest Ads Act), and it will be far more difficult for Facebook (and other tech platforms) to lobby against other sensible regulation (such as rules around meaningful consent and the use of facial recognition).
Third, Zuckerberg acknowledged that Facebook is responsible for the content on its platform. This is a major change in position, and it has significant legal and regulatory implications. Facebook has long claimed it was neither a utility nor a media company, thereby avoiding the regulatory burdens placed on each. That position is untenable.
Fourth, senators and their staff are now zeroed in on the problems of platforms and the governance challenges they pose. Those problems are going to get worse before they get better, so this is not going away.
Even some Facebook shareholders are calling for Zuckerberg to be removed as CEO. Has he, in the past days, including before the U.S. Congress, provided the reassurance needed? Should we believe him?
Zuckerberg has spent much of his career apologizing for breaches of privacy. From his early work on Facemash (a precursor to Facebook) at Harvard, he has routinely pushed the legal boundaries and social norms of data collection. In 2010, he argued that the desire for privacy was an antiquated social norm. And in many ways he was right. Two point two billion people have made an implicit bargain to give up privacy in exchange for free services. So now, as before, Zuckerberg is saying “trust us, we will do better.”
The challenge is that it is now clear there are a host of negative externalities to this bargain, and to this scale of data collection. And so I am not sure that rebuilding trust will be sufficient.
Luckily, in a democracy we don’t have to just trust individuals and corporations to act in the social interest. Instead we incentivize them to do so through laws, regulations and meaningful penalties. And so this is the path I think we are now on. How are we going to govern digital platform monopolies?
For years Facebook maintained it was best to let it regulate itself, free of government meddling…
Publicly traded private monopolies don’t self-regulate. Now, in response to market pressure they could change their practices, and we are seeing some of that today. But ultimately Facebook will remain a surveillance capitalist company — they are in the business of collecting and monetizing data. If we as a society think that this practice has negative social or economic costs, then we need to talk about how governments can mitigate these harms.
What regulations are needed?
Since the internet touches so much of our economic and social lives, there is a broad range of policy needs. But I will mention three.
The first is around data rights. The new European General Data Protection Regulation is a step in the right direction. It creates much stronger rules around consent for data collection and how data are used, and over whether data can be deleted and taken back by the user. Critically, it also puts very large fines on breaches (up to four per cent of global revenue). Facebook has said it will roll out its new privacy tools globally (not just in Europe), but such data protection regulation is not just about tools and consent. It is about meaningful penalties for non-compliance (so we don’t just have to trust companies). This is why legislation is needed in other jurisdictions, regardless of what Facebook rolls out.
Second, there are a host of potential reforms to rules around online advertising, such as the Honest Ads Act in the U.S. But we really need to go much further and include an online archive of information clearly stating who bought which ads, the source of the funds, how much was spent, who saw them, and the specific targeting parameters that were used.
Finally, as Zuckerberg hinted repeatedly in his testimony (and questioners repeatedly cut him off while he was doing so), the big future governance challenge is artificial intelligence. In order to moderate content at scale, Facebook and other platforms will increasingly use artificial intelligence (AI) tools to determine what is acceptable speech. It will be AI doing the governance, increasingly removing humans from the application of legal and ethical norms on platforms. Governments will need to quickly figure out how they are going to hold these AI systems accountable.
Does it make sense to break up Facebook? Will that do much or anything to protect users, since the business models, and goals, will still be to monetize our data?
Anti-trust is a very crude tool developed for industrial corporations. Instead of heading down this path, there are a host of more targeted competition policies that could make an immediate difference. These include restrictions on the acquisition of up-and-coming competitors, structural separation of behaviour tracking and ad targeting businesses, and consumer data portability from one service provider to another.
Is Facebook even fixable?
I don’t think that is the right question. I think Facebook, and other digital media platforms like Twitter and YouTube, are remarkably powerful tools that have had tremendous positive impact around the world. In countless ways they have been empowering and even democratizing. But as they have scaled, they have also had profound social costs.
The question I think we should be asking is whether they can sufficiently mitigate these social costs. I don't think it is enough to say that it is hard to moderate hate speech at scale. If their business isn’t scalable in a way that doesn’t harm society, then they shouldn’t scale. Ensuring this will require regulation.