Taylor Owen gets a little sick of talking about Facebook sometimes.
These days, however, conversations about Facebook are hard to escape. The social media giant has been under intense scrutiny over the past several weeks, thanks in large part to the stunning revelations from Facebook whistleblower Frances Haugen. When it comes to the spread of hate and misinformation on a massive, global platform — which has real-world, offline consequences — people are asking: how did we get here? And what does accountability look like?
That’s where Owen comes in.
Owen is a Canadian expert in digital media ethics. He is the Beaverbrook Chair in Media, Ethics and Communications at the Max Bell School of Public Policy at McGill University in Montreal, where he is also an associate professor and the founding director of the Centre for Media, Technology and Democracy. He’s also the host of the Centre for International Governance Innovation’s Big Tech podcast, and is a regular commentator for the Globe and Mail.
(That’s just part of his CV. Put another way, he’s the guy to talk to about Facebook.)
On Tuesday afternoon, Owen will be joining Walrus editor-in-chief Jessica Johnson for a timely, virtual discussion about rights in the digital age and whether Facebook is a threat to democracy, hosted by the Canadian Museum for Human Rights. This free, 90-minute event will be live on Zoom, beginning at 2 p.m. Register at humanrights.ca.
In advance of the talk, the Free Press spoke with Owen about, yes, Facebook, what a stronger social media ecosystem might look like, and what role government can play in shaping it. This interview has been edited for length and clarity.
FP: Do you think there’s actually anything that can be done to make these social media platforms better?
TO: Oh yeah, no question about it. I mean, we’ve governed big, complicated international companies many times before. I think Haugen herself constructively points our conversation to two things we should be doing immediately. First, demanding, through our laws, much more transparency into how these companies operate: we should be able to see much more data about how different algorithms within platforms are behaving, what they’re recommending, all of these things that we now know the company studies and knows itself. These are things we should be demanding be made public to researchers, to regulators and to the public. We need more transparency into these systems, just like we demand in other industries. We don’t let pharmaceutical companies develop drugs in secret; they have to show all their data.
But she’s also saying, ‘Look, transparency isn’t enough; you also need better accountability here.’ These companies have behaved with a certain impunity that is disconnected from the impact they have on society.
FP: I think, as Canadians, we tend to view the harms of Facebook and other social platforms through a North American lens. I don’t think we think about how Facebook is used in other parts of the world.
TO: This is a huge piece of this. This is actually something that’s come out in the Haugen documents, too, because we now know that 90 per cent of the moderation resources are devoted to the United States, and 90 per cent of their users are outside of the United States. So, there’s a huge disconnect between the effort put in place to alleviate some of these harms via content moderation, and where users actually are.
What’s worse, the most egregious examples of these harms exist in countries where there’s very, very little moderation at all. And if we care about looking at this from a human rights lens, for example, or a rights lens writ large, then that discrepancy should be really worrying.
FP: What about the tension between the rights and laws of individual countries, and Facebook, where everyone lives? I’m thinking specifically about the concept of free speech as it exists in America versus freedom of expression here in Canada.
TO: This is the core governance challenge: we develop most of our laws and norms around speech domestically, while these companies are trying to operate globally with one standard norm of free speech that’s largely rooted in the First Amendment in the United States. That’s a particular version of free speech that isn’t shared by other democracies, certainly, and definitely not by illiberal-leaning regimes around the world.
And so, one of the tensions that we don’t talk enough about is, it’s one thing for us to say in Canada that we want our speech laws imposed on these platforms because we might prefer our own regulations and laws. But if I’m a citizen in an illiberal regime, if I’m in Hungary at the moment, for example, or any number of illiberal-leaning countries, I might prefer Facebook’s notion of free speech. Who are we in our societies to say they don’t have a right to choose that option and instead, they should have to live by the more restrictive speech regimes of their governments? There’s a real challenge there, there’s no question about it. The problem is, we’ve rolled out a technology that wants one set of rules for everybody on the planet, and that’s not how the planet works.
FP: How do we start thinking about this as a problem to be solved versus an insurmountable, impossible challenge?
TO: The key thing is doing, in part, what we’re doing now in Canada: developing governance ideas, proposing them, as the government has, and having a debate about them, which is good. There aren’t easy answers here, right? So I think it’s great that the government proposed a set of legislation for online harms, for example. And I also think it’s great that it’s being challenged and picked apart by civil society, by experts. That we’re having this debate is really productive, rather than just pretending this can’t be governed and is an unsolvable problem, which, frankly, is what we’ve done for a decade.

We’ve kind of put our heads in the sand and said, ‘Look, there are all sorts of good things about these platforms, and it’s really hard to figure out how to govern them to minimize the risks and harms, so let’s just let them figure it out.’ That’s what got us to where we are now: this idea that it should be a self-regulatory environment, and that the government could cause more harm than good by intervening too heavily, therefore, let’s leave it to them. And it’s pretty clear that’s not sufficient.
FP: I know that Instagram makes me depressed and that Facebook makes me angry and yet, I’m still on all of them, as are millions of other users. How do we make the public care about these harms?
TO: I think people’s views on this are changing. People are more aware of these harms than they were even two years ago, and that awareness keeps growing because of civil society, because of journalism, because of all sorts of things that are increasingly bringing visibility to these problems.
The question of how you get people to care and to accept those trade-offs is kind of a governance question. Not everything is about individual agency. Sometimes we need collective responses to things that we deem, as a society, are creating social harm. To use the drug analogy, we don’t expect everybody to review the approvals of a COVID vaccine; we trust that we have a regulatory agency that does that for us, and that’s as it should be. Not everything is about individual choice and leaving it up to market dynamics to decide… and even if we were to say it is about individual agency and the market, it needs to be a functioning market that gives us enough choice to actually have options. If the market isn’t functioning, and there really isn’t a ton of choice in it, individual agency doesn’t really matter.
Like, if my son’s school uses Facebook to communicate with parents, it’s a pretty big cost for me to say, ‘I’m going to go off this tool, because I don’t like what it does to democracy in another country.’ That’s not a reasonable trade-off. That’s where we either need more choice in the market, so I have real choice as a citizen, or we need governments to minimize the harms of using that product to begin with.
FP: Tell me, Taylor: do you use these social media platforms?
TO: Some of them, off and on. I’ve stopped using some of them and still use others. I try to be more aware of how they’re making me feel and act and behave. I think that’s kind of a first step, here, as an individual.
I stopped using Facebook. I wasn’t enjoying it, and when I stopped, I didn’t lose anything. In fact, I felt better. Twitter I use a lot, but I often check what my emotional responses are to things I see… I think just being more aware of those things is really important, and that’s the literacy aspect of it. How the design of these tools affects what I consume, how I feel about that consumption, how I react to the discourse I’m participating in: these are all artifacts of the design of the system. That changes, I think, how you engage with it.