After terrorists stormed a mosque in New Zealand in March and livestreamed the massacre of 50 people on social media, countries around the world pledged to fight online hate.
Canadian activists joined the “Christchurch Call” for action and urged the federal government to crack down on violent and hateful social media content.
And Prime Minister Justin Trudeau is flying to Paris on May 15 to attend a meeting of government leaders and representatives from the social media giants on dealing with online hate.
But what could a Canadian approach look like? What can we learn from other countries? What mistakes should we avoid?
The Tyee spoke to two experts to find out.
1. Be clear on what we’re calling hate speech.
One of the biggest challenges in addressing online hate is figuring out what it is.
Under the Canadian Criminal Code, it’s an offence to promote hatred against any identifiable group, or advocate genocide.
Chris Tenove, a postgraduate researcher at the University of British Columbia, said distinguishing between speech that promotes hatred and opinions that are ugly or unpopular is vital.
“If you’re too loose with what you’re saying counts as hate speech, and then you start talking about the need for taking criminal action on it, some people interpret that as a very concerning attempt by government to criminalize quite a lot of speech,” said Tenove, who is studying political theory with a focus on global governance and digital politics.
Tenove points out that even political leaders are using language online that, by some standards, might be considered hate speech.
“Even amongst politicians and political elites, there seems to be more language that is skating closer to hateful or harmful speech than in the past,” he said.
Last month, Motherboard reported a Twitter employee’s comments that algorithms aimed at automatically detecting and deleting white supremacist content could also delete posts by several Republican politicians. Society would not likely accept that kind of censorship, the employee suggested.
The risk is that unpopular or ugly, but legal, views could be censored as a result of efforts to stop hate speech.
2. Be transparent.
In 2017, Germany passed the Network Enforcement Act, an anti-hate speech law often referred to as the Facebook Act. It set out fines of up to $75 million for social media companies that failed to take down obviously illegal posts that promoted hate within 24 hours of receiving a complaint.
The law has been controversial, with critics arguing companies could choose to err on the side of caution, resulting in reduced free speech.
But both sides agreed the law should require the companies to report how many posts they removed, allowing the public — and researchers — to analyze the effectiveness and impact of the policy.
Heidi Tworek, a UBC history professor who studies the impact and politics of mass media, said any policies need to ensure companies are open about their activities in dealing with complaints of hate speech.
“We can’t as researchers do research unless we have the evidence,” she said.
Tworek and Tenove, who collaborated on a submission to the Public Policy Forum on regulating online hate speech, say there’s not enough data on the frequency of online hate or how it affects marginalized groups.
“We don’t have hate speech [data] or the infractions of the terms of service to tell us what’s happening in Canada,” said Tenove.
Without that base research, it’s hard to know whether a policy is working, he said.
“‘How do we measure?’ is a really important question,” explained Tworek. “How do we actually quantify what is hate speech, and how do we know which measures are having an effect?”
3. Consider approaches beyond fining the companies.
Germany’s law made headlines for forcing companies to take down hate speech or suffer heavy fines. But fines could create a false sense that a policy is working when in fact companies are censoring legitimate debate to avoid any risk.
“Some laws might cause companies to over-delete things that aren’t necessarily contravening the law,” Tworek said.
Tworek said her research found the German law resulted in the removal of more than 28,000 items from Twitter last year. Facebook removed only 362, but its complaint page was hard to find, demonstrating one way companies could avoid fines.
Tworek and Tenove recommend using other tools to limit the spread of hate speech, including revamping Canada’s Human Rights Act to return parts of Section 13. The clause, repealed by the Harper government in 2013, had made it an offence to communicate in ways that were likely to expose “a person or persons to hatred or contempt.”
Section 13 was controversial for its perceived infringement on freedom of speech. But when it was repealed, complainants lost a quicker, more accessible alternative to criminal complaints in responding to hate speech online. Some experts recommend revisiting the clause instead of changing criminal law.
“Having a human rights framework sets a larger ambit for what constitutes hate speech, but doesn’t include potential jail time,” said Tenove.
4. Take a go-slow approach.
The European Union showed the risk of rushing to bring in new rules when it created a “Code of conduct on countering illegal hate speech online” in 2016, said Tworek. The code calls for participating social media companies to remove illegal posts within 24 hours of a complaint.
But the process was rushed and secretive, according to some groups that withdrew from the talks.
Without their contributions, Tworek said, it’s hard to measure how marginalized groups are affected by online hate speech and whether it’s discouraged them from engaging online.
“Bringing civil society organizations to the table is extremely important in creating buy-in,” she explained.
She and Tenove advocate creating a moderation standards council to ensure a co-ordinated, informed response to online hate speech. The council would bring together companies, governments and non-profits to ensure a consistent approach across platforms.
Tenove said policymakers should avoid making online hate speech a partisan issue when considering things like changing the Canadian Human Rights Act.
“It would be a shame if it were immediately polarized into a partisan issue, and I think there’s a real possibility of that,” he said.
5. Avoid policies that can be twisted to censor free speech.
Tworek said certain measures might seem like a good idea now but could be used to censor free speech in the long run.
“Something that might seem fine in France under [President Emmanuel] Macron does not look good in France under [right-winger Marine] Le Pen,” she said. “These rules need to be designed for future governments as well as the governments of today.”
For example, she noted some of the countries with the most aggressive laws against fake news are authoritarian states less interested in fighting online disinformation than having the power to censor critical views.
“One of the questions we need to be asking ourselves is, ‘Are you designing laws and regulations that are going to be preserving freedom of speech as far as you possibly can, and not give tools to an authoritarian regime?’” she said.
6. Tackle hate offline, too.
Curbing online hate speech won’t address the root of the problem, said Tworek. Governments need to understand why people are adopting hateful views and address the cause.
“It is a danger to think that you can solve all problems in a society by clamping down on certain types of speech,” she warned.
“There isn’t as much of a discussion about what might be the real-life economic, political and cultural conditions that might be fostering this online speech.”
Tenove said the debate is a chance to encourage civil society groups, and all Canadians, to address the problems in our political discourse.
“Social media companies are holding a mirror up to us, and we’re seeing more expressions of these views than we thought.”