Don't Feed The Trolls

by Imran Ahmed
May 19, 2020

This interview with Imran Ahmed, the CEO of the Center for Countering Digital Hate (CCDH), was conducted and condensed by franknews.

frank | To start, could you introduce yourself and your work?

Imran Ahmed | Sure. CCDH is an unusual organization. Our goal is to disrupt alliances between political actors and hate actors in digital spaces. Having experienced the growth of antisemitism in the Labour Party in the UK and the vicious anti-Muslim and racist elements within the Brexit movement, and then having seen the rise of Trump six months later, we realized we were watching the rise of a new kind of political actor – one that forms an intimate, symbiotic relationship with hate actors. We realized this new paradigm was causing huge damage to our democracies and the liberal values that underpin them.

This alliance between hate actors and the political sphere transcends left and right and individual geographies. It is developing all across the world. It extends to Orban, who happily calls himself an illiberal democrat, but we also see it with Modi, with Duterte, and with Bolsonaro.

These alliances make sense for them. The political actors get armies of trolls who will harangue their enemies. They get people who create social proof for what were once considered fringe political beliefs. And they get people who will go into the digital spaces we use to understand the world around us – Facebook or Twitter – and post the material that turns people towards hateful extremism. The hate groups, in return, get access. Look at the way trolls have literally been brought into the most senior places in our governments, whether it's Trump embracing trolls in the Oval Office or Boris Johnson hiring a man named Andrew Sabisky, who had been training incels in sexual positions they could use to dominate women. That is the kind of disgusting stuff we've actually seen happening.

There are weaknesses in these alliances, however. They rely on a small number of people playing an outsized role. And often they appear larger than they are, because our engagement pushes them up; the algorithms on social media platforms don't care about facts, they care about engagement. We try to capitalize on those weaknesses. Our "Don't Feed the Trolls" project sought to teach people that engaging with hate online isn't something that should be rewarded as a virtue signal; it should be looked down on as spreading the disease. We've recently adapted that to the coronavirus, based on the same understanding of the platforms' algorithms. It has been endorsed by the UK government, and this morning we're talking to the United Nations about integrating it into their new anti-misinformation scheme.

We also look at the groups in which hateful extremism is inculcated, and we force the platforms to set precedents by shutting them down. We do that by working with celebrities who tweet out the kinds of material that we've shown Facebook and YouTube and Twitter, but that they refuse to take down. As soon as there is public pressure from people realizing what they tolerate on their platforms, the companies move very quickly. All in all, what we are trying to do is resocialize social media, using a variety of techniques based on evidence and research.

Will you define a 'hate actor'?

'Hate actor' is an all-encompassing term that takes in both groups and individuals. We realized that the old analysis of hate groups focused on offline, in-person networks. Social media now allows people to build consensus-based communities very quickly, at no cost. It's these online communities that form the modern hate group. And there are individuals who act as amplifiers and as central nodes in the online systems of hate. That includes people from David Icke and Tommy Robinson in the UK to figures like Milo Yiannopoulos and Lauren Southern in the US.

You work in both the UK and the U.S., and I'm curious how you see misinformation coming up in each. The U.S. has institutionalized some hate actors – Fox News is on cable. Is there an equivalent in the UK? And do you approach the information differently when one source is a 'journalist' and the other a civilian online participant?

We do look at the information ecosystem in each country and examine how they differ. In the UK, for example, Fox News wouldn't be possible because of Ofcom rules. But Australia does have an equivalent to Fox News.

Across countries, however, we find that broadcast is less relevant. Fake news sites like Breitbart have huge readerships, often as large as any newspaper's. They preach a gospel of hatred and misinformation – often quite structured misinformation. For example, there are Islamophobic sites which will pump out stories day after day about Muslim rapists. If your entire output is about one particular thing, it gives you an indication of what it is they're trying to make you feel. Those sites are particularly difficult because what they publish is not illegal speech per se, but it is nevertheless disgusting.

What is the line between something that is hateful or negative and something that is actually threatening to democracy?

It's an increasingly blurred line. Conspiracy theories sit in that gray area.

Is it illegal to say that 5G causes the coronavirus? Absolutely not. Should the state legislate on that? Absolutely not.

But we as members of civil society have the right to say to an advertiser, "Hey, do you realize that your adverts are appearing on this insane site?" We have the right to go to Facebook and say, "Are you sure you want to broadcast this material on your platform to billions of people?" And in return, Facebook has a right to say, no, we don't. Facebook is not part of the state. It's not subject to the First Amendment. It's an economic actor. We, as members of civil society and as a society as a whole, have a right to say: if you want to profit from us and host this kind of stuff, we will hold you to account for it morally. It's a moral approach, not a legal one.

The state itself now spreads externally created misinformation. Is this unique to moments of crisis? Or have conspiracy theories always been a part of political parties and movements?

One of my favorite things to do in a presentation is to ask people which vegetable helps your eyesight. Everyone always says carrots. But that was actually created by British intelligence in World War II as a way to hide from the Germans that we had developed plane-mounted radar. We planted a story in UK newspapers saying that carrots, not radar, were the reason for our pilots' accuracy. That myth endures into the 21st century; seventy years later we're still telling our kids to eat carrots for good eyesight. People don't fall for conspiracy theories because they are stupid. Conspiracy theories are based on a leap of faith, and we make leaps of faith all the time. Around half of British people believe one or more of the most frequently cited conspiracy theories, whether it's that 9/11 was an inside job or that the moon landings were faked.

However, conspiracy thinking has been turbocharged in the modern environment. And political actors, especially fringe actors, pick up on these techniques. They've been toiling in the mire, on the fringes, ignored and unloved for decades. Suddenly a tool comes along that might give them purchase with the majority; of course they are going to use it. It is an important way for them to reconcile beliefs that they know to be unpopular or untrue. Whether it's extreme Trotskyite groups on the left or far-right groups on the right, they will use conspiracy theories to create their own versions of reality and history in order to grow their audiences.

Right. Many of the platforms being used to spread misinformation were left alone under the assumption that regulation stifles innovation. Do you think it’s possible to create restrictions and boundaries retroactively?

What CCDH seeks to do is put pressure and the spotlight directly on the social media companies and their failure to enforce their own terms of service and their own claims of intent. Their actions often diverge wildly from their claims. We seek to hold them to their own word. They make these claims because they know that if they didn't say they would address violent extremism, child sexual exploitation, and coronavirus misinformation, people would call these platforms toxic and discourage their children from using them.

Our goal is not to destroy these companies, or to stop them from making money; it's to help them become sustainable over the long term by detoxifying their platforms. No one should or would allow their children to go onto platforms which contain material like this. No one wants their information environment skewed by bad actors. Everyone seeks the best information possible. If these platforms fail to deliver what consumers want, they're going to die in the short to medium term. If they fail to create sustainable businesses, they're going to be replaced.

What is the pushback then? Why the hesitation to be harsher on misinformation? Is it purely financial?

I think it is economically driven. These companies operate with the peculiar vulnerability of disruptors, in a sector and at a time of incredibly compressed business and economic cycles.

They know they will be disrupted one day. They're like NFL players. They know they've got a limited time to earn.

So they're going to sign every endorsement, and they're going to take every buck they can, because they know that in 10 years' time the game is over for them. What we've sought to do is put a staying hand on their shoulders and say: come on, actually there's a whole world out there for you. This is not the NFL. You are sustainable businesses. Facebook has a monopoly position. The barriers to entry to that market are now absolutely insurmountable. You will be able to run for a long time.

If you look around the world, liberal democracy is in retreat and elections are under threat. The first genocide planned on Facebook has happened, in Burma – one for which Facebook itself has accepted partial responsibility. There is coronavirus misinformation. And yet they still haven't taken action.

We hope that with the coronavirus we will be able to force them to take action, but I think there also need to be credible threats of regulatory action – governments saying that if you don't take action, there are two things we will do.

The first is that we'll enforce rules. We've suggested the possibility of creating new torts in law so that people could take action against platforms if they are harmed by content that platforms have allowed to stay up on their sites despite being aware that it was problematic. The second is that we will create new levies in Europe and in the UK, where companies must pay taxes for the social harms that they cause. There are material harms that result from misinformation. We recently published research by Daniel Allington, of King's College London, which shows that people who believe the 5G conspiracy theory, for example, are less likely to physically distance, less likely to stay at home, and less likely to wash their hands. But of course, why would you do any of those things if you thought a tinfoil cap could protect you from COVID? I think this may be the catalytic event that actually puts sufficient pressure on social media companies that they themselves make the changes they know they need to make.

Do you think the media has the same responsibility as the individual to stop sharing misinformation, or to address it?

Absolutely.

How do you talk to journalists about that? 

One of the things we've already done is speak to broadcasters in the UK to help them understand the logic that underpins these platforms – and to help them understand that what can look like a widely held belief circulating on social media often actually stems from a dense network of professional trolls who understand the dynamics of social media and make it look as though lots of people are saying "X." And of course it then gets reported as "Twitter said X."

Giving a platform to these ideas is in itself so damaging because it allows these fringe political movements to achieve social proof, and social proof is a central part of influence and extremism.

People are much more likely to adopt extremist beliefs if they see social proof – if they believe they are reflecting a larger pool of belief. The media really can play a catastrophic part in amplifying narratives from the fringe.

There is also an obsession with appearing balanced for the sake of fairness that gives life to things that are just not true. What is Twitter's role in amplifying the voices of planned conspiracy?

If I said to you now, "The sky outside is green," you'd say to me: what are you talking about? What has happened? Has there been a chemical leak in London? Are you colorblind? You've started engaging with my statement. If someone says something a bit wacky, a bit extreme, it draws engagement. No one retweets the UN or the NHS or the US government, because that's just factual information coming from civil servants. And, again, these platforms fundamentally reward engagement, not accuracy.

[Laughing] Wacky hot takes on Twitter are going to take down democracy. We need to punch up the UN's Twitter to combat entertaining trolls.

Right. We've got a hell of a battle ahead of us. 

I think the failure of social media companies to live up to their claims about fighting COVID misinformation has been a real strategic mistake, and one they won't get away with for long. Perhaps they believed Donald Trump and thought the crisis might last only weeks, but this crisis is not going away for a long time, and they'll be held accountable for the damage they're causing to our society.

This time, damage is not measured in votes; it's measured in lives lost.