Newsfeeds are changing the way we receive information and our ability to find the truth, especially when it comes to climate change. So how can we protect ourselves from disinformation?
In 2018, a WhatsApp group sprang up between my colleagues and me.
Our friends and colleagues had started to find strange things when searching online for the UN Global Compact on Migration (the Compact), an initiative agreed in Marrakesh that includes commitments against hate speech and disinformation. YouTube searches returned far-right influencers and conspiracies over factual content. Many were saying that the Compact was an “EU conspiracy” to make “migration a human right” and criticising it a “jailable offence”.
It should go without saying that this is wrong. Not only is migration already enshrined as an international human right, but the Compact was a voluntary UN initiative. And yet the disinformation appeared in many languages – French, English, Spanish, German – and was spread by far-right news networks across the globe.
Worse was how the mainstream media picked up on it. The British press linked the Compact to the EU to align it with the toxic Brexit ‘debate’ on immigration. In the USA, it took a more conspiratorial tone, with articles entitled ‘What they won’t tell you about the UN Compact on Migration’ or ‘The Truth about The UN Compact on Migration’.
All of this was funded by advertising.
From video ads and banners on YouTube to print adverts in the tabloid press, the hate and disinformation – the lies and the racism – appeared next to advertising from brands big and small.
The campaign hit home, and one by one, countries pulled out: Austria, Bulgaria, the Czech Republic, Slovakia, Poland, Australia and Israel. The attacks had been going on for months, long before we noticed them. Eventually, the Belgian government collapsed due to infighting over the Compact.
Disinformation had derailed the Compact.
The conspiracy went so deep that ‘Your Global Compact on Migration’ was written on the gun of New Zealand’s Christchurch killer, who massacred 51 people in an Islamophobic attack in March 2019.
How did we get here?
We used to think of disinformation as straight-up denial from politicians, or corporate public relations to justify oil spills or forced labour. Then came the internet, and with it brilliant things: people held governments to account on Twitter, and call-out culture exposed hypocrisies.
But it also brought about data extraction, targeted advertising, and millions of sites with advertising inventory. And that was game-changing.
The first thing you need to know is that our online spaces are geared to serving you ads. It’s how the platforms and news sites make money – and your favourite writers and YouTube stars too. They all make money through advertising, and the more ads you watch, the more money they make. Advertising funds the internet, and it has shaped it in its image.
This changes how we receive information. Many of us now get news recommendations from our social media newsfeeds, from sources all over the world, rather than from one of five newspapers. And the platforms behind those newsfeeds are designed to keep us there, not necessarily to serve us fact-checked information. They serve us information that keeps us scrolling, watching, or clicking. More of what we like, what we react to, what we’re interested in.
As you can imagine, this changes the incentives around content creation and curation. We’re often served stuff that makes us react. Salacious content is preferred to boring facts – it keeps us angry, engaged, online. Recommendation algorithms learn what we like and serve us more of it; we stay online and watch more ads in the process.
This also creates filter bubbles – communities online that make us think everyone thinks the same as us. At its worst, it radicalises many towards the far right and conspiracies. That repetition and confirmation changes our outlook and our minds.
And advertisers are inadvertently paying for this. Our favourite brands are paying climate deniers or fascists at least $235 million a year, at a very conservative estimate, according to The Global Disinformation Index.
These factors are turbocharging climate disinformation, and in 2021 the COP26 climate summit is under threat from the same disinformation campaigns that the UN Compact experienced. These campaigns are being pushed by companies with a vested interest in preventing the end of fossil fuels and the endless extraction that is creating the climate crisis.
Our research found four types of dirty strategy in use online:
The ‘big tobacco playbook’ which is, as you can guess, based on tactics used by big tobacco to deny or cast doubt on science and scientists. This is the corporate greenwashing and ‘delay’-based messaging that tells us we shouldn’t do anything until China acts, or that technology will save us. It gets nasty when it attacks individual scientists or activists in an attempt to discredit them.
Conspiracy theories are explanations for events or situations that invoke a conspiracy by sinister and powerful groups when other explanations are more probable. Most of us know about QAnon, which has become entwined with anti-vaxx narratives. Chillingly, many of the influencers in these movements use similar arguments to climate groups, except they place the blame squarely on scientists instead of polluting corporations. Anti-science frames are easy to repurpose, meaning we’re concerned that people who have bought into COVID-related conspiracy theories could be ‘sleeper cells’ for climate denial.
Culture wars are often used by political actors or media outlets to pit groups against each other. Are you a migrant or a patriot? A climate alarmist scaring our children, or a concerned and rational parent? These divisive strategies are characterised by dehumanising a group through name-calling and reductive arguments which ridicule them. Anyone who has watched the Meghan Markle situation here in the UK, or seen the way ‘migrants’ are demonised in mainland Europe, will have seen them at work.
Trolling & flaming are often paired with the narrative approaches above. Bots and trolls skew debates, create outrage, or amplify information on social channels. From the super influencers who spread 5G disinformation to their millions of followers, to paid people pretending to be ‘hockey mums’, these accounts can cause havoc and present extreme points of view as mainstream.
What can we do?
Get informed – we can all start to get acquainted with the different types of climate denial and delay messaging out there. Our new report is a great place to start, but we also recommend reading up on delay messaging, and delving into the wonderful world of Tofology on TikTok.
Ask questions – if you’re an advertiser or you work for a company that has an advertising budget, ask them what their policy is around climate disinformation. We shouldn’t be funding it, and we should make sure outlets that do promote factual and informed climate narratives are properly funded. Materials from The Conscious Advertising Network can help them do so. If you work for a social media company, or networking service, do the same. We need action around recommendation algorithms, direction to factual information, and the monetisation of climate denial.
Bite back – there’s lots you can do as a technologist or a campaigner. Our guide is designed to help people understand how to fight back. We’re expecting to see this ramp up before COP26 – imagine what could happen if we’re all prepared for it.
In June this year, The Lords Select Committee on Democracy and Digital Technologies reported that we face a ‘pandemic of misinformation’ that poses an existential threat to our way of life. Commenting, Lord Puttnam, Chair of the Committee, said:
“We are living through a time in which trust is collapsing. People no longer have faith that they can rely on the information they receive or believe what they are told.”
But our enemies no longer have the element of surprise. We know they are coming, we know their tricks, and we’re starting to understand how to fight back. Join us in the fight.
About the Author
Harriet Kingaby is co-chair of The Conscious Advertising Network and Insights Lead at Media Bounty. She’s a former Mozilla Fellow, working with Consumers International to map the harms and benefits of AI enhanced advertising across the globe.