This story is part of Covering Climate Now, a global journalism collaboration cofounded by Columbia Journalism Review and The Nation, strengthening coverage of the climate story.
The next time you use Twitter, search for “climate.” What do you expect to see as the top result? Content about climate change? Last year, during the most important climate conference of the year, in Sharm el-Sheikh, Egypt, what you got instead was #ClimateScam. And that remained the case for months. Twitter has offered no justification for actively recommending climate denial.
This finding and many others were unearthed by a COP27 intelligence unit, composed of analysts from more than a dozen member organizations of the Climate Action Against Disinformation coalition. The unit tracked the most prominent mis- and disinformation narratives perpetuated during the conference. In January, it published “Deny, Deceive, Delay,” a report detailing its findings.
Many high-traction posts uncovered by the unit echoed the findings of a recent Ipsos study, commissioned by French energy giant EDF, that recorded an upward trend in skepticism and denial around the climate crisis. During COP27, denialism blended with conspiracy, as professional disinformation agents tied climate policy to an imagined plot by the World Economic Forum and global elites to impose totalitarian rule. Such falsehoods gained attention on Twitter, with posts receiving tens of thousands of likes and retweets. Elon Musk’s decision to strip away many of Twitter’s content moderation barriers has allowed climate denialism to reach millions of users unchecked. Yet the report also found that pro-climate hashtags tended to outweigh climate denial-related ones, which makes the appearance of #ClimateScam at the top of users’ search results all the more confusing. Perhaps Musk could release the data that would explain why #ClimateScam has been pushed to the top.
The report outlines several other narratives across the disinformation landscape. Previously debunked photos of private jets used at the conference spread around Twitter, creating a talking point to distract from the proceedings. Right-wing voices, including Mike Pompeo and Senator John Kennedy (R-La.), framed the possibility of a “loss and damage” mechanism as “climate reparations,” a means to unfairly penalize more developed countries by forcing them to give up their wealth to poorer, “less deserving” nations. Discredited claims about the unreliability of green tech, in favor of supposedly more “reliable” energy sources like coal and oil, circulated as well. Several of these narratives were also detected during COP26, and they should be at the forefront of climate disinformation-related content moderation in 2023.
Finally, the University of Exeter, as part of the coalition, analyzed Meta’s Ad Library and found that more than 3,700 ads linked to fossil fuel entities were active from September to November 2022, with an estimated $3 million to $4 million spent across Facebook and Instagram during that period. These ads practiced what has been called “nature-rinsing,” a greenwashing technique that uses nature-related imagery to enhance the “greenness” of a given brand. The ads also emphasized “energy independence,” a frequent technique used by Big Oil and its mouthpieces to mislead audiences about the dangers of the fossil fuel industry by exaggerating the benefits of expanding fossil fuel use.
To solve this problem, we need the same kinds of basic rules we have for most American businesses: transparency and accountability. Big Tech should open its algorithms to public scrutiny, so outside researchers can assess whether and how those algorithms amplify false content. This is a strategy endorsed by President Joe Biden as a feasible bipartisan effort. Tech companies should also disclose data related to the spread of disinformation on their platforms, so analysts can effectively map trends in false and misleading content. Companies should also release data on their disinformation policies—like those governing labeling and fact-checking. Big Tech also needs to strengthen the policies it already has in place. A conventional three-strike system that holds repeat offenders accountable is a good place to start. Lastly, platforms should adopt a standardized definition of climate disinformation to inform their content moderation policies, as Pinterest has done. These policy solutions would build trust between companies and the public regarding their efforts to track and mitigate mis- and disinformation.
At a deeper level, we must stop the monetization of disinformation, a dark new industry estimated to generate more than $2.5 billion per year, according to a recent report from NewsGuard and Comscore. Much of that money flows through Google and the ad tech industry, which funds the darkest corners of the Internet. Social media companies and publishers must stop allowing climate denial to be profitable. Musk and others oppose these measures, arguing that they interfere with freedom of speech, but that supposed conviction hasn’t stopped Musk from using his power to silence people who criticize him. We need a level playing field so that everyone’s voice can be heard, not just a radical minority’s.
Governments and multilateral institutions can also play a substantial role in regulating these platforms. For instance, they can require basic reporting on harms caused—just as we do for every American product, from airplanes to zip lines. If there’s a crash, hand us the black box and pay for damages. The EU’s Digital Services Act is a solid example of legislation that, if implemented correctly, will reduce the spread of disinformation and better protect users. It contains transparency and accountability mechanisms that require product harm reporting and seriously penalizes companies that don’t comply—to the tune of 6 percent of their global revenue. The United States has a comparable proposal in the Digital Services Oversight and Safety Act, but still has a ways to go.
Climate change is an overwhelming crisis with complex, multifaceted solutions. The spread of climate disinformation doesn’t have to be. Humans wrote the code that amplifies this false content, and humans can fix it.