Misinformation: what are social media platforms doing to stop it?
Misinformation, defined as misleading or incorrect information, has generated much interest over the past few years. First came Trump’s weaponisation of misinformation on Twitter, then the COVID-19 anti-vax movement, and this week, alarm reached Parliament as a concerned father accused the social media giants of facilitating the spread of “malicious online lies” about his daughter.
With growing awareness of misinformation, particularly social media’s role in generating it, platforms like Twitter and Facebook have been forced to combat its spread. While common in their aim, misinformation policies vary widely across organisations, as does their effectiveness.
In March 2020, in response to the emerging COVID-19 crisis, Twitter introduced a raft of new measures aimed at stopping the spread of misinformation. These included broadening the definition of harm to cover more instances of misinformation and expanding the use of machine learning to identify misleading tweets. In May 2020, Twitter introduced new tweet labels linking to fact sheets on COVID-19. The labels were applied to tweets that conflicted with public health advice, and users were also given the option to report tweets for misinformation.
More recently, in January 2021, Twitter began trialling “Birdwatch”, a community-based approach to misinformation. Currently available only in the US, Birdwatch lets users add notes or context to a tweet they believe contains misleading information. A note can include anything from personal opinion to links to verified information sources, and users can also rate the quality of notes made by others. The desired outcome is a crowd-sourced “consensus” on the veracity of the tweet, displayed publicly alongside the original tweet.
Facebook has been a focus for many lawmakers when it comes to combating misinformation, and it is no surprise this is reflected in a comprehensive list of misinformation tools. At the crux of Facebook’s strategy is its fact-checking program, launched in late 2016. Under this program, third-party fact-checking organisations review potentially misleading content, label it accordingly and, if necessary, provide information that debunks false claims.
Facebook is more punitive than Twitter in its response to those who spread misinformation. The platform imposes penalties on serial offenders, including warnings that the content they are sharing may be misleading, notifications to visitors of a page that it repeatedly shares misinformation, and reduced distribution of misleading posts by up to 95%. Facebook also deploys centralised hubs in response to events likely to generate large amounts of misinformation. These “Information Centres” contain fact-checked messaging and were rolled out for the 2020 US federal election, climate science and COVID-19. Facebook also contributes to education in the journalism sector through the Facebook Journalism Project and News Integrity Initiative.
Aside from the big two, we reviewed the misinformation policies of Instagram, LinkedIn, Snapchat, TikTok and YouTube. Instagram, owned by Facebook, uses much the same methods to counter misinformation, employing fact-checkers, labelling misleading posts and reducing their distribution. LinkedIn offers only the option to report misleading content, which may then be reviewed and removed from the platform. YouTube, TikTok and Snapchat point to their Community Guidelines, which prohibit the spread of misinformation and promote its removal. However, how platforms identify and categorise this content is only loosely defined.
Are platforms successfully stopping misinformation?
The short answer: not really. Many social media platforms monitor and manage misinformation ineffectively. The reliance of platforms such as YouTube (Australia’s second most-used social media platform) and the youth-focused TikTok on Community Guidelines and community policing leaves them vulnerable to misinformation spreading unchecked. This was evidenced by the popularity of TikToker Jon-Bernard Kairouz’s COVID case number “predictions” earlier this year, which drew condemnation from the NSW Government yet did little to dampen their spread.
Although Facebook possesses the most robust misinformation management framework, the platform’s vulnerability to misinformation remains an issue. Research conducted this year by New York University found that posts from far-right news sources known to spread misinformation garnered six times more engagement than comparable posts from reputable news sources. Even if Facebook’s measures limit how widely misinformation spreads, this engagement suggests they do little to limit how readily it is consumed and believed.
There is a long way to go before we stop misinformation from spreading on social media. But awareness of the issue is growing, as is our understanding of the actions we can all take to help stop it. A silver lining of the COVID-19 pandemic has been increased public understanding of the problem, which has driven the introduction of several new policies across social media platforms and mounting pressure on organisations to do more. Governments are now beginning to take notice and become more involved in social media regulation, a trend likely to continue over the next decade. What is certain is that we will see more action on misinformation over the coming years, and organisations that fail to act will be left behind.