November 23, 2024
Twitter Submitted Incomplete Report on Efforts Against Disinformation: EU
Twitter did not provide a full report to the European Union (EU) on the efforts taken by the microblogging platform to combat disinformation and fake news, according to officials. Twitter said it's "making real advancements across the board" at fighting disinformation in its baseline report, but lagged behind firms like Google, TikTok, Microsoft, and Facebook and Instagram parent Meta, which the European Commission said showed a strong commitment to the reporting.

Twitter failed to provide a full report to the European Union on its efforts to combat online disinformation, drawing a rebuke Thursday from top officials of the 27-nation bloc.

The company signed up to the EU’s voluntary 2022 Code of Practice on Disinformation last year — before billionaire Tesla CEO Elon Musk bought the social media platform.

All who signed up to the code, including online platforms, ad-tech companies, and civil society groups, committed to measures aimed at reducing disinformation. They filed their first “baseline” reports last month showing how they’re living up to their promises.

Google, TikTok, Microsoft as well as Facebook and Instagram parent Meta showed “strong commitment to the reporting,” providing unprecedented detail about how they’re putting into action their pledges to fight false information, according to the European Commission, the EU’s executive arm. Twitter, however, “provided little specific information and no targeted data,” it said.

“I am disappointed to see that the Twitter report lags behind others, and I expect a more serious commitment to their obligations stemming from the Code,” Vera Jourova, the commission’s executive vice president for values and transparency, said in a statement. “Russia is also engaged in a full-blown disinformation war, and the platforms need to live up to their responsibilities.”

In its baseline report, Twitter said it’s “making real advancements across the board” at fighting disinformation. The document came in at 79 pages, less than half the length of those filed by Google, Meta, Microsoft, and TikTok.

Twitter did not respond to a request for further comment. The social media company’s press office was shut down and its communications team laid off after Musk bought it last year. Others whose job it was to keep harmful information off the platform have been laid off or quit.

EU leaders have grown alarmed about fake information thriving on online platforms, especially about the COVID-19 pandemic and Russian propaganda amid the war in Ukraine. Last year, the code was strengthened by connecting it with the upcoming Digital Services Act, new rules aimed at getting Big Tech companies to clean up their platforms or face big fines.

But there are concerns about what shows up on Twitter after Musk ended enforcement of its policy against COVID-19 misinformation and made other moves such as dissolving its Trust and Safety Council, which advised on problems like hate speech and other harmful content.

An EU evaluation conducted last spring, before Musk bought Twitter, and released in November found that the platform took longer to review hateful content and removed less of it in 2022 than in the previous year. Most other tech companies that signed up to the voluntary code also scored worse.

Signatories to the EU code have to fill out a checklist measuring their work on fighting disinformation, covering efforts to prevent purveyors of fake news from benefiting from advertising revenue; the number of political ads labelled or rejected; examples of manipulative behaviour such as fake accounts; and information on the impact of fact-checking.

Twitter’s report was “short of data, with no information on commitments to empower the fact-checking community,” the commission said.

Thierry Breton, the commissioner overseeing digital policy, said it’s “no surprise that the degree of quality” in the reports varies greatly, without mentioning Twitter.

The commission highlighted other tech companies’ actions for praise. Google’s report indicated that it prevented more than EUR 13 million (roughly Rs. 115 crore) of advertising revenue from reaching disinformation actors, while TikTok’s report said it removed more than 800,000 fake accounts.

Meta said in its filing that it applied 28 million fact-checking labels on Facebook and 1.7 million on Instagram. Data indicated that a quarter of Facebook users and 38 percent of Instagram users don’t forward posts after seeing warnings that the content has been flagged as false by fact-checkers.
