Data collected by CyberWell found that though only 2 percent of antisemitic content on social media platforms in 2022 was violent, 90 percent of that violent content came from Twitter. And Cohen Montemayor notes that even the company’s standard moderation systems would likely have struggled under the strain of so much hateful content. “If you’re experiencing surges [of online hate speech] and you have changed nothing in the infrastructure of content moderation, that means you’re leaving more hate speech on the platform,” she says.
Civil society organizations that used to have a direct line to Twitter’s moderation and policy teams have struggled to raise their concerns, says Isedua Oribhabor, business and human rights lead at Access Now. “We’ve seen failure in those respects of the platform to actually moderate properly and to provide the services in the way that it used to for its users,” she says.
Daniel Hickey, a visiting scholar at USC’s Information Sciences Institute and coauthor of the paper, says that Twitter’s lack of transparency makes it hard to assess whether there was simply more hate speech on the platform, or whether the company made substantive changes to its policies after Musk’s takeover. “It is quite difficult to disentangle often because Twitter is not going to be fully transparent about these types of things,” he says.
That lack of transparency is likely to get worse. Twitter announced in February that it would end free access to its API—the tool that allows academics and researchers to download and interact with the platform’s data. “For researchers who want to get a more extended view of how hate speech is changing, as Elon Musk is leading the company for longer and longer, that is certainly much more difficult now,” says Hickey.
In the months since Musk took over Twitter, major public news outlets like National Public Radio, the Canadian Broadcasting Corporation, and other public media outlets have left the platform after being labeled as “state-sponsored,” a designation that was formerly only used for Russian, Chinese, and Iranian state media.
Meanwhile, actual state-sponsored media appears to be thriving on Twitter. An April report from the Atlantic Council’s Digital Forensic Research Lab found that, after Twitter stopped suppressing these accounts, they gained tens of thousands of new followers.
In December, accounts that had been banned were allowed back on the platform, including right-wing academic Jordan Peterson and prominent misogynist Andrew Tate, who was later arrested in Romania for human trafficking. Liz Crokin, a proponent of the QAnon and Pizzagate conspiracy theories, was also reinstated under Musk’s leadership. Crokin falsely alleged in a tweet that talk show host Jimmy Kimmel had featured a pedophile symbol in a skit on his show.
Recent changes to Twitter’s verification system, Twitter Blue, which lets users pay to get blue check marks and more prominence on the platform, have also contributed to the chaos. In November, a tweet from an account pretending to be corporate giant Eli Lilly announced that insulin was free. The tweet caused the company’s stock to dip almost 5 percent. But Ahmed says the implications of pay-to-play verification are much starker.
“Our analysis showed that Twitter Blue was being weaponized, particularly being taken up by people who were spreading disinformation,” says CCDH’s Ahmed. “Scientists, journalists, they’re finding themselves in an incredibly hostile environment in which their information is not achieving the reach that is enjoyed by bad actors spreading disinformation and hate.”
Despite Twitter’s protestations, says Ahmed, the study validates what many civil society organizations have been saying for months. “Twitter’s strategy in response to all this massive data from different organizations showing that things were getting worse was to gaslight us and say, ‘No, we’ve got data that shows the opposite.’”