In an extensive analysis led by Daniel Hickey of the University of California, Berkeley, researchers present alarming findings on the prevalence of hate speech on the social media platform X, previously known as Twitter, under Elon Musk's ownership. The study, published in the open-access journal PLOS One on February 12, 2025, indicates that following Musk's acquisition of the platform on October 27, 2022, the frequency of hate speech surged dramatically, rising by approximately 50 percent in the months after the takeover. The persistence of hate speech under Musk's leadership calls into question the efficacy of moderation policies intended to curtail harmful content.
The researchers back this finding with a rigorous methodological framework, employing established techniques to measure levels of English-language hate speech on X. Over the analysis window, which extended through June 2023, they found that spikes in hate speech that began just before Musk's acquisition were not only sustained afterward but grew. The trend included notable increases in specific types of slurs, particularly homophobic, transphobic, and racist ones. Weekly hate speech rates remained significantly elevated throughout the period, indicating an alarming landscape for social media discourse.
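The article does not reproduce the study's actual pipeline, term lists, or data. Purely as an illustrative sketch of what a weekly hate-speech-rate measurement looks like in principle, the following toy example (hypothetical posts, placeholder lexicon) computes the share of posts per week that match a term list:

```python
from collections import Counter

# Hypothetical mini-corpus of (ISO week, post text) pairs -- illustrative
# only; the study's real corpus and lexicon are not reproduced here.
posts = [
    ("2022-W42", "what a lovely day"),
    ("2022-W42", "<slur> example"),
    ("2022-W44", "<slur> another"),
    ("2022-W44", "ordinary post"),
    ("2022-W44", "<slur> again"),
]

# Placeholder standing in for an established hate-speech term list.
HATE_TERMS = {"<slur>"}

def weekly_hate_rate(posts):
    """Share of posts per week containing at least one lexicon term."""
    totals, hits = Counter(), Counter()
    for week, text in posts:
        totals[week] += 1
        if any(term in text.lower() for term in HATE_TERMS):
            hits[week] += 1
    return {week: hits[week] / totals[week] for week in totals}

rates = weekly_hate_rate(posts)
```

A real measurement would add tokenization, context-aware classification, and sampling corrections; this sketch only shows the shape of the weekly-rate metric.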
In detailing the findings, the researchers noted that the average number of "likes" on posts containing hate speech surged by roughly 70 percent, a worrying sign that more users were engaging with, and being exposed to, such harmful content. These metrics reveal a concerning shift in user engagement patterns on the platform, fostering an environment where hate speech flourishes. Importantly, these developments run counter to Musk's public pledges to reduce inauthentic activity and the harmful impact of bots, suggesting a systemic disengagement from the commitments he underscored shortly after taking control.
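The 70 percent figure is a percent change in mean engagement between the pre- and post-acquisition periods. A minimal sketch of that calculation, using made-up like counts chosen only to illustrate the arithmetic (not the study's data):

```python
from statistics import mean

# Hypothetical like counts on hate-speech posts before and after the
# acquisition -- illustrative numbers only, not the study's data.
likes_before = [10, 20, 30]   # mean = 20
likes_after = [24, 30, 48]    # mean = 34

# Percent change in mean likes from the "before" to the "after" period.
pct_change = (mean(likes_after) - mean(likes_before)) / mean(likes_before) * 100
```

With these toy numbers the mean rises from 20 to 34, a 70 percent increase, mirroring the magnitude the study reports.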
Prior research has established a connection between online hate speech and offline hate crimes, illuminating the potential consequences of rampant hate speech within digital spaces. The failure to curb bots and bot-like accounts is especially disconcerting, as these entities are known to amplify the spread of misinformation and spam. This oversight could compound the potential for societal harm, with implications extending beyond the immediate realm of social media interactions.
Despite Musk’s pledge to diminish bot presence on X, the analysis reveals that the number of bot and other inauthentic accounts did not see a decline; in fact, there are indications that their presence may have grown. This stark inconsistency between claimed intentions and observed reality raises critical questions about the authenticity of moderation efforts under Musk’s leadership. The researchers emphasize the necessity for enhanced monitoring and regulatory intervention within social media platforms, underscoring that existing policies currently fail to effectively manage exposure to harmful content.
The implications of this analysis are far-reaching, prompting critical discussions about the role of social media companies in safeguarding user experiences and public discourse. Critics argue that platforms like X must take immediate action to improve moderation practices and protect users from exposure to hate speech and misinformation. The study also highlights the need for further research to explore the dynamics of hate speech across various social media platforms and the effectiveness of interventions.
Moreover, the continued increase in hate speech and the persistent challenge posed by bot activity necessitate a deeper understanding of how societal norms and values are reflected and refracted in digital spaces. This research initiative draws attention to the urgent need for legislative and societal interventions aimed at both technological and human behavioral aspects of online interactions.
As digital platforms continue to grow in influence, the call for responsible governance in monitoring hateful or misleading speech becomes paramount. Stakeholders from various sectors—including policymakers, researchers, and tech companies—must come together to formulate tangible solutions to address the toxic culture permeating social media, characterized by rampant hate speech and the proliferation of misinformation.
In conclusion, Hickey et al.'s findings provide an unsettling snapshot of the current state of X under Musk's leadership, bringing to light critical issues surrounding hate speech, moderation, and the enduring presence of bots. Acknowledgment of these findings may catalyze further discussion of policy reform and the imperative for improved digital hygiene in an interconnected world. As online platforms increasingly shape societal dialogue, it is essential to heed this research and advocate for a healthier digital ecosystem.
Subject of Research: People
Article Title: X under Musk’s leadership: Substantial hate and no reduction in inauthentic activity
News Publication Date: 12-Feb-2025
Web References: PLOS One
References: Hickey D, Fessler DMT, Lerman K, Burghardt K (2025) X under Musk’s leadership: Substantial hate and no reduction in inauthentic activity. PLoS ONE 20(2): e0313293.
Image Credits: Hickey et al., 2025, PLOS One, CC-BY 4.0
Keywords: Hate Speech, Social Media, Elon Musk, X, Bots, Digital Moderation, Misinformation, Public Health, Online Discourse, Policy Reform.