In an era where social media platforms serve as critical arenas for public discourse, the prevalence and patterns of online hate speech present significant challenges to societal cohesion and democratic values. A groundbreaking study delves deeply into the dynamics of hate speech within Portuguese social media, analyzing two major platforms—YouTube and Twitter (now X)—to unravel the complex interplay of linguistic, social-psychological, and rhetorical elements that drive hateful communication online.
The research draws on annotated corpora comprising tens of thousands of comments assessed for relevance and hate speech content by expert annotators. On YouTube, nearly four out of five comments (79%) in the dataset were flagged as relevant, with 16,128 comments explicitly classified as containing hate speech. In stark contrast, the Twitter/X corpus was far more filtered: only about a quarter of observations were relevant, yielding 5,624 relevant tweets, of which 2,621 contained hate speech. This disparity underscores not only the platform-specific nature of discourse but also differences in user interaction, moderation policies, and cultural norms of expression.
One of the most striking revelations from the study is the predominance of indirect hate speech over direct forms across both social networks. Indirect hate speech accounted for 68% of occurrences on YouTube and surged to 79% on Twitter/X, indicating a nuanced approach by users who camouflage hostility in subtler forms. Crucially, the study acknowledges that these categories are not mutually exclusive; a single comment often interweaves direct and indirect hateful content, amplifying its pernicious impact. An illustrative example from YouTube involves a comment that simultaneously employs overt derogatory labels and references to historical atrocities, weaving direct slurs with implied violent threats—a multifaceted form of hatred aimed at racialized communities.
The study meticulously categorizes hate speech manifestations, revealing that outgroup derogation is the dominant mechanism of discrimination in both direct and indirect forms. This derogation articulates itself through biting ridicule and the reinforcement of pervasive negative stereotypes, effectively perpetuating societal biases against marginalized groups. Beyond general disparagement, the linguistic fabric of hate speech leverages stereotypes as a leitmotif, portraying targeted communities—such as Roma or LGBTI+ groups—in broad, damaging strokes as morally and legally deviant. These stereotypes, in turn, catalyze the stigmatization and dehumanization of entire communities, reinforcing cycles of exclusion.
An essential insight emerges from the differential expression of threats within hate speech. Symbolic threats—such as challenges to cultural heritage and collective identity—dominate on YouTube, resonating with users invested in a nostalgic or mythologized vision of national identity. Conversely, Twitter/X harbors a prevalence of realistic threats emphasizing tangible social and economic fears, including accusations of resource exploitation and social parasitism, which are often targeted at racialized and Roma communities. This divergence highlights the role of platform affordances and user demographics in shaping the tenor and targets of hate speech.
The study also delineates the discursive strategies that distinguish direct from indirect hate speech. Direct hate frequently employs dehumanization, starkly equating entire communities with animals or vermin. This brutal rhetorical device strips targets of their humanity, facilitating overt aggression and violence. In contrast, indirect hate speech tends to invoke denialism and role reversal, portraying majority groups as the new victims and framing accusations of racism or homophobia as baseless hysteria. Such tactics not only minimize the harm inflicted on marginalized groups but also serve to justify ongoing discrimination by recasting the majority as the truly aggrieved party.
Rhetorical techniques play a pivotal role in shaping the tone and impact of hateful commentary. Verbal irony emerges as a dominant device across platforms and hate speech types, softening the explicitness of aggression while simultaneously reinforcing exclusionary stereotypes. Cloaked in a veneer of humor or sarcasm, such irony masks hostility and allows hateful ideas to circulate in socially palatable forms. Alongside irony, "call to action" fallacies incite tangible harm, urging violent measures against targeted groups. These explicit calls are alarming, given their potential to escalate from online vitriol to real-world violence.
Emotionally, the fabric of hate speech is woven predominantly from feelings of hate and anger, though their distribution differs by the form of hate speech. Direct hate speech is most closely tied to explicit hatred, intensifying antagonism towards target communities. Indirect hate speech, by contrast, exhibits a higher prevalence of anger—an emotion channeling frustration, fear, and resentment that fuels more covert expressions of hostility. Emotions thus color the communication and reception of hate discourse, facilitating different social and psychological mobilizations depending on context.
The study further explores the specificity of hate speech by target community and platform. Racialized communities bear the brunt of hate speech on YouTube, while comments targeting the LGBTI+ community predominate on Twitter/X. This finding underscores how platform cultures and user bases interact with societal prejudices, influencing which groups become focal points for hostility. Though data limitations caution overinterpretation, these trends spotlight critical areas for moderation and intervention.
Remarkably, the psychological and linguistic mechanisms of hate speech manifest with similar frequencies across diverse target communities. Outgroup derogation and negative stereotyping recur almost universally, indicating a shared structural backbone underlying hate directed at varied social groups. However, subtle differences emerge in the deployment of symbolic and realistic threats, hyperbole, and rhetorical devices like verbal irony, with intensity and form modulated by both platform and specific targets. These layers of complexity challenge one-size-fits-all models of online hate intervention.
The dynamics of hate speech are further illuminated through the study’s use of Phi coefficients to quantify associations between discursive strategies, emotions, rhetorical devices, and fallacies within the YouTube corpus. Though most correlations are statistically weak, certain moderate associations stand out: metaphors tend to co-occur with dehumanization, hyperbole aligns with stereotypes and role reversal, and appeals to fear link closely with perceived threats and emotional fear. These connections reveal an intricate web of linguistic and psychological tactics reinforcing hateful messaging, emphasizing the integrated nature of online hate speech construction.
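For readers unfamiliar with the measure, the Phi coefficient is simply Pearson's correlation applied to two binary variables, computed from the 2×2 contingency counts of label co-occurrence. A minimal sketch of how such associations could be computed over per-comment annotations (the vectors below are hypothetical illustrations, not the study's data):

```python
import math

def phi_coefficient(x, y):
    """Phi coefficient between two binary (0/1) annotation vectors.

    Equivalent to Pearson's r for dichotomous variables: values near 0
    indicate a weak association, larger magnitudes a stronger one.
    """
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return 0.0 if denom == 0 else (n11 * n00 - n10 * n01) / denom

# Hypothetical per-comment labels (1 = annotation present, 0 = absent)
metaphor = [1, 1, 0, 0, 1, 0, 1, 0]
dehumanization = [1, 1, 0, 0, 1, 0, 0, 1]
print(round(phi_coefficient(metaphor, dehumanization), 2))  # → 0.5
```

A coefficient around 0.5, as in this toy example, would count as a moderate association of the kind the authors report between metaphor and dehumanization.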
This multifaceted research not only expands academic understanding but carries urgent practical implications. The intricate interlinkages between how hateful rhetoric is framed, targeted, and emotionally charged complicate efforts for content moderation and policy design. By revealing the platform-specific variances and highlighting the nuanced coexistence of direct and indirect expressions of hate, the findings suggest that effective countermeasures must be adaptable, linguistically sophisticated, and context-sensitive to dismantle the social underpinnings of online hate.
Moreover, this work spotlights the critical necessity of monitoring the symbolic elements of hate speech, which often evade detection due to their indirectness and layered implications. The glorification of historical violence and the normalization of extremist views, as illustrated in user comments, indicate how deeply and socially embedded hate speech is within online communities. Efforts to combat such discourse demand more than keyword filtering; they require comprehensive, multidisciplinary strategies informed by social psychology, linguistics, and digital communication studies.
Ultimately, the study serves as both a clarion call and a roadmap for researchers, policymakers, and platform designers seeking to navigate the complex landscape of hate speech in the digital age. Its meticulous analysis and rich exemplification of hateful discourse in Portuguese social media extend beyond a regional focus, offering paradigms applicable globally as societies grapple with the balance between freedom of expression and the imperative to protect vulnerable communities from online harm. Understanding the anatomy and emotional architecture of hate speech is the first step toward crafting more humane, inclusive digital public spheres.
Subject of Research: The study investigates the nature, mechanisms, and manifestations of online hate speech within Portuguese social media, examining linguistic, social-psychological, rhetorical, and emotional dimensions across YouTube and Twitter/X platforms.
Article Title: Unpacking online hate speech in Portuguese social media: a social-psychological and linguistic-discursive approach.
Article References:
Guerra, R., Carvalho, P., Marques, C. et al. Unpacking online hate speech in Portuguese social media: a social-psychological and linguistic-discursive approach. Humanit Soc Sci Commun 12, 1709 (2025). https://doi.org/10.1057/s41599-025-05392-9
Image Credits: AI Generated

