Right-leaning public figures dominate the online hate speech scene, researchers reveal.

Researchers from the University of Technology Sydney (UTS) have introduced an innovative machine learning model capable of detecting hate speech on social media more accurately than existing methods. 

This development could help reduce the spread of harmful online content, which often targets marginalised groups and contributes to social divisions.

The model draws on a machine learning technique known as multi-task learning (MTL) and is designed to improve the consistency of hate speech detection across different datasets.

It was trained on eight datasets collected from platforms like Twitter (now X), Reddit, Gab, and the neo-Nazi forum Stormfront. 
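For readers curious how such a set-up works in practice, the sketch below shows one common way to structure a multi-task classifier: a shared encoder learns features common to all the datasets, while a separate classification head per dataset absorbs that dataset's labelling quirks. This is a minimal illustration in PyTorch, not the UTS team's actual implementation; the dimensions, class names, and two-class output are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    """Minimal multi-task text classifier (illustrative, not the UTS model):
    a shared encoder plus one classification head per training dataset."""

    def __init__(self, input_dim: int = 768, hidden_dim: int = 256, num_tasks: int = 8):
        super().__init__()
        # Shared layers learn features common to all eight datasets.
        self.shared = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # One head per dataset ("task") captures dataset-specific label conventions.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 2) for _ in range(num_tasks)]  # 2 classes: abusive / not
        )

    def forward(self, text_embeddings: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.shared(text_embeddings))

# Example: classify a batch of four pre-computed sentence embeddings
# using the head associated with dataset 0.
model = MultiTaskClassifier()
logits = model(torch.randn(4, 768), task_id=0)
```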

The researchers tested the model on a unique dataset comprising 300,000 tweets from 15 American public figures, including former presidents, conservative politicians, far-right conspiracy theorists, media commentators, and left-leaning representatives perceived as highly progressive. 

The results were stark: of the 5,299 abusive posts detected, 5,093 originated from right-leaning figures, often featuring misogynistic and Islamophobic content.

Unlike traditional models, which are trained and tested on the same dataset, the MTL model is trained on one dataset and tested on another, enabling it to generalise better across diverse data.
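One standard way to measure that kind of cross-dataset generalisation is leave-one-dataset-out evaluation: train on all datasets but one, test on the held-out one, and rotate until every dataset has served as the test set. The helper below is a hedged sketch of that protocol, not code from the paper; `train_fn` and `eval_fn` are hypothetical stand-ins for whatever training and scoring routines a given experiment uses.

```python
from typing import Callable, Dict, List

def leave_one_dataset_out(
    datasets: Dict[str, List],                 # dataset name -> labelled examples
    train_fn: Callable[[List], object],        # hypothetical: fits a model on examples
    eval_fn: Callable[[object, List], float],  # hypothetical: scores a model, e.g. macro-F1
) -> Dict[str, float]:
    """Train on every dataset except one, test on the held-out one, rotate."""
    scores = {}
    for held_out, test_data in datasets.items():
        # Pool all examples from the datasets that are not held out.
        train_data = [ex for name, data in datasets.items()
                      if name != held_out
                      for ex in data]
        model = train_fn(train_data)
        scores[held_out] = eval_fn(model, test_data)
    return scores
```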

Associate Professor Marian-Andrei Rizoiu, Head of the Behavioural Data Science Lab at UTS, described this model as a potential tool in combating online abuse. 

He said the model can adapt to various forms of hate speech, including racism, sexism, and incitement to violence, making it more effective than current approaches.

“Automatic identification of hateful and abusive content is vital in combating the spread of harmful content and preventing its damaging effects,” Rizoiu said.

The findings align with broader discussions about online hate speech, which, according to Rizoiu, “lies on a continuum with offensive speech and other abusive content such as bullying and harassment”.

The model was found to effectively distinguish abusive speech from hate speech, and to identify prevalent topics such as ethnicity, women, Islam, and immigration.

The implications of this study are significant, as hate speech has the potential to weaken democratic institutions, polarise society, and even incite real-world violence. 

By providing more accurate tools to identify and manage such content, the researchers hope the MTL model could play a role in mitigating online harms and fostering safer digital spaces.


CareerSpot News