What is online hate and how can you combat it?

We explore the rise of misinformation and disinformation, examine forms of regulation and self-regulation around hate speech, and provide simple steps to help you combat online hate

Digital or online hate involves language or actions that target a person or group of people in the virtual space. It takes various forms, including harassment, cyberbullying, threats, doxxing, and the spread of hateful content targeting race, gender, religion, sexuality, disability, or other aspects of identity. Online hate occurs across virtual spaces, including social media platforms, forums, messaging platforms, and comment sections.

Stop Hate explain why online hate feels particularly pernicious. It has a level of permanence rarely found in offline hate, as hateful content usually remains in the online space indefinitely. It can spread quickly across the web, reaching a large audience, and it often remains anonymous, leading to a general absence of accountability. And online hate is a growing problem, with many virtual platforms scaling back the safeguards that once protected digital citizens.

In this article, we will explore the consequences of online hate, the rise of hate in the age of artificial intelligence (AI), the self-regulation and legal regulation of online hate speech, the charities working to prevent online hate, and the steps your organisation can take to combat online hate.

Skip to: The extent and spread of online hate

Skip to: The consequences of online hate

Skip to: The rising tide of online hate

Skip to: Online hate in the age of artificial intelligence

Skip to: The regulation around online hate

Skip to: Steps you can take to combat online hate

Skip to: Charities working to prevent online hate

The extent and spread of online hate

Online hate is widespread. According to Statista, two in three people frequently encounter online hate in virtual spaces. An ADL study found that half of US teenagers experienced online hate in the past 12 months, with Facebook and Instagram cited as the platforms on which harassment most often occurs. Three out of five teens worry about being harassed, threatened, or targeted online. And an Ofcom report found that, in the UK, half of 12- to 15-year-olds had seen hateful content online.

Anyone can become a victim of online hate, but it most often targets marginalised and minoritised people and communities. Online hate feeds the rise of misinformation, and the rise of misinformation intensifies online hate. Fake news stories can portray certain groups as dangerous or threatening, stoke political hostility, and deploy fabricated statistics to reinforce prejudices. Conspiracy theories that proliferate online create fear and resentment, often targeting specific groups.

The consequences of online hate

Online hate has real-world consequences. Recipients report feeling anxious, depressed, and excluded. According to a report from Ofcom, individuals feel embarrassment, shame, hopelessness, and exhaustion, and many retreat from the online world, self-censoring their output or refusing to participate at all. That’s why, as we’ve mentioned previously, online hate is an issue of digital exclusion, preventing fair and meaningful participation in the online space.

And online hate, according to a recent scientific paper, has still more serious consequences. In Europe, online hate has played a role in the assassination of a politician, for example, and has forced innocent communities to flee for safety. Online hate has also contributed to a rise in self-harm and suicide among younger people. A study by academics at Cardiff University’s HateLab found a direct correlation between online hate and physical crimes against minorities. The study mirrors a similar study in Germany from 2020 and another by New York University, which showed links between online hate and offline violence in 100 cities.

The rising tide of online hate

And online hate is worsening, largely because online platforms have scaled back their efforts to combat it. Elon Musk’s takeover of Twitter changed the landscape of social media self-regulation. Twitter, now X, once aimed to safeguard users against misinformation and online hate at scale, but has since changed its approach. X removed some of the safeguards that prevented the spread of misinformation: it reinstated banned accounts, for example, and removed blue ticks from authoritative sources. The platform laid off an entire team dedicated to content moderation. And Musk himself has engaged with misleading and arguably hateful content. A 2025 study found a spike in hate speech on X around the time of Musk’s takeover, substantiating many earlier studies.

But Musk is not alone. Other platforms, including Instagram, Facebook, and YouTube, have followed X’s lead: many introduced paid verification of accounts, revised their methods of content moderation, laid off staff responsible for moderation, and so on. As Nora Benavidez, senior counsel at Free Press, claims: “The deluge of fake, hateful, and violent content on social media platforms is about to go from very bad to worse…Big tech executives Elon Musk and Mark Zuckerberg have made reckless decisions to maximise their profits and minimise their accountability.”

And the election of Donald Trump will likely lead to an increase in online hate, not just in the US but around the world. Trump has long used new media to push a self-serving narrative, with little regard for the truth. In early 2025, for example, he claimed that Ukraine “started” the conflict with Russia and made wholly unsubstantiated claims about the rise in autism. Trump’s re-election campaign drew support from some of the so-called Broligarchs, who have minimised social media moderation and undermined diversity policies in their companies. And because social media companies are mostly based in the US, they are shaped disproportionately by US politics. In the simplest terms, Trump’s re-election creates an environment in which online hate can thrive.

Online hate in the age of artificial intelligence

The rise of generative artificial intelligence (AI) may further exacerbate online hate. Generative AI, despite its benefits, makes misinformation easy. It allows the creation of deepfakes: highly realistic images, videos, and audio that fabricate events or impersonate individuals. Deepfakes spread on social media – and people believe them. One famous example was shared across social media in 2022: a deepfake of Ukrainian President Volodymyr Zelensky asking his army to surrender. Voice manipulation lets users manufacture fake audio in another person’s voice and likeness to disseminate false information. Hate groups also use AI-dependent bot networks to spread online hate: posting, liking, sharing, and commenting on false or misleading posts to game the algorithm and amplify the message.

AI-generated misinformation has real-world consequences. In 2024, a fatal attack at a dance class in Southport was followed by riots across the UK, fuelled in part by online misinformation and hate speech. Various accounts on X falsely speculated that the suspect was a “Muslim immigrant” and an “asylum seeker”, and circulated an incorrect name for the suspect. Some posts were accompanied by Islamophobic and racist hate. One account used potentially AI-generated imagery and text to mimic a trusted news site and repeat the false claims, which many users shared. The post was viewed two million times before it was taken down – and its claims were amplified by many controversial influencers, some of whom had previously been banned from Twitter for spreading misinformation.

We have long talked about the power of AI, but that power has been wielded to harm others. And, as we’ll see, the regulation meant to tackle such use cases has thus far proved ineffective.

The regulation around online hate

Hate speech refers to speech that incites violence against people on the grounds of race, religion, sexual orientation, gender identity, and similar characteristics. Many countries around the world prohibit hate speech. The UK, for example, prohibits hate speech, with the Crown Prosecution Service deciding whether to prosecute in each case. Online hate is harder to regulate because perpetrators are often anonymous, and offences can be harder to prove. But, in theory, online hate is subject to the same laws as offline hate speech, including the Malicious Communications Act 1988, the Public Order Act 1986, and the Terrorism Act 2006, among others. New regulation, the Online Safety Act, aims to make social media platforms and search engines do more to protect people, particularly children, from harmful content. The Act creates new offences, such as “cyber flashing” and the sharing of “deepfake” pornography, and Ofcom has been given additional powers to ensure companies comply.

But many have criticised the Online Safety Act as insufficient, calling for tougher rules and a duty of care on tech firms. The Act does not effectively address the rise of misinformation, and its core focus on preventing “harmful” content seems too vague, too open to interpretation. Senior politicians, calling for more robust action against online hate, suggest the Act is not fit for purpose. The Act provides a clear example of how governments are struggling to regulate the online space, largely because they’re not moving fast enough: first introduced in 2022 and still not fully implemented, it makes only one vague reference to AI.

To regulate online hate effectively, the UK must go further, placing the onus on the companies hosting misinformation. At present, countries largely encourage social media platforms to self-regulate and, as we have seen, that gives companies licence to stop moderating content on their own platforms. A stronger legal framework and clearer enforcement mechanisms seem necessary to tackle online hate speech, alongside stronger platform guidelines and greater digital literacy.

Steps you can take to combat online hate

In the absence of effective regulation, and with only minimal self-regulation from platforms, the onus presently falls on individuals and organisations to combat online hate. So below we look at some of the ways your organisation can do so.

Improve your digital and media literacy

Digital and media literacy have become ever more essential in recent years. As part of that training, staff and volunteers should learn how to combat misinformation and online hate. The first element of training must be awareness: helping individuals to locate and identify forms of misinformation and hate speech. According to First Draft, you can prevent misinformation and online hate by employing healthy scepticism and adopting a heightened awareness of the effects of misinformation. Plenty of digital and media training is available online.

As part of that digital and media training, show support for any employees working on the organisation’s social media accounts. As we’ve previously explored, people who work on social media are often disproportionately subjected to online hate and trolling. That can take a huge toll on their mental health, so organisations should take steps to support these employees.

Teach your employees how to “pre-bunk”

Prevention is better than cure when it comes to misinformation and online hate, according to First Draft. The problem is that misinformation continues to exert influence even after it has been corrected. Consider some of the examples above: even after fact-checkers and reputable sites highlighted false narratives, millions had already consumed those narratives and continued to believe them, despite debunking efforts.

“Pre-bunking” provides a strategy to get ahead of misinformation, allowing you to spot and refute a misleading claim at the earliest possible stage. Users can usually predict a rise in misinformation and online hate, according to a guide from the University of Cambridge, BBC Media Action, and Jigsaw. Misinformation spikes during election cycles, health crises, and environmental disasters, so be prepared to kick into action and tackle any false claims you notice during these periods.

Charities and reputable media organisations are well placed to “pre-bunk”. That’s because they are trusted by their audiences and often possess expertise on given subjects. The best way to “pre-bunk” is through active monitoring, perhaps using RSS feeds or creating bespoke searches on the main social platforms, to find examples of online hate or misinformation, as in the sketch below. Then use your expertise to respond with the correct information, preferably citing factual sources.

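To make that monitoring concrete, here is a minimal keyword-watch sketch in Python over RSS feeds. It assumes the third-party feedparser library, and the feed URL and watch terms are hypothetical placeholders: swap in the sources and topics your organisation actually tracks.

# A minimal RSS keyword-watch sketch (assumes: pip install feedparser).
# The feed URL and watch terms below are illustrative placeholders.
import feedparser

FEEDS = ["https://example.org/news.rss"]      # hypothetical feed URL
WATCH_TERMS = ["asylum seeker", "fake cure"]  # topics your charity monitors

def scan_feeds(feeds, terms):
    """Return (title, link) pairs for entries mentioning any watch term."""
    hits = []
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(term.lower() in text for term in terms):
                hits.append((entry.get("title", ""), entry.get("link", "")))
    return hits

for title, link in scan_feeds(FEEDS, WATCH_TERMS):
    print(f"Candidate claim to pre-bunk: {title} ({link})")

Flagged items are only candidates: a person should still judge whether each claim needs pre-bunking and draft the factual response.
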
Full Fact say that refuting false information depends on three steps: stating that the information is wrong, explaining why it’s wrong, and stating what is true. The approach works because people need a new story to replace the old, inaccurate story they previously believed. For more information, see the practical guide to combatting misinformation through pre-bunking.

Report inaccurate or hateful information

People often believe that the online space is abstract and free of real-world consequences. But, as we’ve shown, online hate leads to very real ramifications. So posts that are hateful and harmful, posts that fall under the banner of hate speech, absolutely should be reported.

Users should always gather evidence in the first instance, as that evidence will help with any investigation, whether you report the online hate to platforms, the police, or one of the other organisations we mention below. So take screenshots, save webpage links, and so on.

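As an illustration, the short Python sketch below keeps a timestamped log of that evidence using only the standard library. The file name and fields are hypothetical rather than a prescribed format, and screenshots still need to be captured separately.

# A minimal evidence log for later reports to platforms or police.
# Uses only the Python standard library; fields are illustrative.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("online_hate_evidence.csv")  # hypothetical location

def log_evidence(url, description, screenshot_path=""):
    """Append one timestamped record of a hateful post."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "url", "description", "screenshot"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         url, description, screenshot_path])

log_evidence("https://example.com/post/123",
             "Abusive reply targeting a staff member",
             "screenshots/post123.png")
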
Stop Hate explain the options available to users. They suggest, as a first step, using the ‘Report’ functions on the relevant platforms. That may lead to the removal of a post, suspension of an account, or even closure of an account that breaches platform guidelines. One issue with that approach, as seen in recent months, is that guidelines across the major platforms have become less stringent, with even obvious online hate apparently not breaching the rules.

If you think a post is criminal, you can report it to the police for criminal investigation. The police typically follow up on acts of online hate, gather evidence, then refer the crime to the Crown Prosecution Service. If you are unsure whether an incident broke the law – and it can be tricky – you can contact Stop Hate directly and they’ll offer information, advice, and support. They are happy to take anonymous reports if you do not wish to share personal information.

Charities working to prevent online hate

Many charities work hard to prevent online hate. Glitch’s purpose is ending online abuse and championing digital citizenship: the right of all individuals to engage in online spaces safely and freely, without discrimination. Glitch are campaigning for a so-called “Tech Tax”, under which tech giants would be taxed 10%, with the proceeds spent on ending online abuse. Glitch also offer digital self-care and self-defence training for activists, who are more likely to receive online abuse, as well as resources for people who have experienced it.

Barnardo’s are campaigning for the UK government to conduct research on the harm social media does to mental health. They advocate social media education for children and broader regulation of the internet to address dangers like cyberbullying. Other charities, such as Women’s Aid and The Proud Trust, provide support and advice for people experiencing online violence.

Mind have resources for those experiencing online hate, as well as information on mental health and staying safe online. PAPYRUS, a national charity dedicated to the prevention of young suicide in the UK, have adopted four core approaches to help build a safer and more inclusive internet: reporting harmful content, supporting people at risk through their helpline, raising awareness, and building a supportive community.

Stop Hate, whom we’ve referenced several times in this article, work to challenge all forms of hate crime, online and offline. And the Center for Countering Digital Hate works to stop the spread of online hate and disinformation through research, campaigns, and policy advocacy, aiming to hold big tech platforms accountable for the harm they cause.


Ioan Marc Jones