Hate Speech Online: What It Is and How to Protect Yourself

Hate speech online affects many people. On social media platforms in particular, users are increasingly confronted with derogatory or hostile comments. What is hate speech, and how can individuals effectively defend themselves against it?

What Is Hate Speech Online – and Does It Apply to Your Situation?

Hate speech is a form of communication that seeks to incite hatred, discriminate against, or dehumanise individuals or groups based on characteristics such as ethnicity, national origin, religion, gender, disability, or sexual orientation. At its core, hate speech is rooted in the idea that certain groups are less valuable or deserve fewer rights.

Hate speech is a growing problem online. On social media in particular, environments can develop where hateful content becomes normalised, encouraging discriminatory or even violent behaviour. These dynamics do not remain online — they can shape real-world attitudes and contribute to hostility and violence offline.

At the same time, it is not always easy to determine whether content should be removed. There is often a tension between limiting hate speech and protecting freedom of expression.

This page is for you if you are in one of the following situations:

 

1. You were targeted by hate speech.

You encountered content that attacked you or a group you belong to based on an identity characteristic — and the platform did not act on your report or reversed a removal you requested.

2. Your content was removed for alleged hate speech.

The platform removed your post, restricted your account, or penalised you for content classified as hate speech — and you believe this decision was incorrect.

 

In both cases, User Rights can review the platform's decision independently and free of charge.

Examples of Hate Speech

  • Dehumanising language or comparison to animals: Individuals or groups from marginalised communities are degraded by being compared to animals or described as subhuman (e.g. comparing Black people to apes).
  • Generalising negative characteristics: Entire groups are portrayed as criminal, dangerous, or inferior based on identity (e.g. “All young migrants are rapists and sex offenders.”).
  • Claims of superiority: A person presents themselves or their group as superior while making derogatory or discriminatory statements about a marginalised community (e.g. sexist insults or discrimination against members of the LGBTQIA+ community).
  • Trivialising or endorsing violence against protected groups: Acts of violence, persecution, or historical atrocities against protected groups are downplayed, justified, glorified, or mocked (e.g. trivialising the Holocaust).

Note: Hate speech is distinct from harassment or cyberbullying, which typically target individuals regardless of their identity. If this better describes your situation, you can find more information on our page about online harassment.

What Rules and Laws Apply to Hate Speech Online?

Platform community guidelines

User Rights' dispute settlement process covers different social media platforms, such as TikTok, Instagram, Facebook, LinkedIn, or Pinterest. If you have reported hate speech to one of these platforms and are not satisfied with the outcome, you can submit your complaint to us.

Each platform has its own rules against hate speech, which apply independently of national law. This means platforms may remove or restrict content that violates their guidelines – even if that content is not illegal in your country. These rules evolve over time, so we recommend checking each platform's current guidelines directly.

National laws

Depending on your location, hate speech may also be regulated under national law. This can include criminal offences such as incitement to hatred, insult, or Holocaust denial, as well as civil and anti-discrimination law. 

User Rights currently reviews complaints only under German and Italian criminal law; civil and anti-discrimination claims fall outside our scope. Before submitting a complaint, please check which national criminal law provisions fall within our scope of review.

 → See our scope of review

What Can You Do About Hate Speech – and How Can User Rights Help?

1. If you are accused of hate speech

Has your content or account been removed or restricted due to alleged hate speech? You don't need to go through the reporting process below. You can bring your complaint directly to User Rights. 


2. If you are a victim of or witness to hate speech

Step 1: Report the content

Report the content directly on the platform where it appears. In the EU, social media platforms are legally required to provide a reporting mechanism and respond with a decision and an explanation.

  • Use the report button on the specific post, comment, video, or profile
  • Keep a record: take screenshots before reporting, including the URL, date, and any identifying details about the content
  • Allow up to 7 days for the platform's response — they are required to provide one

Step 2: Seek support (if needed)

You do not have to deal with hate speech alone. You can seek support from:

  • Lawyers specialising in media and freedom of expression law
  • Organisations such as HateAid that support victims of online abuse
  • Trusted individuals or mental health professionals

Step 3: Submit your complaint to User Rights

If the platform rejects your report or provides an inadequate response, you have the right to appeal. User Rights can support you in this process. We are independent, free to use, and the EU's first certified out-of-court dispute settlement body for social media platforms under Article 21 of the Digital Services Act (DSA).

Not sure what to expect? We have published anonymised decisions from real hate speech cases — including cases where we overturned platform decisions and others where we upheld them. See exemplary decisions at the bottom of this page ↓

Step 4: Combine different approaches

Submitting a complaint to User Rights does not prevent you from pursuing other options at the same time. In some cases, combining different approaches can be more effective.

File a criminal complaint 

If the content may constitute a criminal offence under national law – for example, incitement to hatred under the Legge Mancino in Italy or equivalent provisions in Germany – you can also report it to the police. Keep in mind that identifying anonymous perpetrators can be difficult, and criminal proceedings may take time.

Take civil action 

If a platform fails to remove clearly illegal content, you may consider legal action against the platform. Court proceedings can be more effective in certain cases, but tend to be time-consuming and costly.
 

Recent Decisions: Hate Speech Cases Reviewed by User Rights

These real cases involve users who brought hate speech complaints to User Rights after completing the platform's internal review process. They demonstrate what we assess, how we analyse cases, and what outcomes are possible.

 

Instagram: Content comparing Muslim people to a disease (April 2025) 

A user reported a post calling to "free Europe from cancer," accompanied by a crescent and star emoji — a reference to Islam. Instagram reviewed the report and decided not to remove the content. User Rights overturned this decision, finding that the post violated Instagram's Policy on Hateful Conduct. The content dehumanised Muslim people by comparing them to a disease, which is explicitly prohibited under the platform's rules. → Read the full decision

 

TikTok: Reported account allegedly promoting hateful ideologies (August 2025) 

A user reported a TikTok account suspected of spreading extremist content, including alleged right-wing symbols and references to self-harm. User Rights conducted a detailed review and upheld TikTok's decision to keep the account online. The alleged symbols were not present in any of the content, and the posts — while using language associated with "femcel" communities — were interpreted as social critique of gender-based violence rather than incitement to hate. → Read the full decision

 

These decisions are published in anonymised form. Under the European Digital Services Act (DSA), User Rights' decisions are not automatically binding on platforms. However, platforms are required to consider them in good faith and report back on implementation.