Reporting Harassment, Hate Speech, and Harmful Content

By the end of this module, I will be able to:

  • Recognise and distinguish between harassment, hate speech, and harmful content online by identifying key warning signs and analysing realistic digital scenarios.
  • Understand the emotional, social, and community impact of online abuse, recognising how harmful behaviours affect wellbeing, participation, and digital safety.
  • Identify where and how to report harmful or abusive content across major platforms by following clear reporting steps and documenting evidence appropriately.
  • Apply practical strategies such as blocking, muting, restricting, or adjusting privacy settings to protect personal wellbeing and maintain healthy digital boundaries.
  • Support friends, peers, or service users who experience online abuse through active listening, safe responses, and signposting to appropriate support services.
  • Take informed action to promote safer and more respectful online environments by modelling positive behaviour and making responsible digital decisions.

Introductory Theory

Understanding Harassment, Hate Speech, and Harmful Content

Online spaces allow people to connect, share ideas, and build communities, but they can also expose individuals to harmful behaviours such as harassment, hate speech, and abusive content. Understanding what these behaviours look like is the first step toward responding safely and confidently.

Harassment refers to repeated or targeted actions intended to intimidate, embarrass, or upset someone. This may include ongoing negative comments, threatening messages, or spreading harmful rumours online.

Hate speech involves language or content that attacks or discriminates against individuals or groups based on identity factors such as race, religion, gender, sexuality, disability, or nationality. It can appear as jokes, memes, or comments that normalise exclusion or hostility.

Harmful content includes posts, images, or videos that promote violence, self-harm, discrimination, or unsafe behaviour, even if they are not directed at a specific person.

Why It's a Big Deal

Online harassment, hate speech, and harmful content can have real and lasting consequences. What happens in digital spaces does not stay online: it can affect a person’s confidence, relationships, participation in communities, and overall wellbeing. Repeated exposure to negative or abusive interactions may lead to stress, anxiety, or a reluctance to engage online or offline.

Harmful behaviour also shapes the wider digital environment. When abusive content goes unchallenged, it can normalise disrespect, discourage people from speaking up, and create unsafe online spaces. 

Understanding why these issues matter helps learners recognise their role in creating safer digital environments. Knowing when to report, when to disengage, and how to support others contributes to a more respectful and inclusive online culture.

Responding to Harmful Online Behaviour

You are not powerless when you come across harassment, hate speech, or harmful content online. Small, informed actions can make a significant difference in protecting yourself and others while helping to create safer digital spaces.

Start by recognising when content crosses the line. Repeated insults, threats, or comments targeting someone’s identity are clear signals that action may be needed. Use platform tools such as report, block, or mute to reduce exposure and alert moderation teams. If a situation feels overwhelming, step back from the conversation and prioritise your wellbeing.

If you support young people or peers, create space for open conversations without judgement. Listen actively, avoid blame, and help them explore safe options such as reporting, saving evidence, or reaching out to trusted support services. Taking action does not always mean confronting someone publicly; sometimes the safest choice is to disengage, document what happened, and seek support.

As a youth leader, you can play an important role in supporting others who experience harassment, hate speech, or harmful content online. Your response can help someone feel heard, respected, and safer when they are unsure how to deal with a situation.

Create a supportive and non-judgemental space where others feel comfortable sharing their experiences. Listen carefully, take their concerns seriously, and avoid dismissing what happened. Encourage practical steps such as saving evidence, adjusting privacy settings, or using reporting tools, while respecting each person’s feelings and choices.

It is also important to know when to seek additional support. If a situation involves serious threats, exploitation, or safety concerns, reach out to a trusted adult, youth worker, or appropriate support service. By modelling respectful communication and healthy digital boundaries, youth leaders can help build confidence, awareness, and safer online habits within their peer groups.

Creating Safe Online Spaces as a Youth Leader

Online Safety Is a Shared Responsibility

Online safety is a shared responsibility between users, youth professionals, communities, and digital platforms. 

Individuals play a role by recognising harmful behaviour, setting clear digital boundaries, and using available tools such as reporting, blocking, or adjusting privacy settings. Youth leaders can support this process by encouraging open conversations, promoting digital literacy, and guiding young people toward safe and responsible online choices.

Platforms and service providers also contribute through safety-by-design features, clear community guidelines, and accessible reporting systems. When users understand their role and take small, practical actions, online spaces become more respectful, inclusive, and supportive for all.

Tools and Strategies

Self-care Practice 1: Practise Reporting Harmful Content

Spending time online is part of everyday life, but you may sometimes come across harmful, hateful, or upsetting content such as offensive comments, targeted harassment, or posts spreading hate or misinformation. Learning how to respond in a structured way helps protect your wellbeing and supports safer online communities.

  • Step 1: Recognise when content crosses the line

Look for repeated insults, threats, or language that targets a person or group based on identity. Ask yourself: Is this respectful disagreement, or is it harmful behaviour?

  • Step 2: Pause and document

Before reacting, take a screenshot or save the content if it is serious. Keeping evidence can help if you decide to report it or seek further support later.

  • Step 3: Use platform tools

Open the post or message, select the report option, and choose the most accurate category (for example, harassment or hate speech). You can also block, mute, or restrict the account to reduce further exposure.

  • Step 4: Protect your wellbeing

If the situation feels overwhelming, step away from the screen. Taking a break is an active self-care strategy that helps you respond calmly and safely rather than reacting in the moment.

Reflect: What signs helped you decide the content needed to be reported? What action would you take first in a similar situation?

Example

You are scrolling through a social media app and notice repeated comments targeting someone’s identity with insulting language. The content makes you feel uncomfortable and unsure how to respond.

What you could do:

Trust your judgement and pause before reacting. Take a screenshot to keep a record of what you saw. Open the options menu on the post or message and select the report function, choosing the most relevant category. If needed, block or mute the account to reduce further contact and protect your space online.

Why it matters:

Taking practical action helps protect your wellbeing, supports safer online environments, and reduces the likelihood that harmful behaviour continues unchecked.

Self-care Practice 2: Talk to Someone You Trust

  • You do not have to deal with online hate or harassment on your own. Whether it is happening to you or you have witnessed it affecting someone else, sharing your experience with a trusted person can help you feel supported and less isolated.
  • Start by choosing someone you feel comfortable speaking to, such as a friend, family member, teacher, or youth worker. Explain what happened, how it made you feel, and what you are unsure about. A trusted person can help you think through safe next steps, such as reporting, adjusting privacy settings, or taking a break from the platform.
  • Reaching out is a proactive way to protect your wellbeing. Having someone listen without judgement can reduce stress, build confidence, and help you make clearer decisions about how to respond safely.

Example

You receive a message online that makes you feel uneasy or uncomfortable. Instead of keeping it to yourself, you decide to reach out to someone you trust, such as a friend, youth worker, or family member. You might start the conversation by saying: “Something happened online that made me feel uncomfortable. Can I talk to you about it?”

Sharing what happened helps you feel heard and supported, even if you are unsure what steps to take next. You do not need to have all the answers before speaking up.

Why it matters

Trying to manage harassment or upsetting experiences alone can increase stress and make situations feel more overwhelming. Speaking to a trusted person can provide reassurance, practical guidance, and support in deciding safe and appropriate next steps.

Approach and Tool 1: Use Reporting Systems

Most social media platforms include built-in reporting tools designed to help users respond to harassment, hate speech, or harmful content.  Use reporting features when you see behaviour that targets someone’s identity, spreads hate, or promotes violence or harmful actions. 

How to use reporting tools:

  1. Open the post, comment, or message you want to report.
  2. Select the options menu (often shown as three dots) and choose “Report.”
  3. Pick the category that best matches the situation, such as harassment or hate speech.
  4. If the content is serious, take a screenshot first so you have a record.

Reporting is a constructive action that supports accountability and helps platforms identify harmful behaviour. It also contributes to creating online environments where respect and safety are prioritised.

Example

You are watching a livestream when another user starts posting repeated messages encouraging self-harm in the chat. Instead of engaging with the person directly, you open the chat options, select “Report,” and choose the category related to harmful or dangerous content. You take a screenshot of the messages as evidence and then hide or leave the livestream to protect your wellbeing.

Why it matters

Reporting serious or harmful behaviour helps platforms respond more quickly and may prevent others from being affected. Taking action in a calm, structured way allows you to support safer online environments without putting yourself at risk.

Approach and Tool 2: Adjust Your Privacy Settings

Adjusting your privacy settings is a practical way to reduce unwanted contact and manage how others interact with you online. Choosing who can view your content, send messages, or leave comments helps create clearer digital boundaries and lowers the risk of harassment or harmful interactions.

Start by reviewing your settings on platforms you use regularly. Check who can see your posts or stories, who can tag or mention you, and who is allowed to contact you directly. Options such as “Friends Only,” “Private Account,” or custom lists allow you to decide how visible you want to be in different online spaces.

Example and how to use

On Snapchat:

  • Open Settings and go to Privacy Controls.
  • Change “Who Can Contact Me” to “My Friends” to reduce unwanted messages.
  • Adjust story visibility to “Friends Only” or create a custom list to decide who can view your content.

On WhatsApp or Facebook:

  • Review privacy options that control who can see your profile photo, status, posts, or online activity.
  • Adjust messaging settings so only trusted contacts can reach you.
  • Check tagging or comment permissions to manage how others interact with your profile.

Why it matters

Setting clear privacy boundaries helps you stay in control of your digital space. Customising your settings can reduce unwanted contact, support your wellbeing, and create a more positive and respectful online experience.

Activity Time

Day 1: Understanding What’s Harmful

Goal: Learn to recognise hate speech, harassment, harmful content, and general negativity online.

Activity:

Read four short online examples (see final slides). For each, decide:

  • Is this content Harmful, Problematic, or Safe?
  • Write a short note explaining your choice: what made it cross the line (e.g. targeting identity, using threats, spreading hate)?

Reflection Prompt:

Have you ever seen something similar online?  How did you respond or feel at that time?

Day 2: Know Your Tools

Goal: Explore how to report or block users on different platforms.

Activity:

  • Open your own social media apps (Instagram, TikTok, YouTube, etc.) and find the steps to report or block someone.
  • Take screenshots or note the path (e.g. “Instagram: tap ⋯ → Report → Hate Speech”).
  • Compare which platform makes it easiest to report harmful content.

Reflection Prompt:

Which reporting tools were new to you? Which platform felt most supportive for reporting abuse?

Day 3: Take Back Control

Goal: Set your personal privacy boundaries online.

Activity:

  • Review your privacy settings on one or more platforms you use.
  • Update one setting (e.g. who can comment, message, or tag you).
  • Write a short reflection on how that change might make you feel safer or more in control.

Challenge:

Screenshot or describe one setting you changed and why.

 

Day 4: What Would You Say?

Goal: Build confidence in responding safely to harmful content.

Activity:

Read three online scenarios (see final slides). For each, choose how you would respond:

  • Ignore it
  • Report it
  • Block the person
  • Speak up safely or ask for help

Then explain why you made that choice.

Day 5: Make a Safety Plan

Goal: Create your own digital action plan for handling harmful content or harassment.

Activity (Individual):

  • Complete your Digital Safety Plan
    • Who can I talk to if something online upsets me?
    • What steps will I take to report or block harmful content?
    • How will I take care of my wellbeing afterward?

Reflection Prompt:

What’s one thing you’ll do differently online after completing this module?

Reflection Questions

Case Study: The #StopHateForProfit Campaign

What it’s about:

A global campaign led by civil rights organisations that encouraged companies to pause advertising on major social media platforms to highlight concerns about online hate speech, harassment, and misinformation.

Why it matters:

The campaign shows how communities and organisations can take collective action to push for safer online environments. It highlights the importance of reporting harmful content, holding platforms accountable, and encouraging stronger digital safety standards.

Key themes:

  • Digital responsibility: users, organisations, and platforms all play a role in addressing harmful behaviour online.
  • Platform accountability: calls for clearer enforcement of community guidelines and stronger moderation systems.
  • Safer reporting tools: the need for accessible systems that allow users to report harmful content effectively.
  • Community action: how collective voices can influence change in digital spaces.
  • Online wellbeing: recognising the impact of hate speech and harmful content on individuals and communities.

Discussion prompts:

  • Do you think public campaigns can influence how platforms manage harmful content?
  • What improvements would you like to see in reporting or safety tools on social media?
  • How can young people contribute to safer and more respectful online communities?

Recommended Practice

  • Trust your instincts: if something feels wrong or uncomfortable online, pause and take it seriously before responding.
  • Use platform tools: report, block, mute, or restrict accounts that cross boundaries or promote harmful behaviour.
  • Keep evidence: take screenshots of harmful content before reporting so you have a record if the content is removed.
  • Talk to someone you trust: reaching out to a youth worker, parent, teacher, or friend can help you feel supported and decide safe next steps.
  • Adjust your privacy settings: review who can view your posts, send messages, or tag you to reduce unwanted contact.
  • Avoid engaging with trolls: responding to harmful comments can escalate situations; blocking and reporting is often safer.
  • Support others: check in with people who may be targeted online and encourage them to seek help or report safely.
  • Create positive digital spaces: model kindness, respect, and inclusivity to help build healthier online communities.

Did you know?

Fact #1: 1 in 3 young people report being exposed to online hate or harassment (UNICEF, 2023).

Fact #2: Only 40% of teens who experience cyberbullying tell an adult or report it (Ofcom, 2023).

Fact #3: Fewer than half of social media users know how to report hate speech or abusive content on the platforms they use most (EU Kids Online, 2022).

Fact #4: Harassment is more likely to happen in private DMs or group chats than in public comment sections (EU Kids Online, 2022).

Fact #5: Taking screenshots of harmful content before reporting increases the chances of effective action if the post is deleted.

Fact #6: Blocking a user prevents them from seeing your profile or contacting you, even if they create a new account (with enhanced safety settings on most platforms).

Quiz Time

Key Takeaways

Online spaces can be positive places to connect and learn, but they also come with risks. Understanding how to recognise harmful behaviour, use digital tools safely, and set clear boundaries helps create a safer online experience for yourself and others.

Stay safe and informed: trust your instincts; if something feels wrong online, pause and take it seriously.

Take action when needed: harmful content can be reported, blocked, or muted using built-in platform tools.

Keep evidence: documenting incidents with screenshots can support reporting and help you feel more prepared.

Seek support: you do not have to manage online challenges alone; reaching out to a trusted person can help you decide safe next steps.

Protect your space: adjusting privacy settings allows you to control who can view or interact with your content.

Respond safely: avoiding engagement with harmful comments can prevent situations from escalating.

Support others: checking in with someone who is being targeted can help build safer and more respectful online communities.

Use your voice responsibly: reporting harmful behaviour contributes to healthier digital environments for everyone.

Additional Resources

  1. Childnet International. (2023). Online hate speech: Responding and reporting guide for young people. https://www.childnet.com/resources/hot-topics/hate-speech/
  2. European Commission. (2022). Code of conduct on countering illegal hate speech online – Factsheet. https://ec.europa.eu/newsroom/just/items/709044
  3. Ofcom. (2023). Children and parents: Media use and attitudes report 2023. https://www.ofcom.org.uk/__data/assets/pdf_file/0022/267040/Children-and-parents-media-use-and-attitudes-report-2023.pdf
  4. UK Safer Internet Centre. (2023). How to report harmful content online. https://reportharmfulcontent.com
  5. UNESCO. (2022). Addressing hate speech through education. https://unesdoc.unesco.org/ark:/48223/pf0000381584
  6. UNICEF. (2023). Digital harassment and abuse: What young people need to know. https://www.unicef.org/stories/how-respond-online-harassment
  7. Common Sense Media. (2021). Hate speech, bias, and identity online: A guide for teens. https://www.commonsense.org/education/digital-citizenship/lesson/hate-speech-online

Activity Time

Day 1: Understanding What’s Harmful Example Texts:

Example 1

“People like you shouldn’t even be allowed here. Go back to where you came from.”

Why it crosses the line:
Targets identity and promotes exclusion → Hate speech.

Example 2

“Wow… another girl gamer? Sure you’re not just here for attention?”

Why it’s problematic:
Not direct hate speech but reinforces stereotypes and creates an uncomfortable environment.

Example 3 

“Send me your Snap or I’ll post that photo everyone laughs at.”

Why it crosses the line:
Threats + coercion → harassment and possible exploitation.

Example 4 

“I disagree with your opinion, but I see your point.”

Why it’s safe:
Respectful disagreement without attacking the person.

Scenario 1: Targeted Harassment

Someone you don’t know starts leaving comments on your posts like: “You better watch yourself.”, “I know what school you go to.”, “You won’t be laughing soon.” They continue commenting and sending messages over several days.

Possible responses to discuss:

  • Do not reply
  • Screenshot the messages and the account
  • Block the account
  • Report the account to the platform
  • Tell a trusted adult or report it to the police

Scenario 2: Group Chat Pressure

You’re in a group chat where someone keeps posting memes making fun of a classmate’s appearance. Others react with laughing emojis.

Possible responses to discuss:

  • Speak up respectfully
  • Leave the chat
  • Report to the platform or a trusted adult

Scenario 3: Influencer Content

An influencer keeps appearing on your TikTok For You Page. Their videos focus heavily on extreme dieting and promote weight-loss medication as a quick fix. The comments are full of people comparing their bodies. You notice the videos are starting to make you feel insecure, stressed, or not “good enough.”
Possible responses to discuss:

  • Use the “Not Interested” option on the video
  • Mute or unfollow the account
  • Restrict similar content in your settings
  • Report the content if it promotes unsafe or misleading health advice
  • Talk to someone you trust about how the content is making you feel

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project Number: 2024-2-PT02-KA220-YOU-000287246
