Online spaces allow people to connect, share ideas, and build communities, but they can also expose individuals to harmful behaviours such as harassment, hate speech, and abusive content. Understanding what these behaviours look like is the first step toward responding safely and confidently.
Harassment refers to repeated or targeted actions intended to intimidate, embarrass, or upset someone. This may include ongoing negative comments, threatening messages, or spreading harmful rumours online.
Hate speech involves language or content that attacks or discriminates against individuals or groups based on identity factors such as race, religion, gender, sexuality, disability, or nationality. It can appear as jokes, memes, or comments that normalise exclusion or hostility.
Harmful content includes posts, images, or videos that promote violence, self-harm, discrimination, or unsafe behaviour, even if they are not directed at a specific person.
Online harassment, hate speech, and harmful content can have real and lasting consequences. What happens in digital spaces does not stay online; it can affect a person’s confidence, relationships, participation in communities, and overall wellbeing. Repeated exposure to negative or abusive interactions may lead to stress, anxiety, or a reluctance to engage online or offline.
Harmful behaviour also shapes the wider digital environment. When abusive content goes unchallenged, it can normalise disrespect, discourage people from speaking up, and create unsafe online spaces.
Understanding why these issues matter helps learners recognise their role in creating safer digital environments. Knowing when to report, when to disengage, and how to support others contributes to a more respectful and inclusive online culture.
You are not powerless when you come across harassment, hate speech, or harmful content online. Small, informed actions can make a significant difference in protecting yourself and others while helping to create safer digital spaces.
Start by recognising when content crosses the line. Repeated insults, threats, or comments targeting someone’s identity are clear signals that action may be needed. Use platform tools such as report, block, or mute to reduce exposure and alert moderation teams. If a situation feels overwhelming, step back from the conversation and prioritise your wellbeing.
If you support young people or peers, create space for open conversations without judgement. Listen actively, avoid blame, and help them explore safe options such as reporting, saving evidence, or reaching out to trusted support services. Taking action does not always mean confronting someone publicly; sometimes the safest choice is to disengage, document what happened, and seek support.
As a youth leader, you can play an important role in supporting others who experience harassment, hate speech, or harmful content online. Your response can help someone feel heard, respected, and safer when they are unsure how to deal with a situation.
Create a supportive and non-judgemental space where others feel comfortable sharing their experiences. Listen carefully, take their concerns seriously, and avoid dismissing what happened. Encourage practical steps such as saving evidence, adjusting privacy settings, or using reporting tools, while respecting each person’s feelings and choices.
It is also important to know when to seek additional support. If a situation involves serious threats, exploitation, or safety concerns, reach out to a trusted adult, youth worker, or appropriate support service. By modelling respectful communication and healthy digital boundaries, youth leaders can help build confidence, awareness, and safer online habits within their peer groups.
Online safety is a shared responsibility between users, youth professionals, communities, and digital platforms.
Individuals play a role by recognising harmful behaviour, setting clear digital boundaries, and using available tools such as reporting, blocking, or adjusting privacy settings. Youth leaders can support this process by encouraging open conversations, promoting digital literacy, and guiding young people toward safe and responsible online choices.
Platforms and service providers also contribute through safety-by-design features, clear community guidelines, and accessible reporting systems. When users understand their role and take small, practical actions, online spaces become more respectful, inclusive, and supportive for all.
Spending time online is part of everyday life, but you may sometimes come across harmful, hateful, or upsetting content such as offensive comments, targeted harassment, or posts spreading hate or misinformation. Learning how to respond in a structured way helps protect your wellbeing and supports safer online communities.
Look for repeated insults, threats, or language that targets a person or group based on identity. Ask yourself: Is this respectful disagreement, or is it harmful behaviour?
Before reacting, take a screenshot or save the content if it is serious. Keeping evidence can help if you decide to report it or seek further support later.
Open the post or message, select the report option, and choose the most accurate category (for example, harassment or hate speech). You can also block, mute, or restrict the account to reduce further exposure.
If the situation feels overwhelming, step away from the screen. Taking a break is an active self-care strategy that helps you respond calmly and safely rather than reacting in the moment.
Reflect: What signs helped you decide the content needed to be reported? What action would you take first in a similar situation?
You are scrolling through a social media app and notice repeated comments targeting someone’s identity with insulting language. The content makes you feel uncomfortable and unsure how to respond.
What you could do:
Trust your judgement and pause before reacting. Take a screenshot to keep a record of what you saw. Open the options menu on the post or message and select the report function, choosing the most relevant category. If needed, block or mute the account to reduce further contact and protect your space online.
Why it matters:
Taking practical action helps protect your wellbeing, supports safer online environments, and reduces the likelihood that harmful behaviour continues unchecked.
You receive a message online that makes you feel uneasy or uncomfortable. Instead of keeping it to yourself, you decide to reach out to someone you trust, such as a friend, youth worker, or family member. You might start the conversation by saying: “Something happened online that made me feel uncomfortable. Can I talk to you about it?”
Sharing what happened helps you feel heard and supported, even if you are unsure what steps to take next. You do not need to have all the answers before speaking up.
Why it matters:
Trying to manage harassment or upsetting experiences alone can increase stress and make situations feel more overwhelming. Speaking to a trusted person can provide reassurance, practical guidance, and support in deciding safe and appropriate next steps.
Most social media platforms include built-in reporting tools designed to help users respond to harassment, hate speech, or harmful content. Use reporting features when you see behaviour that targets someone’s identity, spreads hate, or promotes violence or harmful actions.
How to use reporting tools:
Reporting is a constructive action that supports accountability and helps platforms identify harmful behaviour. It also contributes to creating online environments where respect and safety are prioritised.
You are watching a livestream when another user starts posting repeated messages encouraging self-harm in the chat. Instead of engaging with the person directly, you open the chat options, select “Report,” and choose the category related to harmful or dangerous content. You take a screenshot of the messages as evidence and then hide or leave the livestream to protect your wellbeing.
Why it matters:
Reporting serious or harmful behaviour helps platforms respond more quickly and may prevent others from being affected. Taking action in a calm, structured way allows you to support safer online environments without putting yourself at risk.
Adjusting your privacy settings is a practical way to reduce unwanted contact and manage how others interact with you online. Choosing who can view your content, send messages, or leave comments helps create clearer digital boundaries and lowers the risk of harassment or harmful interactions.
Start by reviewing your settings on platforms you use regularly. Check who can see your posts or stories, who can tag or mention you, and who is allowed to contact you directly. Options such as “Friends Only,” “Private Account,” or custom lists allow you to decide how visible you want to be in different online spaces.
On Snapchat:
On WhatsApp or Facebook:
Why it matters:
Setting clear privacy boundaries helps you stay in control of your digital space. Customising your settings can reduce unwanted contact, support your wellbeing, and create a more positive and respectful online experience.
Day 1: Understanding What’s Harmful
Goal: Learn to recognise hate speech, harassment, harmful content, and general negativity online.
Activity:
Read four short online examples (see final slides). For each, decide:
Reflection Prompt:
Have you ever seen something similar online? How did you respond or feel at that time?
Day 2: Know Your Tools
Goal: Explore how to report or block users on different platforms.
Activity:
Reflection Prompt:
Which reporting tools were new to you? Which platform felt most supportive for reporting abuse?
Day 3: Take Back Control
Goal: Set your personal privacy boundaries online.
Activity:
Challenge:
Screenshot or describe one setting you changed and why.
Day 4: What Would You Say?
Goal: Build confidence in responding safely to harmful content.
Activity:
Read three online scenarios (see final slides). For each, choose how you would respond:
Reflection Prompt:
Day 5: Make a Safety Plan
Goal: Create your own digital action plan for handling harmful content or harassment.
Activity (Individual):
Reflection Prompt:
What’s one thing you’ll do differently online after completing this module?
Think about a moment when I noticed something that felt wrong, such as repeated insults, pressure, or disrespectful behaviour. Reflecting on what I saw and how I reacted helps me understand my own warning signs and decision-making.
Consider whether I ignored it, reported it, blocked someone, or asked for support. Thinking about alternative actions helps me build confidence in using safer online strategies.
Reflect on who can see my posts, stories, or messages right now. Small changes in privacy settings can help me feel more in control of my digital space.
Think about personal habits that protect my wellbeing, such as taking breaks, talking to someone I trust, or stepping away from stressful conversations.
What it’s about:
A global campaign led by civil rights organisations that encouraged companies to pause advertising on major social media platforms to highlight concerns about online hate speech, harassment, and misinformation.
Why it matters:
The campaign shows how communities and organisations can take collective action to push for safer online environments. It highlights the importance of reporting harmful content, holding platforms accountable, and encouraging stronger digital safety standards.
Key themes:
Discussion prompts:
1 in 3 young people report being exposed to online hate or harassment (UNICEF, 2023).
Only 40% of teens who experience cyberbullying tell an adult or report it (Ofcom, 2023).
Fewer than half of social media users know how to report hate speech or abusive content on the platforms they use most (EU Kids Online, 2022).
Harassment is more likely to happen in private DMs or group chats than in public comment sections (EU Kids Online, 2022).
Taking screenshots of harmful content before reporting increases the chances of effective action if the post is deleted.
On most platforms with enhanced safety settings, blocking a user prevents them from seeing your profile or contacting you, even if they create a new account.
Online spaces can be positive places to connect and learn, but they also come with risks. Understanding how to recognise harmful behaviour, use digital tools safely, and set clear boundaries helps create a safer online experience for yourself and others.
Stay safe and informed: trust your instincts; if something feels wrong online, pause and take it seriously.
Take action when needed: harmful content can be reported, blocked, or muted using built-in platform tools.
Keep evidence: documenting incidents with screenshots can support reporting and help you feel more prepared.
Seek support: you do not have to manage online challenges alone; reaching out to a trusted person can help you decide safe next steps.
Protect your space: adjusting privacy settings allows you to control who can view or interact with your content.
Respond safely: avoiding engagement with harmful comments can prevent situations from escalating.
Support others: checking in with someone who is being targeted can help build safer and more respectful online communities.
Use your voice responsibly: reporting harmful behaviour contributes to healthier digital environments for everyone.
Day 1: Understanding What’s Harmful (Example Texts):
Example 1
“People like you shouldn’t even be allowed here. Go back to where you came from.”
Why it crosses the line:
Targets identity and promotes exclusion → Hate speech.
Example 2
“Wow… another girl gamer? Sure you’re not just here for attention?”
Why it’s problematic:
Not direct hate speech but reinforces stereotypes and creates an uncomfortable environment.
Example 3
“Send me your Snap or I’ll post that photo everyone laughs at.”
Why it crosses the line:
Threats + coercion → harassment and possible exploitation.
Example 4
“I disagree with your opinion, but I see your point.”
Why it’s safe:
Respectful disagreement without attacking the person.
Scenario 1: Targeted Harassment
Someone you don’t know starts leaving comments on your posts like: “You better watch yourself.”, “I know what school you go to.”, “You won’t be laughing soon.” They continue commenting and sending messages over several days.
Possible responses to discuss:
Tell a trusted adult or report it to the police
Scenario 2: Group Chat Pressure
You’re in a group chat where someone keeps posting memes making fun of a classmate’s appearance. Others react with laughing emojis.
Possible responses to discuss:
Scenario 3: Influencer Content
An influencer keeps appearing on your TikTok For You Page. Their videos focus heavily on extreme dieting and promoting weight-loss medication as a quick fix. The comments are full of people comparing their bodies. You notice the videos are starting to make you feel insecure, stressed, or not “good enough.”
Possible responses to discuss:
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project Number: 2024-2-PT02-KA220-YOU-000287246