Around 200 young people die by suicide every year. Every single one of these deaths is a tragedy that devastates families, friends and communities.
In recent weeks the role social media and the internet may have played in these deaths has been hitting the headlines. The Government’s White Paper on Internet Safety is due for publication any day now, so it’s critical for us to look at what the research says and the policy changes that are needed to better protect young people online.
We know that suicide is complex and is rarely caused by one thing. Research around the role of the online environment is mixed and tells us that there can be both positive and negative outcomes for young people. But the current evidence can only tell us so much. For instance, many questions remain about whether there is a causal relationship: does browsing suicide-related content “cause” an increase in suicidal thinking, or do people who are already suicidal seek out this content in search of help?
We recently undertook research with the University of Bristol to unpick this further. We found that at least a quarter of patients who had self-harmed with high suicidal intent had used the internet in connection with their self-harm. Likewise, a national inquiry into suicides by young people found there was suicide-related internet use in nearly half of suicides by young people every year. This isn’t surprising when you consider that for many of us, and particularly for young people, the online world is simply a normal part of everyday life. That’s why it’s so important we understand how and when it’s being used.
Internet use relating to suicide often includes searching for information about suicide, viewing help materials, searching for information about methods or posting messages with suicidal content. Our research suggests that there are different types of internet use. Individuals with lower levels of distress wanted to find out more about suicide and browsed suicide content online, while also looking for help and opportunities to communicate with others. Individuals with higher levels of distress, by contrast, were found to be looking specifically for information about methods and would actively avoid help and support.
And what about self-harm? Research sets out a similarly mixed picture around people’s motivations for posting self-harm related content and the impact of viewing it. There is certainly evidence that suggests some self-harm imagery can glorify, sensationalise and normalise self-harm. But there is also evidence that people who share images or talk about self-harm on social media are doing so to communicate distress, to share their journeys of recovery, and to provide support and information to others.
All too often, families speak of missed opportunities to prevent a young person from taking their own life. Preventing suicide requires bold action across our communities, whether they are online or offline. Managing access to harmful content across the online environment needs to play a central role in this, but it also has to be part of a wider plan of action.
Online platforms need to make it harder to find harmful content, including content that gives details of methods and lethality, as well as triggering content. They need robust safety policies in place, with highly trained moderators to ensure these policies are being implemented, and they need to find better ways of identifying and restricting content that glamorises and encourages suicide and self-harm. It is an offence to incite suicide online, but we need this law enforced and better ways of recognising when it is being broken.
There is also an important role online platforms should play in promoting support and help to people who may be struggling – for example, using their algorithms to suggest positive and helpful content. They could also provide and fund safe online spaces where young people can get the support they need, where they can connect with peers, where they can share thoughts in the way that works for them, and where they can find additional support if they need it. What is needed is more research and investment in this area. And we need to involve young people to help co-produce all these solutions and make sure they work for them.
It’s essential that more resource is put into moderation to ensure that social media companies don’t simply block all material around self-harm automatically, sending the message that self-harm must be hidden. Effective moderation is also key to keeping discussions safe and to putting safeguarding measures in place swiftly.
Samaritans’ Media Guidelines and regulation have helped improve responsible media reporting on suicide, reducing the risk of imitational suicide. The online environment needs regulation too.
We know suicide is complex and we need to address the wider issues being faced by young people. Challenges need to be tackled as part of a broader approach to children and young people’s mental health, rooted in public health. Teachers, clinicians, and other professionals working with children all have a role to play in understanding children and young people’s digital use and helping them to keep themselves safe.
We’re calling on the Government to work with online platforms to make sure bold policies are put in place to manage suicide and self-harm related content online, and to invest in the solutions that are urgently needed to provide online help and support when young people may need it most.
Originally posted here
Samaritans is a finalist in the 2019 Impact Awards; you can vote for them here.