Can algorithms keep us from suicide?

Suicide has become one of the leading causes of death around the world, and has sometimes found an audience on social media. But algorithms can help us fight back.

By Adrian Baker

Read time: 8 mins

Image credit: Evgeniy Koryakin on Unsplash


At times, we can be so good at overcoming the complex, and so poor at managing what should be simple.  We’ve identified waves that warp space and time, but struggle to identify people at risk of suicide; we can reattach limbs and cure some cancers, yet we cannot stop some people’s depression; we’re building computers that harness the laws of quantum mechanics, but can’t build a health system capable of helping those with mental health conditions.

Every 40 seconds someone dies by suicide.  That’s 800,000 people every year.  Among 15-29 year olds, suicide is the second leading cause of death globally, and it accounts for roughly 1 in every 100 deaths worldwide.  And that’s not including attempted suicides.  And that’s not including depression.  And that’s not including anxiety.

Poor mental health is a problem we’ve found difficult to grasp, highlighting science and technology’s long-standing weakness at tackling the intangible.  But this is slowly changing, albeit in the form of a bitterly ironic double-edged sword.

For those struggling with mental health conditions, technology can be both a lifeline and a minefield.  There are hundreds of websites and apps that aim to provide mental health support, showing technology’s promise of opening up a world of resources.  At the same time, technology exposes teens to a world of cyberbullying, and perverse websites exist that actively encourage suicide.

Predicting suicide with social media and search engines

Facebook is a perfect example of the double-edged sword of technology and social media.  Sometimes the go-to place for cyberbullies, Facebook has been the unfortunate home of several live-streamed suicides and attempted suicides.  Initially, Facebook relied on users to proactively report suicidal content, but found that such reports sometimes came too late.  In response, Facebook launched an artificial intelligence program based on pattern recognition that scans posts and images for signs of suicide risk.  Once a potential case is identified by the algorithm, it is sent to human reviewers, who decide on the next steps to take.
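
Facebook has not published the details of its model, but at a high level the approach can be thought of as a text classifier whose highest-scoring posts are escalated to human reviewers.  The sketch below is a minimal, hypothetical illustration of that ‘score, then escalate’ pattern in Python; the model, threshold, and training examples are assumptions for illustration, not Facebook’s actual system.

```python
# Hypothetical "score, then escalate" sketch.  The model, threshold, and
# training examples are illustrative placeholders, not Facebook's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset (1 = concerning, 0 = not); a real system would be trained
# on vastly larger, carefully governed data.
posts = [
    "I can't see any way out anymore",
    "Nobody would miss me if I was gone",
    "Had a great time at the beach today",
    "Can't wait for the concert this weekend",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.7  # assumed cut-off; raising it trades recall for fewer false positives

def triage(post: str) -> None:
    """Score a post and queue it for human review if it crosses the threshold."""
    risk = model.predict_proba([post])[0][1]  # probability of the "concerning" class
    if risk >= REVIEW_THRESHOLD:
        print(f"ESCALATE to human reviewer ({risk:.2f}): {post!r}")
    else:
        print(f"No action ({risk:.2f}): {post!r}")

triage("I just want it all to end")
```

Where that threshold sits is precisely the false-positive trade-off discussed below.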

Google has also attempted to tackle mental ill health by launching an initiative in which a knowledge panel appears when someone searches for symptoms of clinical depression.  That knowledge panel allows people to check whether they are clinically depressed according to the validated PHQ-9 questionnaire.  Both of these efforts – important steps as they are – have a number of challenges to overcome.  For a start, questions can rightly be asked about the extent to which these algorithms should go through rigorous research to demonstrate efficacy.  That’s not only to show that such algorithms have the desired effect, but also to ensure that any risk of harm is minimised.  For example – what happens if an algorithm makes an error in identifying someone who might attempt suicide?  Facebook itself has acknowledged that false positives are an issue, but what are the processes for dealing with such errors to ensure a ‘cry wolf’ scenario doesn’t occur?  How predictive do we want an algorithm to be?  Do we want to predict an imminent suicide, or one that might happen in the coming months?  The further into the future we look, the greater the error rate, but also the better the chance of averting a crisis.
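
For context, the PHQ-9 that Google’s panel relies on is a published nine-item questionnaire: each item is scored from 0 (‘not at all’) to 3 (‘nearly every day’), giving a total between 0 and 27 that maps onto standard severity bands.  The snippet below sketches that published scoring logic; it is an illustration of the questionnaire itself, not of Google’s implementation, which isn’t public at this level of detail.

```python
# Scoring for the published PHQ-9 questionnaire: nine items, each answered
# 0 ("not at all") to 3 ("nearly every day"), for a total of 0-27.
SEVERITY_BANDS = [          # standard PHQ-9 severity cut-offs
    (4, "minimal"),
    (9, "mild"),
    (14, "moderate"),
    (19, "moderately severe"),
    (27, "severe"),
]

def score_phq9(answers: list[int]) -> tuple[int, str]:
    """Return the total score and severity band for nine item responses."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine responses, each scored 0-3")
    total = sum(answers)
    band = next(label for upper, label in SEVERITY_BANDS if total <= upper)
    return total, band

print(score_phq9([2, 1, 3, 2, 1, 0, 2, 1, 0]))  # -> (12, 'moderate')
```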

Gaining trust back

The answers to these questions will have a strong bearing on what governments and regulators do.  At the moment, Facebook’s and Google’s initiatives aren’t allowed in Europe because of its strict privacy laws, and given the Cambridge Analytica fiasco, it is unlikely that Europe will give Facebook more leeway.  But if these companies can begin providing consistent evidence of a beneficial impact on depression and suicide rates, there may be strong moral pressure for such initiatives to be allowed.

Tech companies are increasingly discovering the burden of responsibility that great power brings.  Facebook in particular has come under increasing scrutiny in the last couple of years, and its honeymoon period with the public might well be over.  The Cambridge Analytica scandal – and Facebook’s handling of the aftermath – damaged its credibility and called into question whether it can be trusted with our data.  And as it discovered with its responsibility to tackle fake news on its platform, the weight of public expectation can quite often force a greater degree of involvement than a company might initially have anticipated.  If tech companies can avert mental health problems, to what extent are they duty-bound to divert resources into tackling them?  And to what extent should tech companies intervene in common mental health problems such as anxiety or low-to-moderate depression?  Google has shown that it can take some steps, but what about Twitter, Instagram, and other social media outlets?

Reading minds

It isn’t only tech and social media companies that have sought to tackle suicide.  Almost 80% of people who die by suicide hide their suicidal thoughts from clinicians, making a difficult task even harder for those on the front lines.  That’s why scientists have turned to combining advances in neuroscience with breakthroughs in machine learning.  In a joint project spanning multiple universities, lead researchers Marcel Just from Carnegie Mellon and David Brent from the University of Pittsburgh used a functional magnetic resonance imaging (fMRI) scanner to analyse how the brains of individuals respond as they are presented with certain concepts.  As 34 participants sat inside the scanner, they were asked to think about the words the researchers presented to them.  Seventeen of the participants were known to have suicidal thoughts, and seventeen served as controls.  The researchers identified the words whose brain responses differed most between those who had suicidal thoughts and those who didn’t: ‘death’, ‘cruelty’, ‘trouble’, ‘carefree’, ‘good’, and ‘praise’.  Knowing which scans belonged to people with suicidal thoughts and which didn’t, the researchers wanted to find out whether a machine learning algorithm could correctly identify the suicidal group from the brain responses to these keywords alone.

Not only was the algorithm able to correctly identify the suicidal ideation group with 91% accuracy, it was also able to identify with 94% accuracy which of them had previously attempted suicide.  Whilst the study was relatively small, it does point to a future where brain scans could be used to more accurately monitor the progress of treatment for high-risk individuals.  For example, identifying which individuals are hiding their suicidal thoughts might tell doctors that more intensive intervention is needed.  But as with the algorithms used by the tech giants, this fMRI approach can only be part of the mental health professional’s arsenal.  In low-resource settings, access to fMRI scanners is costly and limited, meaning that the diffusion of this type of intervention will depend on an equitable and well-resourced health care system.

Photo credit: Carnegie Mellon University
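
With only 34 participants, studies like this are typically evaluated by holding one participant out at a time, training on the rest, and testing on the person left out.  The sketch below illustrates that leave-one-out procedure with synthetic stand-in features and a simple Gaussian Naive Bayes classifier; the real study used fMRI activation patterns for the concept words, and the numbers this toy example produces are not the study’s results.

```python
# Leave-one-out evaluation on synthetic stand-in features: each row is one
# participant's (made-up) neural response to the six concept words.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_per_group, n_words = 17, 6   # 17 ideators, 17 controls; 6 concept words

ideators = rng.normal(loc=0.5, size=(n_per_group, n_words))
controls = rng.normal(loc=-0.5, size=(n_per_group, n_words))
X = np.vstack([ideators, controls])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = suicidal ideation group

# Hold each participant out once, train on the other 33, test on the held-out scan.
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy on synthetic data: {scores.mean():.0%}")
```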

Algorithms as part of a larger arsenal

People might think that a breakthrough is a completely new technology or scientific discovery.  But often it’s about the application of existing technology to a new area or field.  This is the case with algorithms for the prevention of suicide.

There will inevitably be people with suicidal ideation who are not on social media and who will therefore be missed by this technology.  That is why these tools should be seen as only part of a larger arsenal capable of preventing and treating suicide.  Nevertheless, using algorithms to predict and identify at-risk individuals, if done right, can be immensely important.  This is because machine learning with access to millions of social media posts might be able to spot changing vocabulary more quickly than clinicians, who work in controlled clinical environments and see only a small subset of those with mental health problems.  Indeed, not all of those with mental health problems want, or even know they need, clinical intervention.  So being able to identify these hard-to-reach individuals and these sometimes fast-moving trends would help clinicians intervene earlier and perhaps more effectively.
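
As a toy illustration of what ‘spotting changing vocabulary’ might look like, the hypothetical function below compares how often terms appear in a recent window of posts against an earlier baseline and surfaces terms whose frequency has jumped.  Real systems would be far more sophisticated (and far more carefully governed), but the underlying idea – comparing recent language against a baseline at a scale no clinic could match – is the same.

```python
# Hypothetical sketch: flag terms whose per-post frequency in a recent window
# of posts has jumped relative to an earlier baseline window.
from collections import Counter

def emerging_terms(baseline_posts, recent_posts, min_ratio=3.0):
    """Return terms whose per-post frequency grew by at least min_ratio."""
    def per_post_freq(posts):
        counts = Counter(word for post in posts for word in post.lower().split())
        return {word: count / len(posts) for word, count in counts.items()}

    base = per_post_freq(baseline_posts)
    recent = per_post_freq(recent_posts)
    floor = 1 / len(baseline_posts)      # smoothing for terms unseen in the baseline
    flagged = {}
    for term, rate in recent.items():
        ratio = rate / base.get(term, floor)
        if ratio >= min_ratio:
            flagged[term] = round(ratio, 2)
    return flagged

baseline = ["feeling fine today", "long day at work", "work was fine"]
recent = ["feeling hopeless", "so hopeless lately", "everything is hopeless"]
print(emerging_terms(baseline, recent))  # -> {'hopeless': 3.0}
```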

But suicide is not a data problem.  It is both the outcome of a complicated, interwoven set of issues and a brutally simple final action.  Nevertheless, on the path to suicide there are descending steps, and those steps are events and information.  By better identifying and understanding those steps, mental health professionals can intervene and do what is necessary to avert a crisis.  Yes, suicide is not a data problem, but data can be part of the solution.  And if the downward spiral towards suicide is to be truly disrupted, we must take advantage of the true potential of predictive algorithms for mental health.

Adrian Baker

About the author

I look at emerging technologies from a social science and policy perspective. I completed my PhD on the diffusion of innovations at University College London, looking at how innovation gets implemented in healthcare organisations. My main interests are in policies that encourage innovation and the diffusion of emerging technologies, and in understanding their social implications so that everybody wins.
