Research shows that the weeks following discharge from a psychiatric hospital are a notoriously dangerous time for vulnerable patients, with suicide rates far above average.
Now advances in artificial intelligence could be a game changer, helping psychiatrists predict suicide attempts and intervene in time.
AI and mental health
Machine learning, which uses computer algorithms to better predict human behavior, is a rapidly growing field in mental health. Its growth goes hand in hand with that of biosensors capable of monitoring a person's mood in real time by tracking music choices, facial expressions, vocal tone and even posts on social media.
Matthew K. Nock, a Harvard psychologist and a leading researcher on suicide, hopes to combine these technologies into a kind of early warning system to be deployed when an at-risk patient is discharged from a psychiatric hospitalization.
How the early-warning system might work
A sensor signals that a patient's sleep is disturbed; GPS data show that he is not leaving the house; the accelerometer on his phone registers constant movement, suggesting agitation; and he reports a low mood on the routine questionnaires.
At that point the algorithm flags the patient, and a clinician reaches out with a phone call or a message.
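To make the scenario concrete, here is a minimal sketch of what such a flagging rule could look like. The signal names and thresholds are invented for illustration; this is not Nock's actual model, which has not been published in this form.

```python
# A toy early-warning rule combining the signals described above.
# All thresholds are hypothetical, chosen only for illustration.
from dataclasses import dataclass

@dataclass
class DailySignals:
    sleep_hours: float       # from a wearable sleep sensor
    meters_from_home: float  # from the phone's GPS
    movement_index: float    # from the accelerometer, 0 (still) to 1 (agitated)
    mood_score: int          # self-reported questionnaire, 0 (worst) to 10 (best)

def should_flag(s: DailySignals) -> bool:
    """Flag the patient for clinician follow-up when several
    independent risk signals occur on the same day."""
    risk_signals = [
        s.sleep_hours < 4.0,        # disturbed sleep
        s.meters_from_home < 50.0,  # not leaving the house
        s.movement_index > 0.8,     # agitation
        s.mood_score <= 3,          # low reported mood
    ]
    return sum(risk_signals) >= 3   # require converging evidence

if should_flag(DailySignals(3.2, 10.0, 0.9, 2)):
    print("Alert a clinician: call or message the patient.")
```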
Can it work? It is hard to say. There are several reasons to doubt that an algorithm will ever achieve that level of accuracy. Even among people at the highest risk, suicide is so rare that any attempt at prediction will produce false positives, triggering interventions for people who do not need them. False negatives, meanwhile, carry the ethical and legal weight of a missed intervention.
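A back-of-the-envelope calculation shows why false alarms dominate. Assume, purely for illustration, a monitored group in which 1 patient in 100 will attempt suicide, and a predictor with 90% sensitivity and 90% specificity, better than anything reported so far:

```python
prevalence = 0.01    # assumed: 1% of the monitored group will attempt suicide
sensitivity = 0.90   # assumed share of true attempts the model catches
specificity = 0.90   # assumed share of non-attempts correctly left alone

true_alarms = prevalence * sensitivity               # 0.009
false_alarms = (1 - prevalence) * (1 - specificity)  # 0.099
precision = true_alarms / (true_alarms + false_alarms)

print(f"Share of flagged patients who truly need help: {precision:.0%}")
# -> 8%: roughly eleven false alarms for every genuine one.
```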
Suicide prevention takes courage
Perfecting these models requires long-term, granular data from large numbers of people, and such data is hard to obtain, because suicide, terrible as it is, remains mercifully rare. The monitoring itself also raises enormous privacy concerns, concerning the lives of people who are already vulnerable.
Yet Nock doesn't give up. Something must be done, and courage is needed. “With all due respect to those who have been doing this work for decades, in a century we have learned nothing about how to identify people at risk of suicide and how to intervene,” he says.
“The suicide rate today is the same as it was 100 years ago. So, to be honest, we are not improving.”
Matthew K. Nock
A difficult task
There is nothing more unnerving for a psychiatrist than caring for patients at risk of suicide while they are unsupervised and at home. It is a very complicated “grey area”. Experience only makes it clearer that suicidal thoughts can come and go without warning.
That is why more and more advanced healthcare systems are turning to machine learning to get ahead of the problem. Algorithms trained on large data sets drawn from electronic health records and many other factors assign each patient a risk score, so that clinicians can pay closer attention to individuals at exceptionally high risk.
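In outline, such a risk-scoring pipeline might look like the sketch below: train a classifier on historical records, score every patient, and send the extreme tail for human review. The features and data here are synthetic assumptions, not any health system's real inputs.

```python
# Minimal sketch of risk scoring from record-style features,
# on synthetic data; not any health system's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy features, e.g. prior attempts, admissions, medication changes, age
X = rng.normal(size=(10_000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000) > 2.0).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]   # per-patient risk score

cutoff = np.quantile(risk, 0.999)     # top 0.1%, as in the VA program below
flagged = np.flatnonzero(risk >= cutoff)
print(f"Flagged {flagged.size} of {X.shape[0]} patients for clinical review.")
```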
These algorithms also appear to be more accurate than the "traditional" methods, which have remained essentially unchanged for 50 years, according to research published in 2017. Since 2017, the U.S. Department of Veterans Affairs has used an algorithm to flag the 0.1% of veterans at highest risk of suicide.
That works out to a few thousand patients, roughly 6,000, out of a population of six million.
The result? Patients enrolled in the program were 5% less likely to attempt suicide: a drop in the bucket that does not move overall rates. The impression is that we have started doing things that had never been done before, but have not yet found what we are looking for.
Looking for a pattern
"It's not easy to spot them," he says Nick Allen, director of the Center for Digital Mental Health at the University of Oregon. Allen helped develop EAR extension, an app that tracks mood based on factors like music choice, facial expression, and verbal tone.
A turning point could come from there. Which brings us back to Dr. Nock.
Last August, a data scientist named Adam Bear sat in front of a monitor in Dr. Nock's lab, staring at zigzag graphs of a patient's stress levels over the course of a week. He was looking for a pattern: something that repeats and could identify, in advance, someone about to attempt suicide.
Bear spent the whole summer examining data from 571 patients who, after experiencing suicidal thoughts or attempting suicide, agreed to be monitored. During the six-month program, two of them took their own lives and 100 tried to.
It amounts to the largest reservoir of information ever collected on the daily lives of people struggling with suicidal thoughts.
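One plausible way to mine a dataset like this for Bear's kind of pattern, sketched here with assumed column and index names, is to pull out the days immediately preceding each recorded attempt and compare them with baseline days:

```python
# Hypothetical sketch: extract the N days preceding each attempt so that
# signals there (sleep, stress, resistance to impulses) can be compared
# with ordinary days. Names and structure are assumptions for illustration.
import pandas as pd

def pre_event_days(daily: pd.DataFrame, attempts: pd.Series,
                   n_days: int = 3) -> pd.DataFrame:
    """daily: one row per patient-day, MultiIndex (patient_id, date).
    attempts: attempt date per patient_id. Returns the pre-attempt rows."""
    chunks = []
    for patient_id, event_date in attempts.items():
        start = event_date - pd.Timedelta(days=n_days)
        rows = daily.loc[patient_id]  # that patient's daily records
        chunks.append(rows[(rows.index >= start) & (rows.index < event_date)])
    return pd.concat(chunks)

# e.g. did sleep collapse just before attempts?
# pre_event_days(daily, attempts)["sleep_hours"].mean()
# versus daily["sleep_hours"].mean()
```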
What are the signs of a possible suicide
Nock's team is most interested in the days leading up to suicide attempts, since that is when action could be taken. Some signs have already emerged: although suicidal impulses often do not change in the period before an attempt, the ability to resist those impulses appears to decrease. One recurring clue, sleep deprivation, seems to contribute a great deal.
It is a small but important signal, and perhaps reason to persist. Until now, we have never been able to observe people with suicidal thoughts this closely; it is not like observing, say, patients with heart disease.
"Psychology," says Nock, "hasn't progressed as far as the other sciences because it didn't have many tools at its disposal."
With the advent of smartphones and wearable biosensors, however, it has become possible to collect enough data to make up for that disadvantage.
A faceless angel
One dilemma the researchers faced in the ongoing study was what to do when participants showed a strong desire to harm themselves. Dr. Nock decided they should intervene, even if intervening slowed the research (and, for some patients, cut it short).
“There is a downside to this,” says the scientist, “because paradoxically it results in fewer suicides and therefore less chance of finding a signal. But what if that patient were my son?”
As a result, the study has also become a suicide-prevention task force, and interventions are now a routine part of life in the laboratory. When monitoring catches a patient at a critical moment, one of the investigators calls within 15 minutes.
“We're kind of a faceless person, so there's less discomfort,” says Narise Ramlal, a research assistant in the lab. But Dr. Nock wants to understand whether digital interventions could prove even more effective.
“Many people don't want a human being to contact them when they're at high risk,” he said. “I don't want to say we're going to replace humans with machines, but sometimes they can be much more efficient than we are now.”
An emblematic story
In March, 39-year-old Katelin Cruz left her latest psychiatric hospitalization with determination, but also with fragility and fear; that is why she agreed to take part in the monitoring study. She had been studying for a nursing degree when a series of mental health crises upended her life.
It was around 9 p.m., a few weeks into the six-month study, when a question popped up on her phone: “Right now, how strong is your desire to kill yourself?”
Without stopping to think, Katelin touched the screen: 10. A few seconds later, she was asked to choose between two statements: "Today I will definitely not kill myself" and "Today I will certainly kill myself". She chose the second.
Fifteen minutes later, her phone rang. It was a member of the research team, who had already alerted the police and kept Katelin on the line until they knocked on her door. Soon after, she broke down emotionally and passed out.
Whether or not it solves the problem, this technology can already do one thing: be there.
Katelin began responding promptly to the six “prompts” she received each day, whenever the app on her phone asked about her suicidal thoughts. The notifications felt intrusive at first, then comforting.
“I no longer felt ignored,” she says. “Having someone who knows how I feel takes some of the weight off of me.” She thinks that technology (its “neutrality”, its “lack of judgment”) makes it easier to ask for help. “I think it's almost easier to tell the truth to a computer,” she said.
Many experts think it could be the other way around: patients in crisis are adept at deceiving clinicians about their state of mind, and lying to a computer may be even easier. Better, they argue, to rely on people: a support group that meets every week, a circle of chairs, a network of friends, family and doctors.
Not everyone has that. In Italy, the waiting list for a therapist at a public mental health center can run to three months. In the US it can be eight.
And tell me you don't want to die
Last week, at the end of the six-month clinical trial, Katelin filled out her final questionnaire with some sadness. She will miss the feeling that someone is watching over her, even someone faceless, at a distance, through a device.
There are times, rare ones, when a "Big Brother" can make you feel better. Who would have thought?
“Honestly, it made me feel a little safer knowing that someone cared enough to read that data every day,” Katelin says.
And I believe her.