The numbers speak for themselves: suicide is the twelfth leading cause of death in the world (and the second leading cause among 15-29 year olds). Every year, about one million people die by suicide: in practice, one person every 40 seconds. Technology now offers new hope: a team of researchers has developed an artificial intelligence system that promises to change the way we approach suicide prevention.
The VSAIL system: a digital ally for prevention
The Vanderbilt Suicide Attempt and Ideation Likelihood (VSAIL) model, an artificial intelligence tool developed at Vanderbilt University Medical Center, analyzes information from electronic health records to estimate the risk that a patient will attempt suicide within the next 30 days.
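The study does not publish VSAIL's internal details, so purely as an illustration, here is a minimal sketch of how an EHR-driven 30-day risk score might be turned into a screening flag. Every feature name, weight, and threshold below is hypothetical and has nothing to do with the real model; the point is only the general shape: coded record data in, a probability out, a flag when the probability crosses a cut-off.

```python
import math

# Hypothetical illustration only: VSAIL's actual features, weights, and
# threshold are not public. The sketch combines binary EHR flags into a
# probability via a logistic link, then flags the visit for screening.

HYPOTHETICAL_WEIGHTS = {
    "prior_suicide_attempt": 2.1,
    "recent_ed_visit": 0.8,
    "depression_diagnosis": 0.9,
    "age_under_30": 0.4,
}
INTERCEPT = -5.0          # made-up baseline log-odds
SCREEN_THRESHOLD = 0.05   # made-up cut-off, chosen so only a small share of visits is flagged


def risk_of_attempt_30d(ehr_features: dict) -> float:
    """Return an illustrative 30-day risk probability from binary EHR flags."""
    log_odds = INTERCEPT + sum(
        weight * ehr_features.get(name, 0)
        for name, weight in HYPOTHETICAL_WEIGHTS.items()
    )
    return 1.0 / (1.0 + math.exp(-log_odds))


def should_flag_for_screening(ehr_features: dict) -> bool:
    """Flag the visit when the estimated risk crosses the screening threshold."""
    return risk_of_attempt_30d(ehr_features) >= SCREEN_THRESHOLD


if __name__ == "__main__":
    visit = {"prior_suicide_attempt": 1, "depression_diagnosis": 1}
    print(round(risk_of_attempt_30d(visit), 3), should_flag_for_screening(visit))
```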
Preliminary tests have shown promising results: one in 23 patients flagged by the system went on to report suicidal thoughts. It is an early figure that will need to be corroborated by further studies, but it already points to a real link between the model's predictions and what happens to patients.
The importance of real-time alerts
The study, published in JAMA Network Open (linked here), compared two approaches: automatic pop-up alerts that interrupted the physician's workflow, and a more passive system that simply displayed the risk information in the patient's record.
The results were striking: the interruptive alerts led to risk assessments in 42% of cases, compared with just 4% for the passive system. That gap deserves attention in its own right.
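To make the difference between the two modes concrete, here is a hypothetical sketch of how a flagged visit might be routed to one alert style or the other. The real EHR integration used in the trial is not described in code form in the study; class and function names below are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AlertMode(Enum):
    INTERRUPTIVE = auto()  # pop-up that blocks the workflow until acknowledged
    PASSIVE = auto()       # risk note shown in the chart, no interruption


@dataclass
class Visit:
    patient_id: str
    flagged_high_risk: bool


def deliver_alert(visit: Visit, mode: AlertMode) -> str:
    """Illustrative only: route a flagged visit to one of the two alert styles
    compared in the trial. The actual clinical software is not shown here."""
    if not visit.flagged_high_risk:
        return "no alert"
    if mode is AlertMode.INTERRUPTIVE:
        # Interruptive: the clinician must acknowledge the pop-up before
        # continuing, which is why screening followed far more often (42%).
        return f"POP-UP for {visit.patient_id}: recommend suicide risk screening now"
    # Passive: the note sits in the record and can easily go unnoticed (4%).
    return f"chart note for {visit.patient_id}: elevated 30-day risk, consider screening"


if __name__ == "__main__":
    v = Visit("patient-001", flagged_high_risk=True)
    print(deliver_alert(v, AlertMode.INTERRUPTIVE))
    print(deliver_alert(v, AlertMode.PASSIVE))
```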
A targeted and efficient approach
“Most people who die from suicide saw a health care provider in the year before the death, often for reasons unrelated to mental health,” explains Colin Walsh, a member of the study team. “But universal screening isn’t practical in every setting. We developed VSAIL to help identify high-risk patients and prompt targeted conversations.”
The system flagged only 8% of all patient visits for screening, making implementation more manageable for already overworked clinics.
Suicide prevention: the results of the pilot study
The research covered 7,732 patient visits over six months, generating a total of 596 screening alerts (roughly the 8% of visits mentioned above). During the 30-day follow-up period, no patients in the randomized groups experienced suicidal ideation or attempted suicide, according to VUMC records.
One critical issue that emerged from the study was the potential for “alert fatigue”: physicians becoming overwhelmed by frequent automated notifications. Interruptive alerts, while more effective, could contribute to this phenomenon, and the researchers suggest that future studies explore the trade-off further.
Future prospects for suicide prevention
“Healthcare systems must balance the effectiveness of interruptive alerts against their potential downsides,” says Walsh. “But these findings suggest that automated risk detection combined with well-designed alerts could help us identify more patients in need of suicide prevention services.”
Technology thus becomes a valuable ally in the fight against suicide, but it clearly does not replace the human element: it remains a tool in the hands of trained professionals, who can use it to save more lives.
The future of suicide prevention lies precisely here: in the union of human expertise and computational power.