Linguistics Professor Philip Resnik (below) is testing whether machine learning could help uncover messages and subtle cues in social media postings that could indicate a user is at risk of committing suicide, giving mental health professionals a new tool to monitor patient well-being. (Illustration by Stocksy)
Amid the poorly lit food pictures, duck-lipped selfies and political disinformation that swamp social media, imagine something else: a hope for saving lives.
It’s an easy hypothetical for Philip Resnik to grasp. The linguistics professor, who holds a joint appointment at the University of Maryland Institute for Advanced Computer Studies (UMIACS), is using machine learning techniques to determine whether suicide risk, depression or schizophrenia can be identified through analysis of a person’s social media posts—a mission partly inspired by his wife, clinical psychologist Rebecca Resnik.
American suicide rates have climbed dramatically in recent years—up 35% from 1999 to 2018, according to the National Institute of Mental Health. Resnik sees this as a chance for computer science to make a human difference.
“One of the early Facebook data science people channeled Allen Ginsberg and said, ‘The best minds of my generation are thinking about how to make people click ads,’” he said. “And then he pointed out, ‘That sucks.’”
Resnik and his team, co-led by Deanna Kelly, professor of psychiatry at the University of Maryland School of Medicine, have worked with collaborators to collect and anonymize donated social media content and symptom questionnaire responses, as well as data from Reddit. In addition, 60 people have participated in clinical data collection, in which they are evaluated by a mental health care professional and consent to having their social media activity analyzed. In total, Resnik and his team, with funding from the MPowering the State initiative and Amazon’s Machine Learning Research Awards, have gathered information from more than 4,000 individuals, and are also analyzing donated data from more than 1,600 additional individuals specifically in connection with suicide attempts.
Working with electrical and computer engineering Professor Carol Espy-Wilson and computer science Assistant Professor John Dickerson, Resnik is creating models to help identify individuals most in need of intervention by zeroing in on expressions of suicidal thoughts, signs of crisis, reports of hallucinations, paranoid thinking and other signals of mental illness in their language and speech.
“In social media, some signals of mental health problems are very nuanced, though some of it is extremely explicit,” said Resnik. “You get people saying, ‘I don’t know why I should live,’ or you get people who are actually literally saying goodbye. Simply looking at the data can be extraordinarily challenging.”
The goal of the project, still in the technology research stage, is to show that machine learning can predict who is most at risk and in need of immediate clinical attention. The data include information on which participants were shown to have the most severe mental health issues via traditional diagnostic techniques; Resnik’s ultimate goal is for the computational models to accurately predict who those individuals are so that they can receive more rapid attention from clinicians.
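The approach described above—learning which language patterns distinguish at-risk posts so that clinicians can be alerted—can be illustrated with a toy text classifier. This is a minimal sketch only: the training posts and labels below are invented for illustration, and a simple word-count naive Bayes model stands in for the far richer, clinically validated models the project actually develops.

```python
# Toy illustration of classifying posts by learned word patterns.
# Data, labels, and the model choice (naive Bayes) are assumptions
# for this sketch, not the project's actual methods.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train_naive_bayes(labeled_posts):
    """labeled_posts: list of (text, label) pairs. Returns per-class
    word counts and per-class post counts for multinomial naive Bayes."""
    counts = {}         # label -> Counter of word frequencies
    totals = Counter()  # label -> number of posts
    for text, label in labeled_posts:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def score(counts, totals, text, label):
    """Log-probability of `label` given `text`, with add-one smoothing."""
    vocab = set(w for c in counts.values() for w in c)
    n = sum(counts[label].values())
    logp = math.log(totals[label] / sum(totals.values()))
    for w in tokenize(text):
        logp += math.log((counts[label][w] + 1) / (n + len(vocab)))
    return logp

def classify(counts, totals, text):
    return max(totals, key=lambda lbl: score(counts, totals, text, lbl))

# Invented training examples (not real data)
posts = [
    ("i don't know why i should keep going", "risk"),
    ("saying goodbye to everyone tonight", "risk"),
    ("great dinner with friends tonight", "no-risk"),
    ("excited about the game this weekend", "no-risk"),
]
counts, totals = train_naive_bayes(posts)
print(classify(counts, totals, "i am saying goodbye"))  # -> risk
```

The explicit signals Resnik mentions (e.g., saying goodbye) are exactly what such a model picks up easily; the nuanced ones are what make the real research problem hard.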
Eventually, Resnik anticipates, this technology could be deployed ethically alongside therapy, medication and other treatments in a clinical setting, helping clinical professionals monitor their patients in between visits.
“The idea would be to identify a population of providers, a population of their patients, and do this for those who have opted in, in such a way that it adds to their care, adds information to improve early intervention, while carefully making sure there was no way that it could take anything away,” said Resnik.
Original news story written by Sala Levin