A Nov. 12 Newsweek story focuses on three University of Maryland faculty researchers who broadly work in engineering systems for mental health: Professor Philip Resnik (Linguistics/UMIACS), Professor Carol Espy-Wilson (ECE/ISR) and Assistant Professor Monifa Vaughn-Cooke (ME).
“Technology’s Latest Quest: Tracking Mental Health,” by Stav Ziv, explores the trio’s work in developing technology to help people keep better track of their mental health outside of formal treatment or between therapy sessions. They are building a complete set of physiological markers like heart rate and skin temperature, along with patterns based on vocal features, facial expressions and language, that could be tracked via a smartphone app or a device like a Fitbit.
Mental health metrics lag behind capabilities for measuring other types of health problems, such as high blood pressure or blood glucose levels. Only recently have researchers begun to link certain types of data that people can’t identify in themselves to mental health. These new data sources could radically improve the ability to track mental health.
Espy-Wilson started with the Mundt database, collected in a 2007 study of depression and speech patterns, and built on its findings. She focused on six patients whose weekly assessments showed the greatest variation in mental well-being. She found that when they were depressed, their speech tended to be slower and their vowels “breathier,” and that their voices’ “jitter and shimmer,” measures of cycle-to-cycle variability in the duration and amplitude of the sound, increased.
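To make the jitter and shimmer measures concrete, here is a minimal sketch of the standard “local” formulas: the mean absolute difference between consecutive glottal-cycle values, normalized by the mean value. The function names and the sample numbers are illustrative assumptions, not code or data from Espy-Wilson’s study.

```python
def local_jitter(periods):
    """Mean absolute difference between consecutive cycle periods,
    divided by the mean period (cycle-to-cycle duration variability)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Same ratio computed over cycle peak amplitudes
    (cycle-to-cycle amplitude variability)."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Made-up cycle-to-cycle measurements: periods in seconds
# (a ~100 Hz voice), amplitudes on a linear scale.
periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
amplitudes = [0.80, 0.78, 0.82, 0.79, 0.81]

print(f"jitter:  {local_jitter(periods):.4f}")
print(f"shimmer: {local_shimmer(amplitudes):.4f}")
```

A perfectly steady voice would score zero on both measures; the less regular the vocal-fold cycles, the higher the values, which is the increase the study observed in depressed speech.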
The resulting paper by Espy-Wilson and her graduate student Saurabh Sahu, “Effects of Depression on Speech,” was presented at the 168th meeting of the Acoustical Society of America (ASA), October 27–31 in Indianapolis. The ASA also issued a news story about the research.
Vaughn-Cooke is running a study with healthy participants, which she’ll later repeat with others who have been diagnosed with depression, prompting them with questions like “How was your day?” and “What was the saddest part of your day?” The responses are recorded with both video and audio. The former will be analyzed for emotion using facial recognition software, while the latter will be sifted through to identify vocal patterns as well as transcribed and analyzed as text.
Resnik has been looking for “signals in language use that help produce insight into people’s mental health status,” he told Newsweek. His goal is to connect speech or writing, whether it’s an essay or a tweet, to something that can identify problems with a person’s mental health, such as the Hamilton Depression Scale.
“Once [we start] understanding how all of these different predictors relate to each other, we can then develop algorithms to better predict when a depression patient is going into a relapse,” Vaughn-Cooke says. This will “not only improve quality of life but also reduce incidence of suicide, relapse and readmission to treatment facility.”
Ideally the app would passively collect information while the person is going about their day, perhaps occasionally asking them to record responses to questions.
“We want people to get the kind of attention they need when they need it,” says Espy-Wilson.
November 14, 2014