Hey Siri, I’m feeling down: could smart speakers serve as on-campus mental health counselors?
As student depression and anxiety reach an all-time high, colleges and universities are looking for new ways to help students cope, and some are turning to technology, raising serious legal, ethical, and social concerns.
College students are known to be an overly stressed bunch to begin with, but it seems that 2020 has magnified that anxiety.
Millennials, those born between 1981 and 1996, were once thought to be the most stressed-out generation yet, but it turns out that the current generation of college students, those born between 1997 and 2012 and better known as Generation Z, is even more anxious than its predecessors. This anxiousness is palpable worldwide.
Some have correlated this growing stress and anxiety on campus with cancel culture, which, among other things, can lead students to believe that when they encounter offensive ideas on campus, they are made weaker by that exposure.
Even before the worst year of the past few decades, colleges were overwhelmed with students seeking mental health support. Universities have generally been slow to respond to this rising need, with many institutions taking weeks to provide a consultation for a burned-out student. Making things worse, a recent survey found that the pandemic has made it even harder to access mental health care on campus.
Universities should be interested in their students’ mental well-being not only because they ought to care about the welfare of their learners, but also because their continued existence depends on it. The most recent OECD data suggests that anywhere from one-third to two-thirds of enrolled students will eventually drop out of college before completing their degree, and mental health issues reportedly contribute substantially to the overall dropout rate at institutions of higher learning.
Both colleges and parents fear that at-risk students could fall through the cracks and not be treated in time, and this is a growing concern: university students are increasingly engaging in dangerous behaviors, such as non-suicidal self-injury (NSSI)—defined as the deliberate destruction of one’s own body tissue, such as self-cutting and burning—binge drinking, drug use, and even suicide.
Although some think that student self-reporting of these activities has led to exaggerated prevalence rates, any intervention during these formative young-adult years is considered fundamental to long-term mental health, so it is better to be safe than sorry.
Given all of these concerns, many researchers are looking into technological solutions for this growing problem.
Some have even suggested using artificial intelligence, such as the systems that power popular smart speakers. These solutions are somewhat ironic, as many see the prevalence of technology and social media as one of the main contributors to current student mental health concerns on campus. Additionally, in many cases, mental health professionals have found it more difficult to get individuals to engage with technological tools than with conventional ones.
But what if, instead of using smart speakers to engage students, universities used smart speakers to listen to students and detect signs of mental distress? Clearly, the use of smart speakers to spy on students is fraught with ethical, legal, and social concerns. While universities might be seen as technologically progressive for providing smart speaker technology on campuses and in dorms, they might also be seen as overly paternalistic if they used those speakers to listen in on student conversations.
Smart speakers listening to you and me are nothing new and should come as a surprise to no one: like everything else in the modern digital age, you (and your information) are the product that technology companies are selling. They are constantly looking for ways to acquire more information about you.
The most common smart speakers listen in on our conversations, record them, and even play them to human operators who review them. Even when you delete your recorded conversations locally, it is possible that some corporations hold onto them in perpetuity.
As far as the implications of these practices go, in one infamous case, a murder occurred in Arkansas while the defendant was streaming music on his smart speaker. Amazon initially refused to provide any recordings from the altercation, claiming that constitutional free speech rights protect them. Two years after the incident, Amazon eventually dropped this claim after the defendant agreed to hand over the data to the police.
Amazon’s decision, in this case, might imply that should the user agree to release private conversations recorded by Echo Smart Speaker devices, Amazon would comply with a request to hand them over, at least in some instances. Maybe even to its university partners.
There are significant hurdles to implementing an on-campus mental health system involving smart speakers. The collection of private personal information, either locally or by trained third parties, could result in serious privacy concerns as well as substantial regulatory oversight. Additionally, in the U.S., as in other jurisdictions, there are laws designed specifically to protect the privacy of student educational records.
In the U.S., the relevant federal law is the Family Educational Rights and Privacy Act (FERPA), and guidance documents for on-campus health professionals regarding the use of mental health and other data were recently released by the Department of Health and Human Services. Institutions also have to consider the possibility of lawsuits resulting from failing to act upon information to prevent harmful activities, including suicide. There is already a trend toward suing institutions of higher learning for failing in their duty of care to prevent such acts by students, and if these institutions held recorded evidence that could have helped but was, in hindsight, overlooked, it could expose them even further.
Even if a university were willing to bear the costly compliance burden and additional risk of lawsuits, students, or possibly their parents, would have to consent to this intrusion and the potential collection of very personal data. Moreover, signage might even have to be placed in each relevant room to remind visitors that they might be recorded. Such signs may also raise concerns about stigmas associated with mental health, as they may suggest that a student is somehow at risk simply because the smart speaker has been cleared to record in their room.
Alternatively, ongoing stigmas related to mental health might actually incentivize the use of smart speakers: these stigmas, although diminishing, are thought to keep many students from actively approaching advisors on campus, making passive alternatives for getting help crucial.
If social, ethical, and legal concerns such as stigma, privacy, and consent can be overcome, it is conceivable that trained mental health practitioners could devise lists of terms and phrases suggesting that a student is depressed, involved in risky and destructive behaviors, or even suicidal. These smart speakers, triggered by predefined wake words, could record snippets of relevant conversations and pass them on anonymously to on- or off-campus mental health professionals. Only if the recorded conversation turned out to be relevant and actionable would the data be deanonymized.
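To make the idea concrete, the flow just described could be sketched as follows. This is purely an illustration, not any vendor's actual system: the trigger terms, identifiers, and function names are all hypothetical, and a real deployment would involve clinical review, encryption, and consent far beyond this toy example.

```python
import uuid

# Hypothetical trigger terms that clinicians might curate (illustrative only).
TRIGGER_TERMS = {"hopeless", "can't go on", "hurting myself"}

# Maps anonymous case IDs back to student IDs; in the scheme described above,
# this registry would be held separately from the reviewing professionals.
_case_registry: dict[str, str] = {}

def flag_snippet(student_id: str, transcript: str):
    """Return an anonymized case record if the snippet contains a trigger term."""
    text = transcript.lower()
    if not any(term in text for term in TRIGGER_TERMS):
        return None  # nothing concerning detected; the snippet is discarded
    case_id = uuid.uuid4().hex  # random ID that reveals nothing about the student
    _case_registry[case_id] = student_id
    return {"case_id": case_id, "snippet": transcript}

def deanonymize(case_id: str, reviewer_marked_actionable: bool):
    """Reveal the student's identity only once a clinician deems the case actionable."""
    if not reviewer_marked_actionable:
        return None
    return _case_registry.get(case_id)
```

Under this sketch, a reviewer sees only the anonymous case ID and the snippet; the link back to a specific student is released only after the reviewer marks the case actionable, mirroring the deanonymization step described above.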
It is even possible that such terms need not be employed by the academic institutions themselves. There are already a number of companies, including Amazon, that are working on developing technology to assess anxiety and depression from voice signatures. A mental health analysis could even be conducted via standard recorded smart speaker conversations relating to prosaic issues such as weather, music, and cooking.
We could even imagine a near future in which students actually discuss their mental health concerns directly with the smart speaker, within the privacy of their dorm room. Recent studies have shown that many within the younger generations are more comfortable talking to a smart speaker than to a human therapist.
Dov Greenbaum is a director at the Zvi Meitar Institute for Legal Implications of Emerging Technologies, at Israeli academic institute IDC Herzliya.