Chatbots as therapists. Apps that can track your mood and behaviour, offering earlier detection of psychiatric illness.
There's a lot happening at the nexus of mental health and AI – some of it promising, some of it perilous.
Today, science writer and author of The Silicon Shrink, Daniel Oberhaus, takes us on a journey through this emerging space. We'll look at the potential benefits, the major limitations and the ethics of adding AI into our mental health mix.
And just a heads up, this episode touches on the topic of suicide. Please take care while listening.
Guest:
Daniel Oberhaus
Science writer
Author, The Silicon Shrink: How Artificial Intelligence Made the World an Asylum
Credits:
- Presenter/producer: Sana Qadar
- Senior producer: James Bullen
- Producer: Rose Kerr
- Sound engineer: Isabella Tropiano
Extra info:
- Meta preys on insecure teens, whistleblower says - The Australian
- Meta whistleblower Sarah Wynn-Williams says company targeted ads at teens based on their ‘emotional state’ - Yahoo Finance
- Comments on Research and Ad Targeting - Meta
What happens when you combine AI with psychiatry? (Getty: Eoneren)
Sana Qadar: Hey, just a heads up, this episode touches on the topic of suicide. Please take care while listening.
Sana Qadar: Exactly. Where in New York are you?
Daniel Oberhaus: Well right now I'm in the Financial District but I live in Brooklyn.
Sana Qadar: This is Daniel Oberhaus. He's a former journalist at the technology magazine Wired and...
Daniel Oberhaus: I am the author of The Silicon Shrink which is a book about the promise and perils of the use of artificial intelligence in psychiatry.
Sana Qadar: You open the book with the story of your sister and I'm wondering if we can start there as well if you don't mind talking about that. Could you tell me a little bit about your sister?
Daniel Oberhaus: So my sister, in kindergarten, when she was about five years old, she had a series of traumatic events based around bullying that, you know, truly broke her. It was, you know, shocking to see as a child myself, going through this.
Sana Qadar: Daniel says this experience of intense bullying at such a young age was the beginning of his sister Paige's lifelong struggles.
Daniel Oberhaus: She received a variety of diagnoses throughout her life, you know, she struggled with various eating disorders, problems with drug abuse, but then she was also in and out of these facilities that were either inpatient or outpatient and, you know, ultimately meant to help her.
Sana Qadar: But nothing seemed to help.
Daniel Oberhaus: What was actually happening is she was being given all these diagnoses and these different drugs and she couldn't really feel any difference, you know, neither could our family, like it clearly wasn't helping. And in many cases it actually made things worse. And so when you look back on that, you can't help but wonder, did this do more harm than good?
Sana Qadar: In 2018 Paige took her own life.
Daniel Oberhaus: A few weeks after her 22nd birthday, and this was, you know, obviously a really terrible event for our family.
Sana Qadar: In the aftermath of Paige's death, Daniel was tasked with going through her digital belongings. He was working as a science and technology reporter at Wired at this time, and as he looked at what was left of Paige's data, his mind drifted towards how technology could have possibly helped his sister.
Daniel Oberhaus: When someone takes their life, they leave a lot behind them. Most of us interact with technology on a daily basis. And so there's lots of direct data in terms of what's being written down, what's being posted on social media. But then there's also a lot of metadata that can kind of give some insight into maybe how a person is feeling or, you know, how they're behaving, which is particularly important in the context of mental health. And so I started thinking about this out of, I guess, personal curiosity given my profession. And the more I started digging into it, the more I realized that not only is artificial intelligence being applied in psychiatry, I think more than most people know, but it's also based on very, very shaky foundations. So, you know, it started with a really kind of dark event in my personal life and then led down a rabbit hole that resulted in this book.
Sana Qadar: So what exactly are those shaky foundations as Daniel sees them? This is All in the Mind. I'm Sana Qadar. Today, science and technology reporter Daniel Oberhaus takes us on a journey through AI in psychiatry, looking at its potential benefits, its major limitations and the ethical debates it can spark. And we've covered chatbots and their use in therapy and mental health on All in the Mind before, most recently a few weeks ago. This episode is a skeptic's take on the trend.
Sana Qadar: So let's begin with the landscape right now. Can you describe where we're at right now with the application of AI in psychiatry and mental health more broadly?
Daniel Oberhaus: So I think what a lot of people might not realize is that the history of the use of artificial intelligence in psychiatry is decades old. It actually stretches back to the very beginning of the field of artificial intelligence. Some of the earliest experiments in AI in the 50s and 60s were with psychiatric applications in mind, and researchers were already beginning to develop what we might today consider therapeutic chatbots as early as the 1960s, and trying to connect them to early versions of the Internet. So these fields have always been deeply intertwined and their histories really kind of play off of each other in very important ways. But at a broad level, you can kind of think of this as having a few different applications. One is artificial intelligence in psychiatry for the purpose of diagnosis. A second one is for the purpose of treatment. And then a third one is for the purpose of research, so actually trying to better understand these things that we call mental disorders. And so for a long time, this was largely used in research settings. And to give you an example of this, there was a famous study in Denmark in 2010 called MONARCA. And it was one of the first kind of examples of what is now called digital phenotyping, where they were looking at a cohort of patients with diagnosed bipolar disorder. And they were trying to see if they could just look at, you know, things like their call records, their geolocation data, the number of texts they're sending and receiving. Not even necessarily looking at what they're saying, but using this as a method to actually monitor patient behavior, to see if it was possible to predict the onset of, you know, patient crises purely based on their digital exhaust.
Sana Qadar: Daniel explains that the results of this initial study in 2010 were promising, showing that the behavioral data the researchers collected from patients' smartphones correlated with the severity of symptoms. And the data could be used to differentiate when patients were in a depressive versus manic state. But in a follow-up study launched in 2014, the results were more disappointing.
Daniel Oberhaus: And so that's one example of a research paradigm that, you know, started about 15 years ago. But then now that has increasingly started to bleed over into the private sector. And so until very recently, there was a company called MindStrong, which is to this date still one of the most well-funded psychiatric AI startups ever. It was run by the former head of the National Institute of Mental Health in the United States. His name is Thomas Insel. And they were trying to do the exact same thing, but in a commercial setting. So it was something that you could get a prescription for and it would monitor your behaviors on your phone to try to lead to early intervention. There was a psychiatrist in the loop, but at the end of the day, it was about trying to identify these things very early on.
Sana Qadar: But in 2023, MindStrong ceased operations.
Daniel Oberhaus: And so there's a lot happening on both of these sides of the coin. You know, there's actually a very interesting study that came out of Australia from a researcher at CSIRO, the Commonwealth Scientific and Industrial Research Organisation, where he had developed a system that was meant to diagnose bipolar patients during the patient intake process at a clinic. And the way the system worked is patients would essentially just click on a colored square. And there was no objective to this game. They were truly just clicking one box or another based on its color. The colors were kind of randomly cycled. But from that data, just from these patients clicking on boxes, they were then able to more accurately diagnose someone with bipolar disorder, which addresses a major problem within psychiatry, because most bipolar patients are misdiagnosed for years. They're typically diagnosed with major depression. And so this AI system, which just involved a human clicking on colored boxes without really any other context, was actually able to more accurately predict a diagnosis of bipolar disorder than a human clinician.
Sana Qadar: Which is pretty incredible. But that study, which was published in 2019, seems to have been a one-off. It involved 101 people. And while the results were promising, the researchers acknowledged that more work was needed to see if that program could work in real-world settings. But when I went hunting for follow-up studies, I couldn't find any. So again, an idea with a lot of promise, but that's kind of where the story ends. For now, at least. But one area where we are seeing a lot of movement is with chatbots.
Daniel Oberhaus: AI chatbots that function as essentially cognitive behavioral therapists. So the famous example of this is Woebot. It's one of the most popular and well-used ones. But there's dozens of these tools that are currently out in the world.
Sana Qadar: Alright, well let's talk about some of the promise and perils of all of this. Starting with the promise, which you've touched on a bit already. But what is the promise of AI in psychiatry? What are some of the problems it could solve?
Daniel Oberhaus: So one of the biggest problems in psychiatry, and it's been this way basically since psychiatry became modern about 150 years ago, is accurately diagnosing patients and then being able to prescribe treatments that actually improve their outcomes. And what's challenging about mental disorders in particular is that when you open the DSM, which is a standards book that is widely used in psychiatric practice to diagnose, the way that book is constructed is each mental disorder has a collection of symptoms. And if a patient presents a certain number of those symptoms and crosses a threshold, the psychiatrist can then make a diagnosis. It's functionally very different from the rest of medicine in this regard, because you can't look at someone's brain and say they definitely have depression. So the promise of AI is that all of a sudden anyone working in psychiatry has access to behavioral data, emotional data and cognitive data just as a result of us interacting with digital devices. So the things we're typing into a computer, they can track your behavior to see, for instance, that depressed people tend to stay around the house more, they might not be leaving.
Sana Qadar: So data collection through AI can offer a lot more detail about someone's behavior. It also offers around the clock data, not just what a psychologist or psychiatrist might observe during an appointment.
Daniel Oberhaus: And the hope is that, one, this will help more clearly define what it means to have a given mental disorder. So these symptoms now become much more well-defined. And then on the second side of that, if you're now able to map patient behavior, the hope is that this can also flag people who might currently have a mental disorder or might be in the process of developing one so that they can get early intervention. For a lot of these disorders, like with people like my sister, one of the best ways to prevent suicide is through timely intervention. Time does matter. And so the hope is, you know, not only will we get a better understanding of what we're treating, we will also be able to enhance our ability to treat through earlier diagnosis. So it's a problem that psychiatry has been trying to solve forever. And, you know, it's a tale as old as time, though, when you look at the history of psychiatry: these kinds of miracle interventions that are supposed to solve everything usually tend not only not to do that, but to create a huge host of problems that have very long tail effects.
Sana Qadar: Yeah, because it sounds like a lovely promise. It's very hopeful. What is the peril of all of that?
Daniel Oberhaus: Yes. So there are several ways to answer the question of what is the peril of using artificial intelligence in psychiatry. The first one has to do with patient rights and autonomy. And what's interesting about a lot of these tools, especially, for instance, chatbots, is that unlike when you go to a human clinician, in the United States our clinicians are subject to very strict regulations about how they can collect, store and share patient data. It's a law called HIPAA. I'm sure there are very similar ones in Australia as well. And that patient data is considered very confidential. There are certain reasons why a psychiatrist might be legally obligated to inform outside third parties, such as if the patient threatened harm to someone else or themselves. But for the most part, that data is subject to very strict protections. But these AI systems are not. They are not medical devices. They are not subject to the same restrictions on patient data. And, you know, in the book, I go into several examples of how this data has been abused. I mean, there were crisis hotlines that people were calling into when they were having suicidal ideations that then flipped that data to train AI for call centers. And there are several examples of these systems getting hacked and very, very sensitive patient data getting leaked.
Sana Qadar: Another risk, Daniel argues, is that eventually people's autonomy over their health care could be violated.
Daniel Oberhaus: So, at a certain point, you can imagine that in the same way this explosion of diagnoses came from insurance providers demanding diagnoses and pharmaceutical companies demanding diagnoses, as these tools get more developed, it's not far-fetched to imagine that soon insurance providers are going to demand that these systems are running on the phones of people who are, you know, diagnosed. So now all of a sudden you have this thing that you might not have consented to, but in order to get treatment, you are now mandated to use it.
Sana Qadar: Mandated to have all of your behavior, you know, how much you use your phone, how much you're leaving your house, all of that monitored?
Daniel Oberhaus: Right. And I think one of the motives for writing this book ultimately was that it doesn't just stop with violating patients' rights to privacy and autonomy. It actually ends up applying to all of us. And the reason for this is that these AI systems, the way they're currently built, basically thrive on data intake. They feed on massive data sets. But at the same time, we also don't really know what data is important. So right now these systems are being developed in a way where all patient behavior should be monitored, because we actually don't know which data is going to correlate best. We can't answer that question. So we want to take in as much of it as possible. The challenge, though, is that if one of the true promises of these systems is being able to identify people who are not currently receiving treatment, or who are maybe developing a crisis, then in order to have that level of specificity and sensitivity, where you're finding the people who need the help and also not diagnosing the people who don't, the monitoring needs to be broadly applied. This is something that needs to run on everyone's computer, even if you don't have a mental health issue at the moment, because it's based on this idea that someday you might. And on that basis alone, you should have this thing running for your own good. And it's like, well, what if I choose not to? The problem there is that you increasingly don't have that option. So there are a lot of these tools that are now being deployed in workplaces and prisons and government offices to run on employee computers all the time. And, you know, Apple is developing digital phenotyping tools. Meta has suicide prevention algorithms running in the background. A lot of these things already exist on a lot of these platforms, whether you know it or not. And that just doesn't stop. So it basically has the ultimate effect of turning us all into patients or potential patients and using that as a justification for massive intrusions into our personal lives. So that's the peril from a really broad aperture.
Sana Qadar: Obviously, Daniel's examples have to do with what's happening in the US because that's where he's based. But given that the US usually leads in these kinds of things, it's useful knowing what's happening over there and what might come downstream over here.
Sana Qadar: You're listening to All in the Mind. I'm Sana Qadar. Daniel Oberhaus is a science and technology reporter, and he's been investigating the role of AI in psychiatry in his new book, The Silicon Shrink.
Sana Qadar: I just want to ask more about AI chatbots as well. You talked about Woebot earlier. Just from an instinctual kind of thing, I feel like the idea of getting therapy or advice from a chatbot to me feels like it just wouldn't be effective. The whole point of that exercise is to have a human interaction. But, you know, maybe I'm very old school on that. Are these kind of bots effective at all in helping people?
Daniel Oberhaus: Well, it's interesting that you say that because I think you would get along with a man named Joseph Weizenbaum, who is famous for creating a computer program in the 1960s called ELIZA. ELIZA played the role of a therapist.
ELIZA (archival dramatisation): Hello, Lindy. Please tell me your problem.
ELIZA participant (archival dramatisation): Well, Doctor, people never laugh at my jokes.
Daniel Oberhaus: You know, you could sit down at a terminal and talk to it, and it would basically spit back your answers in the form of questions.
ELIZA (archival dramatisation): When did you first realize that people might not...
Sana Qadar: This is a dramatisation of how the program worked that was aired on the ABC back in 1985.
Daniel Oberhaus: And so it was framed as a therapist by this man named Joseph Weizenbaum who built it simply as like kind of a clever hack. It was just like an easy way to get around human computer interaction. He didn't ever intend for it to be applied as a therapeutic tool. But then once people started using it, that's actually how they were interacting with it. And he, like to the end of his life, was horrified by this.
Joseph Weizenbaum (archival): I was just appalled. I was just absolutely appalled.
Sana Qadar: This is Joseph Weizenbaum from that same program in 1985, explaining his feelings about how people were interacting with ELIZA.
Joseph Weizenbaum (archival): It bothered me not so much from a computer science point of view, but it gave me some insight about how desperate loneliness is in our society, that when people can't find someone to talk to, that they have to turn to a machine and all that sort of thing.
Daniel Oberhaus: He thought it was just one of the worst things that could possibly have happened, because he saw it as incredibly dehumanizing, and that there had to be this human element within the therapeutic process in order for it to be effective. And so your perspective is, you know, I think shared by a lot of people. What's interesting about chatbots is that chatbots are kind of the original AI psychiatry application. This has been around for 60-plus years now, with people thinking about this. And what's interesting about the chatbots that are available today is most of them are based around cognitive behavioral therapy principles, which are meant to basically help patients address misaligned ways of thinking or improve behaviors that have a negative impact on their life. And so it's actually a very scripted process, even when you're doing it with a human clinician. And what's very convenient about CBT is that because it follows this more scripted nature, it's relatively easy to implement in a chatbot, because the chatbot can follow the same best-practice scripts that might be used by a practicing clinician. So that's why most of them are CBT. And I think the good news about CBT is it's actually the most effective therapy that we know of for a broad range of mental disorders. The data is very unambiguous on this point: CBT-based talk therapy works. What we don't have is the data on whether doing it with a chatbot produces the same improvement in patient outcomes. We know that talk therapy with a human does. We cannot yet say one way or the other whether it does with a robot.
Sana Qadar: And the other thing that worries science and tech journalist Daniel Oberhaus is how opaque the inner workings of these chatbots are.
Daniel Oberhaus: I think most people that I talk to today have had some exposure to ChatGPT or a similar large language model at this point and kind of understand what it is. But they, at least from my experience, might not understand really how it works. They tend to actually, if anything, expect more from it. And it kind of feels like magic, maybe there is an intelligence on the other side. But at the end of the day, it's just a statistical pattern-matching tool. And because of the way these things are built, we actually don't know that well how it gets, you know, from input to output. This is the black box phenomenon with these really large models. And so when we think about using these types of AI systems in the context of psychiatry, especially when it comes to diagnosis and treatment, if we are going to prescribe a course of treatment or a medication to a patient using one of these systems, it is terrifying to me, the idea that we cannot interrogate how that system arrived at that recommendation. And so, in order for this to be effective, we not only need to have it recommend things that work, we also need to have it be able to explain how it arrived at that answer. And to the extent that we don't have these types of explainable algorithms, it's very difficult to trust their output. At a certain point, the psychiatrist, you know, they can take it as an input to their judgment. But what we've seen in other fields, and there's been a lot of study of airline pilots, for instance, is that when you introduce automation into a field, what tends to happen is that people become extremely reliant on those automated systems to the point where they begin to lose their original skills. So with pilots, it resulted in, you know, being less able to fly the plane manually because they were so used to having these autopilots on. And in psychiatry, we run the risk of doing the same thing, where even if these things are only implemented as a recommendation tool in the clinic, there will be a habit, and we can see this in data from other fields, to begin to trust that system a little bit too much. But if we don't know how the system is arriving at its recommendations, now all of a sudden a very important part of the psychiatrist's job, that clinical judgment, has been usurped. And so if we can't actually work backwards and say, does this make sense within this patient context, and can you explain to me why this would be the best outcome, it's very dangerous, I think, to start implementing these, especially when it comes to treatment. Diagnosis is one thing, but if we are going to prescribe something to someone, whether that's a behavioral regimen or any other sort of therapy intervention, these machines really need to be able to explain how we got there and why that course of treatment should be pursued.
Sana Qadar: Yeah, I mean, you would hope that psychiatrists, if this started to be used in that way, human judgment would be the thing that overrules it. But I get your point about, you know, becoming reliant on it. How likely and close do you think that scenario is? Like how much of a hypothetical are we talking about or not?
Daniel Oberhaus: Yeah, I mean, it's a difficult question to answer, because these systems are already out in the world. You know, just the other day I was talking with a reporter who was working on a story about suicide prevention algorithms in schools. And so in the United States, you're starting to see these things be implemented on kindergarten-through-12th-grade students' computers. Many times it's without their knowledge that they have these systems running in the background that are nominally there to, you know, do early interventions. And what we've started to see is that this often just leads to a lot of unnecessary police contact. And, you know, it's not an unambiguous good to implement these. But, you know, the answer to that is always, well, maybe it'll save one person's life and that justifies everything else that comes with it. And so this is very real in the sense that these things are already out in the world. There's tons of investment money flowing into them every single year. There's voluminous research happening in this area. So it's not going away. And what we're also starting to see is that as AI tools that were never intended for therapeutic purposes are released into the world, they begin to be used that way naturally by people. And so one of the motivations for this book was to kind of raise that flag and say, hey, we should really start talking about this and taking it seriously because of how fast it is moving out into the world.
Sana Qadar: So what would be your advice then for people trying to navigate what's out there right now in terms of chatbots and therapy, whether that's using chat GPT for therapeutic purposes or some of the other chatbots that have been designed more specifically for this?
Daniel Oberhaus: Yeah, my overarching advice is to avoid it. And I say that with the caveat that, you know, everyone is in the same boat here, in the sense that the reason why people seek these tools out is because they're not finding relief elsewhere. And so everyone is entitled to, and I think should, make their own decision. But I don't think that there's very good information available to most people who are using these tools about how they actually work, the risks they're taking, whether they know it or not, in terms of the way their data is being used, and how these systems can actually mess up, that sort of thing. So my one piece of advice would be: if you're going to use them, do your homework and, you know, understand what you're getting yourself into and its limitations.
Sana Qadar: And while we're on the topic of AI, tech companies and mental health, here's a related and troubling example of what else can happen when these worlds collide. Facebook's former director of global policy is a woman named Sarah Wynn-Williams, who recently authored a book you might have heard of about her time at the company, called Careless People. She recently gave whistleblower testimony at a US Senate inquiry alleging that Meta, which is Facebook's parent company, had targeted 13 to 17 year olds with ads when they were feeling down. Their algorithms, she said, could identify when a child was, quote, feeling worthless or helpless or like a failure. We don't have audio of this testimony, so I'll just give you one more quote, because she went on to say, quote, So what the company was doing was letting these advertisers know that these 13 to 17 year olds were feeling depressed and saying, now is a really good time to serve them an advertisement. Or if a 13 year old girl would delete a selfie, that's a really good time to try and sell her a beauty product. End quote. This, surprisingly, hasn't been reported in many places, but The Australian newspaper has been covering it. And in fact, they first reported some of these details back in 2017. Meta has denied the allegations and the testimony given by Sarah Wynn-Williams, calling it divorced from reality and riddled with false claims in an online statement. The company has also taken action to stop the author from promoting her book, calling it false and defamatory. And in a 2017 statement in response to The Australian's reporting, it said Facebook does not offer tools to target people based on their emotional state. We'll link to these stories and statements on our website.
Sana Qadar: And looking back now to your sister and what happened with her, do you think AI could have been helpful in her care at all?
Daniel Oberhaus: I don't know. My suspicion would be it's a strong no, simply because these systems don't work as advertised. And I also feel regardless of whether or not it could have, you know, I honestly don't know if she would have wanted it. And so at the end of the day, I do believe very strongly in a patient's right to dignity and autonomy and privacy. And so even if it could have, it's like I still think it would be up to her whether or not she should use it.
Sana Qadar: And did writing this book help you process your sister's death at all or help you feel like you were maybe honoring her memory and helping other people?
Daniel Oberhaus: Yeah, I mean, there are going to be hundreds of thousands of people who take their own lives this year. And I feel for every single one of their families who has to deal with that; it's something that no one should ever have to experience. And so writing this book was a really interesting process in the sense that it was, I think, an act of grief, and of healing as part of that. But it was also trying to take as frank a look as I could at this issue and saying, you know, as much as I wish that Paige were still alive, I don't think that this is a good justification for running forward with this at any cost, that there's just too much on the other side. We don't know if it would have helped her. And we don't know if it will help any of the other people who are going to take their lives this year. And we should do everything in our power to try to help these people. The only reason my sister was brought into the book at all was to really underscore the gravity of all of this. We are dealing with people in severe distress who are experiencing immense amounts of pain. And nothing in this book is meant to diminish that. It's more just asking: do we know the nature of this pain? And is the thing that we're trying to use to heal them actually doing more harm? And so, yeah, it definitely did help, I think, with the healing process. But my hope is that it underscores how serious this topic that we're talking about really is, and helps other people who might be in similar situations.
Sana Qadar: That is science and technology reporter Daniel Oberhaus. And his new book is called The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. And that's it for All in the Mind this week. Thanks to producer Rose Kerr and senior producer James Bullen. Our sound engineer this week was Isabella Tropiano. I'm Sana Qadar. Thanks for listening. I'll catch you next week.