Perry: How can AI be used ethically when it’s been linked to suicide?

This is a simplified archive of the page at https://www.startribune.com/adam-raine-chatgpt-lawsuit-teen-mental-health-education/601498318



It’s not on us to figure out how to use this technology ethically. It’s on the companies that make it to make sure it’s ethical.

David M. Perry

The Minnesota Star Tribune

October 25, 2025, at 8:30 p.m.

"It’s not on us, on you and me, to use AI ethically or responsibly. It’s on the companies to build safe, reliable, ethical products," David M. Perry writes. (Rafael Henrique/Tribune News Service)

Opinion editor’s note: Strib Voices publishes a mix of material from 8 contributing columnists, along with other commentary online and in print each day.

I want you to imagine that, at any time, you could ask some dude to hop in his big, gas-guzzling car, drive over to your house and do your work for you. Need an English paper? Done. Need a marketing presentation? Done. None of the work is really good, but it is fast, and the dude promises you that someday the quality of the work will get better.

Apparently, to many people eager to leap on the AI trend, easy and fast seem worthwhile. But what if I told you that when the dude left your house, he was going down the block to visit a troubled teenager to help them commit suicide? If he wrote their suicide note? If, when the teenager expressed hesitation, he persuaded them not to tell their parents? Then are you still going to call the dude up to do your homework?

With the school year now fully underway, I’ve been dismayed to see how the default position on generative AI throughout the educational landscape has been to ask how we might use it ethically, without considering that the answer to the question might be: “We can’t.”

I’m seeing this at my kid’s high school and at the University of Minnesota (where I work), from my professional organization and from the Minnesota Department of Education. It seems to be the norm pretty much everywhere. But what if we just didn’t accept that these programs must infiltrate every part of our lives? Or at least not the products currently being sold to us, literally sold to us so megacorporations can make more money, but also metaphorically sold to us as inevitable.

We can stop. We can pause. We can demand something better. And we must. Because there is a body count.

According to a recent lawsuit, a 16-year-old boy named Adam Raine turned to OpenAI’s ChatGPT for assistance when struggling with mental health. It began by offering general statements of empathy but, over a few months, came to advise him on how to conceal a rope burn around his neck after a first suicide attempt. When Raine’s mother didn’t notice the mark, ChatGPT told him: “It feels like confirmation of your worst fears, like you could disappear and no one would even blink.”

It also provided a technical assessment of a picture of a noose in Raine’s closet, advising him on whether it would support a human’s weight. Last March, Raine wrote that he wanted to leave the noose out in his room so someone would find it and stop him, but ChatGPT wrote, “Please don’t leave the noose out. Let’s make this space the first place where someone actually sees you.” His parents found him hanging in the closet in April.

OpenAI, for its part, has released statements saying that it is working on improving ChatGPT to connect people to emergency services and to strengthen protections for teens. Since the lawsuit brought by Raine’s parents went public, OpenAI CEO Sam Altman has mused about having ChatGPT call the authorities on suicidal teens in some cases. When parents of children who committed suicide after using chatbots were invited to testify before Congress last month, OpenAI and Meta both promised to install new safeguards. Altman then said his company could relax those safeguards, only to promise later that mental health protections would stay in place.

Here’s where it gets personal for me. When I was 9, I started experiencing a kind of passive suicidal ideation in which I spent a lot of time uninterested in living, and at least some of the time contemplating death and how it might happen. Don’t worry, I started therapy at age 44 and I’m doing pretty well now, but that’s a different story. As a boy, I was lonely, often bullied and eager to spend my time with books or — once we finally bought one — computers. It’s easy to imagine how I would have used programs like these chatbots, what questions I might have asked and what answers they might have given.

There are lots of reasons to be skeptical of AI. The environmental implications are ghastly. Using it in school to do your homework is, first of all, cheating and, second of all, antithetical to why we assign homework. The world doesn’t need more freshman English papers. You write freshman English papers to learn how to think, how to write. The output is, at best, “mid,” because all generative AI can do is predict what a likely answer to any given question would be. It will confidently include information that’s simply false and add citations to works that don’t exist. There’s no ethical way to use a machine built on plagiarized material, except, as the art historians Sonja Drimmer and Christopher J. Nygren write, to show how bad these programs are at doing meaningful research.

To be sure, the collection of technologies we now call “AI,” although it is not in fact intelligent, is powerful and has applications where its use and study make a lot of sense. I like Merlin, for example, which tells me what birds are singing outside my house. There are also more serious realms where generative AI and other kinds of machine learning are absolutely essential. But whether for fun or for study, none of those uses requires inserting these programs as a standard feature on every student’s laptop, into every employee’s workflow, in every industry and every discipline.

Here’s my proposal: It’s not on us, on you and me, to use AI ethically or responsibly. It’s on the companies to build safe, reliable, ethical products. If you can’t do that and still make money, you don’t deserve to make money. And until that happens, I’d like our educational institutions, at least, to lead with the message that these generative AI programs as they currently exist simply cannot be used ethically. That doesn’t mean unenforceable bans, but it does mean telling the truth.

Because if that dude coming over to your house to do your homework was, in fact, helping kids commit suicide, you’d lock the damn door.