ChatGPT used by mental health tech app in AI experiment with users

When people log into Koko, an online emotional support chat service based in San Francisco, they expect to exchange messages with an anonymous volunteer. They can seek relationship advice, discuss their depression, or find support for just about anything else — a kind of free digital shoulder to lean on.

But for a few thousand people, the mental health support they received wasn’t entirely human. Instead, it was augmented by a bot.

In October, Koko ran an experiment in which GPT-3, a newly popular AI chatbot, wrote responses either in whole or in part. Humans could edit the responses and still hit the buttons to send them, but they weren’t always the authors.

Robert Morris, Koko’s co-founder, said about 4,000 people received responses from Koko that were written at least in part by AI.

The experiment on the small and little-known platform has sparked intense controversy since he revealed it a week ago, in what could be a preview of more ethical controversies to come as AI technology makes its way into more consumer products and health services.

Morris said in an interview with NBC News that it seemed like a worthwhile idea to try because GPT-3 is often fast and eloquent.

“People who saw the GPT-3-written responses rated them much higher than the purely human-typed ones. That was a cool observation,” he said.

Morris said he had no official data to share about the test.

Once people learned the messages were co-created by a machine, though, the benefits of the improved writing faded. “Simulating empathy seems weird and empty,” Morris wrote on Twitter.

When he shared the results of the experiment on Twitter on Jan. 6, he was inundated with criticism. Academics, journalists and fellow technologists accused him of acting unethically and tricking people into becoming test subjects without their knowledge or consent while they were in the vulnerable position of needing mental health support. His Twitter thread has drawn more than 8 million views.

The people who sent the AI-aided messages knew, of course, whether they had written or edited them. But recipients saw only a notification that said: “Someone replied to your post! (Written in collaboration with Koko Bot)” without further details on what “Koko Bot” was.

In a demonstration that Morris posted online, GPT-3 responded to a person who spoke about the difficulty of becoming a better person. The chatbot said, “I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it on our own. But you’re not alone.”

Morris said no option was provided to opt out of the trial other than not reading the response at all. “If you receive a message, you can choose to skip it and not read it,” he said.

Leslie Wolf, a Georgia State University law professor who writes about and studies research ethics, said she was concerned about how little Koko told people who were getting AI-enhanced answers.

“This is an organization trying to provide much-needed support in a mental health crisis where we don’t have enough resources to meet the needs, and yet when we manipulate people who are at risk, it’s not going to go well,” she said. People in mental pain could be made to feel worse, she said, especially if the AI produces biased or careless text that goes unreviewed.

Now Koko is on the defensive about its decision, and the wider tech industry is once again facing questions over the casual way it sometimes turns unsuspecting people into lab rats, especially as more tech companies move into health-related services.

Congress mandated oversight of some human subjects research in 1974 after revelations of harmful experiments, including the Tuskegee syphilis study, in which government researchers withheld treatment from hundreds of Black Americans with syphilis, who went untreated and sometimes died. As a result, universities and others that receive federal support must follow strict rules when they conduct experiments on human subjects, a process enforced by what are known as institutional review boards, or IRBs.

But, in general, there are no such legal obligations for private companies or nonprofit groups that do not receive federal support and are not seeking approval from the Food and Drug Administration.

Morris said Koko has not received federal funding.

Even if an entity isn’t required to undergo IRB review, it ought to in order to reduce risks, Alex John London, director of the Center for Ethics and Policy at Carnegie Mellon University and the author of a book on research ethics, said in an email. He said he would want to know what steps Koko took to ensure that research participants were “not the most vulnerable users in acute psychological crisis.”

Morris said that “higher-risk users are always directed to crisis lines and other resources” and that “Koko closely monitored the responses when the feature was active.”

There are infamous examples of tech companies exploiting the oversight vacuum. In 2014, Facebook revealed that it had run a psychological experiment on more than 689,000 people, showing it could spread negative or positive emotions like a contagion by altering the content of people’s news feeds. Facebook, now known as Meta, apologized and overhauled its internal review process, but it also said people should have been aware of the possibility of such experiments from reading Facebook’s terms of service, a position that baffled people outside the company because few users actually understand the agreements they make with platforms like Facebook.

But even after the firestorm over the Facebook study, there was no change in federal law or policy to make oversight of experiments on human subjects universal.

Koko is not Facebook, with its enormous profits and user base. Koko is a nonprofit platform and a passion project for Morris, a former Airbnb data scientist with a doctorate from MIT. It’s a peer-to-peer support service, not a would-be replacement for professional therapists, and it’s available only through other platforms such as Discord and Tumblr, not as a standalone app.

Morris said Koko had about 10,000 volunteers in the past month and that about 1,000 people a day get help from it.

“The broader point of my work is figuring out how to help people in emotional distress online,” he said. “There are millions of people on the Internet who are struggling to get help.”

There is a nationwide shortage of professionals trained to provide mental health support, even as symptoms of anxiety and depression surged during the coronavirus pandemic.

“We get people in a safe environment to write short letters of hope to each other,” Morris said.

However, critics focused on the question of whether the participants gave informed consent to the experiment.

Camille Nebeker, a University of California, San Diego professor who specializes in human research ethics applied to emerging technologies, said Koko created unnecessary risks for people seeking help. She said informed consent by a research participant includes, at a minimum, a description of the potential risks and benefits written in clear, simple language.

“Informed consent is very important for conventional research,” she said. “It’s the cornerstone of ethical practices, but when you don’t have a requirement to do so, the public could be put at risk.”

She noted that AI has also alarmed people with its potential for bias. Although chatbots have proliferated in fields such as customer service, they are still a relatively new technology. This month, New York City schools banned ChatGPT, a bot built on GPT-3 technology, from school devices and networks.

“We’re in the Wild West,” Nebeker said. “It’s very dangerous not to have some standards and agreement about the rules of the road.”

The Food and Drug Administration regulates some mobile medical apps that it says meet the definition of a “medical device,” such as one that helps people trying to break opioid addiction. But not all apps meet that definition, and the agency issued guidance in September to help companies tell the difference. In a statement provided to NBC News, an FDA representative said that some apps providing digital therapy may be considered medical devices, but that, per FDA policy, the agency does not comment on specific companies.

In the absence of formal oversight, other organizations are grappling with how to apply AI in health-related fields. Google, which has struggled with its handling of AI ethics questions, held a “health bioethics summit” in October with the Hastings Center, a nonprofit bioethics research institute and think tank. In June, the World Health Organization included informed consent in its guidance for the design and use of AI.

Koko has an advisory board of mental health experts to weigh in on the company’s practices, but Morris said there is no formal process for the board to approve proposed experiments.

It wouldn’t be practical for the board to run a review every time Koko’s product team wants to roll out a new feature or test an idea, said Stephen Schueller, an advisory board member and a professor of psychological science at the University of California, Irvine. He declined to say whether Koko made a mistake, but said it showed the need for a public conversation about private-sector research.

“We really need to think about, as new technologies come online, how do we use them responsibly?” he said.

Morris said he never thought an AI chatbot would solve the mental health crisis, and said he didn’t like how being a Koko supporter turned into an “assembly line” of approving pre-written answers.

But he said prewritten, copy-and-pasted answers have long been a feature of online help services, and that organizations need to keep experimenting with new ways to care for more people. He said requiring university-level review of such experiments would halt that research.

“Artificial intelligence is not the perfect or the only solution. It lacks empathy and originality,” he said. But, he added, “we can’t just have a position where any use of AI requires final scrutiny by an IRB.”

If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.
