ChatGPT’s creator rolls out an “imperfect” tool to help educators spot potential cheating
Two months after OpenAI released the public version of ChatGPT, an AI-powered chatbot that can help students and professionals produce shockingly persuasive essays, the company is unveiling a new tool to help concerned educators adapt.
On Tuesday, OpenAI announced a new feature called the “AI Text Classifier,” which lets users check whether a piece of writing was produced by a human or by AI. But even OpenAI admits the tool is “imperfect.”
The tool, which works on English-language text, is powered by a machine learning model that takes text as input and assigns it to one of several categories. After a block of text such as a school essay is pasted into the tool, it returns one of five possible results, ranging from “likely” AI-generated to “very unlikely.”
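The five-result mechanism described above can be sketched as a simple thresholding step on a model's output probability. Everything in the sketch below (the function name, the threshold values, and the exact label wording) is an illustrative assumption, not OpenAI's actual implementation.

```python
# Illustrative sketch only: buckets a hypothetical model probability
# (the estimated chance that text is AI-generated) into five verdicts.
# Thresholds and labels are assumptions, not OpenAI's real values.

def classify_verdict(p_ai: float) -> str:
    """Map a probability in [0, 1] to one of five human-readable verdicts."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    elif p_ai < 0.45:
        return "unlikely AI-generated"
    elif p_ai < 0.65:
        return "unclear if AI-generated"
    elif p_ai < 0.90:
        return "possibly AI-generated"
    else:
        return "likely AI-generated"

print(classify_verdict(0.05))  # very unlikely AI-generated
print(classify_verdict(0.97))  # likely AI-generated
```

The point of the bucketing step is user-facing honesty: rather than reporting a raw score that invites false precision, the tool surfaces only a coarse, hedged verdict.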
Lama Ahmad, a policy researcher at OpenAI, told CNN that educators have been asking for a feature like this one, but warned that its results should be treated with caution.
“We don’t really recommend taking this tool in isolation, because we know it can be wrong and it will be wrong sometimes, like using AI for any kind of evaluation purpose,” Ahmad said. “We stress how important it is to keep people in the loop… and that this is just one data point among many others.”
Ahmad noted that some teachers have long pointed to a student’s past work and writing style to gauge whether the student wrote an assignment. The new tool may provide another point of reference, but, Ahmad said, “teachers need to be really careful about how they include it in academic dishonesty decisions.”
Since it became available in late November, ChatGPT has been used to create original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. It even recently passed graded exams in four law courses at the University of Minnesota, another exam at the University of Pennsylvania’s Wharton School of Business, and a US medical licensing exam.
In the process, ChatGPT has raised alarm among some educators. Public schools in New York City and Seattle have already banned students and teachers from using it on district networks and devices. Some teachers are now moving with remarkable speed to rethink their assignments in response, even as it remains unclear how widely the tool is being used among students and how harmful it could really be to learning.
OpenAI now joins a small but growing list of efforts to help educators detect when written work has been generated by ChatGPT. Companies such as Turnitin are actively building ChatGPT plagiarism-detection tools that could help teachers identify when assignments were written by the chatbot. Meanwhile, Princeton student Edward Tian told CNN that more than 95,000 people have already tried the beta version of his ChatGPT detection tool, GPTZero, noting that there has been “incredible demand” from educators so far.
Jan Leike, head of OpenAI’s alignment team, which works to ensure the company’s AI tools behave in line with human values, listed several reasons why detecting ChatGPT-generated text is a challenge. People can edit text to avoid being flagged by the tool, for example. The classifier will also be “best at identifying text that is very similar to the type of text we trained it on,” he said.
In addition, the company said in a blog post that it is impossible to determine whether highly predictable text, such as a list of the first 1,000 prime numbers, was written by AI or by a human, because the correct answer is always the same. The classifier is also “very unreliable” on short texts of under 1,000 characters.
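The prime-number example above can be made concrete: the correct list is fully determined, so a human’s answer and an AI’s answer are character-for-character identical, leaving a classifier no stylistic signal to work with. A minimal sketch (the helper function below is hypothetical, not from OpenAI):

```python
# Why "predictable" text defeats detection: the first N primes form the
# same sequence no matter who (or what) wrote them out, so there is no
# authorial signal for a classifier to pick up on.

def first_primes(n: int) -> list[int]:
    """Return the first n prime numbers by trial division."""
    primes: list[int] = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no earlier prime divides it evenly
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# A human's list and an AI's list of the first 10 primes are identical:
print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Any text whose content is this tightly constrained, such as boilerplate, formulas, or rote lists, poses the same problem for the classifier.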
During a demonstration for CNN ahead of Tuesday’s launch, the classifier successfully categorized several samples of text. It rated an excerpt from Peter Pan, for example, as “unlikely” to have been generated by AI. In its blog post, however, OpenAI said the classifier incorrectly labels human-written text as AI-written 5% of the time.
Despite the potential for false positives, Leike said the company hopes the tool will spark conversations about AI literacy and possibly deter people from claiming that AI-written text was produced by a human. He said the decision to release the new feature also stems from the debate over whether humans have a right to know when they are interacting with AI.
“This question is much bigger than what we’re doing here,” he said. “Society as a whole must grapple with this question.”
OpenAI said it encourages the general public to share feedback on the new classifier. Ahmad said the company continues to speak with educators ranging from K-12 teachers to those at the university level and beyond, including at Harvard University and Stanford’s design school.
The company sees its role as “a mentor to educators,” according to Ahmad, in the sense that OpenAI wants to make them more “aware of the technologies and what they can and shouldn’t be used for.”
“We’re not teachers ourselves — we’re very aware of that — and so our goals are really to help equip teachers to effectively deploy these models in and out of the classroom,” Ahmad said. “That means giving them the language to talk about it, helping them understand the capabilities and the limitations, and then, through them, secondarily equipping students to navigate the complexities that AI already presents in the world.”