ChatGPT maker releases AI detection tool

San Francisco - The maker of ChatGPT is trying to curb its reputation as a free cheating machine with a new tool that can help teachers detect whether a student or an AI wrote that homework.

The new AI Text Classifier, launched on Tuesday by OpenAI, follows weeks of discussion in schools and colleges over concerns that ChatGPT’s ability to write almost anything on command could fuel academic dishonesty and hinder learning.

OpenAI warns that the new tool – like others already available – is not foolproof. Jan Leike, head of the OpenAI alignment team tasked with making its systems safer, said the method for detecting AI-written text “is imperfect and it will sometimes be wrong.”

“For this reason, it should not be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched on November 30 as a free application on the OpenAI website. And while many have found ways to use it creatively and harmlessly, the ease with which it can answer take-home test questions and help with other assignments has created consternation among some teachers.

By the time schools opened for the new year, New York City, Los Angeles, and other large public school districts had begun blocking its use in classrooms and on school devices.

The Seattle Public School District initially banned ChatGPT on all school devices in December, but then opened access to educators who wanted to use it as an educational tool, said Tim Robinson, a district spokesperson.

“We can’t ignore that,” said Robinson.

The district is also discussing the possibility of extending ChatGPT use in classrooms to allow teachers to use it to train students to be better critical thinkers and to allow students to use the app as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.

School districts across the country say they see the conversation around ChatGPT evolving rapidly.

“The initial reaction was, ‘OMG, how are we going to stop the wave of cheating that’s going to happen with ChatGPT,’” said Devin Page, a technology specialist with Calvert County Public Schools in Maryland. Now, he said, there is a growing realization that blocking the tool is not the solution.

Districts like his, Page said, will eventually unblock ChatGPT, especially once the company’s detection service is in place.

OpenAI acknowledged its detection tool’s limitations in a blog post on Tuesday, but said that in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuse of artificial intelligence to mimic humans.

The longer the text, the better the tool is at detecting whether an AI or a human wrote it. Paste in any passage – a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” – and the tool will label it as “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated.

But much like ChatGPT itself, which was trained on a huge collection of digitized books, newspapers and online writings but can often confidently spit out falsehoods or nonsense, it is not easy to explain how the classifier arrives at a result.

“We basically don’t know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s not much we can say at this point about how the classifier actually works.”

Higher education institutions around the world have also begun to discuss the responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, banned its use last week and warned that anyone surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.

In response to the backlash, OpenAI said it has been working for several weeks on drafting new guidelines to help educators.

“Like many other technologies, a district may decide that it is not suitable for use in its classrooms,” said OpenAI policy researcher Lama Ahmad. “We’re not pushing them one way or the other. We just want to give them the information they need to be able to make the right decisions for them.”

It’s an unusually public role for a research-oriented San Francisco startup that is now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.

French Digital Economy Minister Jean-Noel Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland, that he was optimistic about the technology. But the government minister — a former professor at MIT and the French business school HEC in Paris — said there were also difficult ethical questions to address.

“If you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine, because ChatGPT will have a hard time producing what is expected of you in a graduate-level economics program.”

He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases may exist.

___

O’Brien reported from Providence, Rhode Island. Associated Press writer John Leicester contributed to this report from Paris.

Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed without permission.
