
Citing “low accuracy,” the maker of ChatGPT has withdrawn its AI detection tool.

OpenAI, the maker of ChatGPT, has removed an artificial intelligence (AI) detection tool that was meant to help educators and other professionals spot AI-generated writing.

Citing a “low rate of accuracy,” OpenAI quietly discontinued the tool last week, as explained in an update to the blog post that originally announced it.

“We are researching more effective provenance techniques for text and are working to incorporate your feedback,” the company said in the update. OpenAI also stated that it will help “users to understand if audio or visual content is AI-generated.”

The move may reignite questions about how prepared the companies building the latest generation of generative AI tools are for the safety implications of their products. It also coincides with the first pre-school through 12th grade school year in which tools like ChatGPT are widely available in classrooms.

The rapid ascent of ChatGPT late last year led some teachers to worry that students could cheat on written assignments more easily than ever before. New York City and Seattle public schools banned ChatGPT from district-issued devices and networks. Even though it was unclear how widespread use of the program was among students, or how harmful it could really be to learning, some educators moved with remarkable speed to rethink their assignments in response to ChatGPT.

In this context, OpenAI unveiled its AI detection tool in February, letting users check whether a given essay was written by a human. The feature worked on English AI-generated text using a machine learning classifier, a system that takes an input and assigns it to one of several categories. When users pasted in a piece of writing, such as a school essay, the tool returned one of five results, ranging from “likely generated by AI” to “very unlikely.”
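The five-bucket output described above can be sketched as a simple mapping from a classifier's AI-likelihood score to a verdict label. This is a minimal illustration, not OpenAI's implementation: the threshold values below are assumptions chosen for the example, and only the five label categories come from the article.

```python
# Sketch of a five-bucket verdict mapping like the one the article describes.
# The thresholds are illustrative assumptions, not OpenAI's actual cutoffs.

VERDICTS = [
    (0.90, "likely AI-generated"),
    (0.70, "possibly AI-generated"),
    (0.40, "unclear if it is AI-generated"),
    (0.10, "unlikely AI-generated"),
    (0.00, "very unlikely AI-generated"),
]

def verdict(ai_probability: float) -> str:
    """Map a model-assigned AI-likelihood score in [0, 1]
    to one of five human-readable verdicts."""
    for threshold, label in VERDICTS:
        if ai_probability >= threshold:
            return label
    return VERDICTS[-1][1]

print(verdict(0.95))  # falls in the highest bucket
print(verdict(0.02))  # falls in the lowest bucket
```

A design like this makes the tool's uncertainty explicit to the reader, which matters given that OpenAI itself warned the results should be “taken with a grain of salt.”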

On the day of its release, however, OpenAI acknowledged the tool was “imperfect” and that its results should be “taken with a grain of salt.”

Lama Ahmad, policy research director at OpenAI, said at the time, “We really don’t encourage taking this tool in isolation, since we know that it can and will be inaccurate at times – much like using AI for any kind of assessment purposes.”

Ahmad cautioned that teachers “need to be really careful” about how they factor the tool into academic dishonesty decisions, even though it could help inform such determinations when weighed alongside prior samples of a student’s work and writing style.

OpenAI may have shelved its tool for now, but other options are available.

Companies like Turnitin have released AI-powered plagiarism detection tools that can help instructors spot automated writing in the classroom. Edward Tuan, a student at Princeton, has also developed a comparable AI detection tool, which he calls ZeroGPT.
