OpenAI has shuttered a tool that was supposed to distinguish human writing from AI-generated text, citing its low accuracy rate. In an update, OpenAI said it decided to end its AI classifier as of July 20th. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company said.
As it shuts down the tool to catch AI-generated writing, OpenAI said it plans to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” There’s no word yet on what those mechanisms might be, though.
OpenAI fully admitted the classifier was never very good at catching AI-generated text and warned that it could spit out false positives: human-written text tagged as AI-generated. Before the update shutting down the tool, OpenAI had said the classifier could improve with more data.
After OpenAI’s ChatGPT burst onto the scene and became one of the fastest-growing apps ever, people scrambled to grasp the technology. Several sectors raised the alarm over AI-generated text and art, particularly educators, who worried students would stop studying and simply let ChatGPT write their homework. Some schools even banned the chatbot on school grounds amid concerns about accuracy, safety, and cheating.
Misinformation via AI has also been a concern, with studies showing that AI-generated text might be more convincing than text written by humans. Governments haven’t yet figured out how to rein in AI and, thus far, are leaving it to individual groups and organizations to set their own rules and develop their own protective measures to handle the onslaught of computer-generated text. It seems that, for now, no one, not even the company that helped kickstart the generative AI craze in the first place, has answers on how to deal with it all. And it’s only going to get harder to differentiate AI work from human work.