Saturday, November 16

OpenAI says its tool for identifying AI-generated photos is 99% accurate

OpenAI is developing a programme that can detect, with high accuracy, photos generated by artificial intelligence (AI).

Mira Murati, chief technology officer of OpenAI, the company behind the well-known chatbot ChatGPT and the image generator DALL-E, said on Tuesday that the tool is “99% reliable” at identifying whether a photo was created using AI. She said it is undergoing internal testing ahead of a planned public release but did not give a time frame.

Sam Altman, the chief executive of OpenAI, and Murati both spoke at the Wall Street Journal Tech Live conference in Laguna Beach, California.

A few such tools already exist, but they do not always accurately identify photographs or other content created by AI. OpenAI, for instance, released a similar tool in January intended to detect AI-generated text, but it was discontinued in July because of reliability issues. The company said it remained committed to improving that software and to finding ways to determine whether audio or visual media had been produced with AI.

The need for such detection systems is growing, given that AI tools can be used to alter or fabricate news coverage of major world events. Adobe Inc.’s Firefly image generator addresses another part of the problem by guaranteeing that it will not produce content that violates creators’ intellectual property rights.

The OpenAI executives also shared some details on Tuesday about the AI model that will follow GPT-4. The startup applied for a “GPT-5” trademark with the US Patent and Trademark Office in July, though it has not publicly said what GPT-4’s successor will be called.

Chatbots like ChatGPT, which runs on GPT-4 and an earlier model, GPT-3.5, have a tendency to fabricate information, a phenomenon known as hallucination. Asked whether a GPT-5 model would be less prone to this, Murati responded, “Maybe. We’ll see.”

“With GPT-4, we’ve made a lot of progress, but we’re still not where we need to be,” she said.

Altman also addressed the possibility that OpenAI could develop its own computer processors for training and running its AI models, rather than relying on those supplied by chipmakers such as the market leader Nvidia Corp.

The natural course of action would be to say no, Altman said, “but I would never rule it out.”
