OpenAI’s new model is better at reasoning and, occasionally, deceiving

[Photo collage of a computer with the ChatGPT logo on the screen. Illustration by Cath Virginia / The Verge | Photos by Getty Images]

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue: the model produced incorrect outputs in a new way. Or, to put it more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model’s chain of thought — a feature that’s supposed to mimic how humans break down complex ideas — internally acknowledged that it couldn’t access URLs, making the request impossible. Rather than inform the user of this limitation, o1-preview pushed ahead, generating plausible but fake links along with descriptions of them.

While AI models…

