
ChatGPT’s search results for news are ‘unpredictable’ and frequently inaccurate

A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
Illustration: The Verge

Based on testing by researchers at Columbia’s Tow Center for Digital Journalism, OpenAI’s ChatGPT search tool frequently fails to respond with the truth.

OpenAI launched the tool for subscribers in October, saying it could give “fast, timely answers with links to relevant web sources.” Instead, Futurism points out that the researchers said ChatGPT search struggled to correctly identify quotes from articles, even when they came from publishers with arrangements to share data with OpenAI.

The authors asked ChatGPT to identify the source of “two hundred quotes from twenty publications.” Forty of those quotes were taken from publishers who’d disallowed OpenAI’s search crawler from accessing their site. Yet the chatbot confidently replied with false information anyway, rarely admitting it was unsure about the details it gave:

In total, ChatGPT returned partially or entirely incorrect responses on a hundred and fifty-three occasions, though it only acknowledged an inability to accurately respond to a query seven times. Only in those seven outputs did the chatbot use qualifying words and phrases like “appears,” “it’s possible,” or “might,” or statements like “I couldn’t locate the exact article.”

A chart showing how often ChatGPT answered confidently or was unsure, with a breakdown of how often its confident replies were “Wrong” (89), “Partially Correct” (57), and “Correct” (47).
Image: Columbia Journalism Review
ChatGPT was fully or partially wrong more than right, but almost always confidently so.

The Tow Center test’s authors documented ChatGPT search results that misattributed a letter-to-the-editor quote from the Orlando Sentinel to a story published in Time. In another example, when asked to identify the source of a quote from a New York Times article about endangered whales, it returned a link to a different website that had wholly plagiarized the story.

“Misattribution is hard to address without the data and methodology that the Tow Center withheld,” OpenAI told the Columbia Journalism Review, “and the study represents an atypical test of our product.” The company went on to promise to “keep enhancing search results.”
