A study by the Tow Center for Digital Journalism at Columbia University has raised concerns about the reliability of OpenAI's new ChatGPT-powered search engine. The research indicates that the service struggles to attribute sources accurately.
While the AI search engine provides quick answers with links to web sources, the researchers found it often misidentifies where quotes originated. They tested the engine with 200 quotes from 20 publications, including 40 quotes from publishers that have blocked OpenAI's crawlers from accessing their sites.
ChatGPT gave partially or completely incorrect answers in 153 of the 200 cases, but acknowledged that it could not answer accurately in only seven instances. In those seven cases, the researchers noted, it used qualifying language such as "seems," "possibly," or "may," and statements like "I couldn't find the exact article."
In one instance, the search engine incorrectly attributed a quote from a letter to the editor in the *Orlando Sentinel* to an article published by *Time*. In another, it was asked to find the source of a quote from a *New York Times* article about endangered whales. ChatGPT linked to a website that had simply copied the text of the *Times* article.
OpenAI has responded to the study, saying the findings are difficult to assess without full access to the researchers' data and methodology. The company has, however, pledged to keep improving the search engine's accuracy.