About two seconds after Microsoft let people poke around with its new ChatGPT-powered Bing search engine, users started noticing that it was responding to some questions with incorrect or nonsensical answers, such as conspiracy theories. Google had its own embarrassing moment when scientists spotted a factual error in the company's advertisement for its chatbot Bard, which subsequently wiped $100 billion off its market value.
These stumbles are all the more shocking because they should not have surprised anyone who has researched AI language models.
Here's the problem: the technology just isn't ready to be deployed at this scale. AI language models are notorious bullshitters, often presenting untruths as facts. They are excellent at predicting the next word in a sentence, but they have no idea what the sentence actually means. That makes it incredibly dangerous to combine them with search, where getting the facts straight is crucial.
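To see what "predicting the next word" means in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the publicly available GPT-2 model (not the models behind Bing or Bard): given a prompt, the model simply scores every token in its vocabulary as a possible continuation, and nothing in that process checks whether the highest-scoring continuation is actually true.

```python
# Illustrative sketch using the public GPT-2 checkpoint; the prompt is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score (logit) for every vocabulary token at each position.
    logits = model(**inputs).logits

# Look at the distribution over the *next* token and print the top candidates.
# The model picks whatever scores highest, whether or not it is factually correct.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```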
OpenAI, the creator of the hit AI chatbot ChatGPT, has always emphasized that the chatbot is still a research project and that it keeps improving as it gets people's feedback. That hasn't stopped Microsoft from integrating it into a new version of Bing, albeit with caveats that the search results may not be reliable.
Google has used natural language processing for years to help people search the web using full sentences instead of keywords. So far, however, the company has been reluctant to integrate its own AI chatbot technology into its signature search engine, says Chirag Shah, a professor at the University of Washington who specializes in online search. Google leadership was concerned about the “reputational risk” of rushing out a ChatGPT-like tool. The irony!
Big Tech’s recent failures don’t mean AI-powered search is a lost cause. One way Google and Microsoft have tried to make their AI-generated search summaries more accurate is by offering citations. Linking to sources allows users to better understand where the search engine gets its information from, says Margaret Mitchell, a researcher and ethicist at AI startup Hugging Face, who used to lead Google’s AI ethics team.
It might even help give people a more diverse perspective on things, she says, by getting them to consider more sources than they otherwise would.
But that doesn't change the fundamental problem: these AI models invent information and confidently present untruths as facts. And ironically, if AI-generated text looks reliable and cites its sources, that could end up making users even less likely to double-check the information they're seeing.