Computing Guru Criticizes ChatGPT AI Technology For Inventing Things

Vint Cerf, one of the founding fathers of the internet, has some harsh words for the suddenly hot tech behind AI chatbot ChatGPT: “snake oil.”

Google’s internet evangelist wasn’t entirely convinced by the artificial intelligence technology behind ChatGPT and Google’s own competing Bard, known as large language models. Speaking at Celesta Capital’s TechSurge Summit on Monday, he cautioned about the ethical issues raised by a technology that can generate plausible-sounding but incorrect information, even when trained on factual data.

If an executive tried to get him to use ChatGPT on a business problem, his response would be to call it snake oil, a reference to the fraudulent remedies sold by quacks in the 19th century, he said. Another of his ChatGPT metaphors involved kitchen appliances.

“It’s like a salad shooter — you know how the lettuce goes everywhere,” Cerf said. “The facts are all over the place, and it mixes them up because it doesn’t know any better.”

Cerf received the 2004 Turing Award, the highest honor in computing, for helping develop the internet’s foundational protocol suite, TCP/IP, which transfers data from one computer to another by breaking it into small, individually addressed packets that can take different paths from source to destination. He’s not an AI researcher, but he is a computer engineer who would love to see his peers improve on AI’s shortcomings.
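To make the packet idea concrete, here is a minimal, illustrative Python sketch (not the actual TCP/IP implementation; the hosts, message, and field names are invented for the example) in which a message is split into individually addressed packets, arrives out of order as if routed over different paths, and is reassembled by sequence number:

```python
# Toy illustration of packet switching: split a message into small, addressed
# packets, shuffle them to mimic different routes, then reassemble by sequence.
# This is a simplified sketch of the concept, not the real TCP/IP protocol.
import random

def send(message: str, packet_size: int = 8) -> list[dict]:
    """Split a message into numbered packets with source/destination headers."""
    packets = [
        {"src": "A", "dst": "B", "seq": i, "data": message[i:i + packet_size]}
        for i in range(0, len(message), packet_size)
    ]
    random.shuffle(packets)  # simulate packets arriving out of order
    return packets

def receive(packets: list[dict]) -> str:
    """Reassemble the original message by sorting packets on sequence numbers."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

print(receive(send("Packets can take different paths and still arrive intact.")))
```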

OpenAI’s ChatGPT and competitors like Google’s Bard have the potential to significantly transform our online lives by answering questions, composing emails, summarizing presentations, and performing many other tasks. Microsoft has begun integrating OpenAI’s language technology into its Bing search engine, posing a major challenge to Google, but it uses its own index of the web to try to “ground” OpenAI’s flights of fancy in authoritative, trusted documents.

Cerf said he was surprised to learn that ChatGPT could fabricate false information from a factual foundation. “I asked it, ‘Write me a bio of Vint Cerf.’ It got a few things wrong,” Cerf said. That experience led him to the technology’s inner workings: it uses statistical patterns identified in vast amounts of training data to construct its responses.

“It knows how to string together a sentence that’s probably grammatically correct,” but it has no real knowledge of what it’s saying, Cerf said. “We are still a long way from the self-awareness that we wish for.”
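As a rough, hypothetical illustration of what those “statistical patterns” mean, the Python sketch below builds a toy bigram model: it counts which word follows which in a tiny made-up corpus, then samples the next word from those counts. The output has plausible word order but no notion of whether it is true. Real large language models are vastly more sophisticated, but the fluent-yet-fact-blind behavior Cerf describes stems from this kind of pattern matching.

```python
# Toy bigram language model: learn which word tends to follow which, then
# generate text by sampling from those counts. Purely illustrative; the corpus
# is invented and far smaller than anything a real model trains on.
import random
from collections import Counter, defaultdict

corpus = "vint cerf helped design tcp ip . vint cerf won the turing award .".split()

# Count, for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        candidates, counts = zip(*followers.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("vint"))  # plausible-sounding word order, no grasp of facts
```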

OpenAI, which launched a $20-a-month subscription plan for ChatGPT in early February, is aware of the technology’s shortcomings and aims to improve it through “continuous iteration.”

“ChatGPT sometimes writes answers that sound plausible but are incorrect or nonsensical. Fixing this issue is a challenge,” the AI research lab said when it launched ChatGPT in November.

Cerf is also hoping for progress. “Engineers like me should be responsible for finding a way to tame some of these technologies so they cause fewer problems,” he said.

Cerf’s comments contrasted with those of another Turing Award winner at the conference, chip design pioneer and former Stanford President John Hennessy, who offered a more optimistic assessment of AI.

Editor’s note: CNET uses an AI engine to create some personal finance explainers, which are edited and fact-checked by our editors. See this post for more information.
