GPT-4 will make ChatGPT smarter but will not fix its bugs

The chatbot ChatGPT’s ability to hold a conversation, answer questions, and write coherent prose, poetry, and code has forced many people to reconsider the potential of artificial intelligence.

OpenAI, the startup that developed ChatGPT, today announced a highly anticipated new version of the AI model at its core.

The new algorithm, called GPT-4, follows GPT-3, a groundbreaking text generation model that OpenAI announced in 2020 and later adapted to create ChatGPT last year.

According to OpenAI, the new model performs better on a range of tests measuring the intelligence and knowledge of humans and machines. It also makes fewer mistakes and can respond to both images and text.

However, GPT-4 suffers from the same issues that have plagued ChatGPT and cause some AI experts to be skeptical of its usefulness, including tendencies to “hallucinate” false information, display problematic social bias, and misbehave or adopt disruptive personas when faced with adversarial prompts.

“Even though they’ve made great strides, it’s clearly untrustworthy,” says Oren Etzioni, professor emeritus at the University of Washington and founding CEO of the Allen Institute for AI. “It’s going to be a long time before you want a GPT running your nuclear power plant.”

OpenAI provided several demos and data from benchmarking tests to demonstrate the capabilities of GPT-4. The new model not only passed the Uniform Bar Examination, which is used in many US states to qualify attorneys, but scored in the top 10 percent of human test takers.

It also performs better than GPT-3 on other exams designed to test knowledge and reasoning in subjects such as biology, art history, and calculus. And it scores better than any other AI language model on tests designed by computer scientists to measure progress in such algorithms. “In a way, it’s more the same,” says Etzioni. “But it’s more of the same in an absolutely mind-blowing series of advances.”

GPT-4 can also perform tricks previously seen in GPT-3 and ChatGPT, such as merging and suggesting changes to pieces of text. And it can do things its predecessors couldn’t, including acting as a Socratic tutor that guides students toward correct answers, and discussing the content of photos. Given a photo of ingredients on a kitchen counter, for example, GPT-4 can suggest an appropriate recipe; given a diagram, it can explain the conclusions that can be drawn from it.

“It definitely seems to have gained some capabilities,” says Vincent Conitzer, an AI professor at CMU who has started experimenting with the new language model. But he says it still makes mistakes, like suggesting nonsensical directions or presenting bogus mathematical proofs.

ChatGPT caught the public’s attention with an amazing ability to tackle many complex questions and tasks through an easy-to-use conversational interface. The chatbot doesn’t understand the world the way humans do; it simply responds with the words it statistically predicts should follow a prompt.
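To make that last point concrete, here is a toy sketch of the idea behind next-word prediction. Real models like GPT-4 use neural networks trained on vast text corpora, not simple counts; this hypothetical bigram model only illustrates the principle of picking the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# A tiny "corpus" standing in for the web-scale text a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most frequently follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model has no notion of what a cat or a mat is; it only knows which words tend to co-occur, which is why such systems can produce fluent text that is nonetheless wrong.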
