ChatGPT Will “Hallucinate” Wrong Answers for Years: Morgan Stanley

  • According to Morgan Stanley, ChatGPT will continue to give occasional wrong answers for a few more years.
  • The AI bot sometimes “hallucinates,” meaning it generates answers that appear convincing but are actually wrong, according to the bank.
  • The way forward for ChatGPT and AI is to run the technology on edge devices, including cell phones, Morgan Stanley said.

According to Morgan Stanley, ChatGPT will continue to “hallucinate” occasional wrong answers for years to come and won’t take off until it’s on your phone.

In a note on Wednesday, the US investment bank highlighted the shortcomings of the AI chatbot and said it occasionally makes up facts. “Speaking of high-accuracy tasks, it’s worth noting that ChatGPT sometimes hallucinates and can generate answers that appear convincing but are actually incorrect,” wrote Morgan Stanley analysts led by Shawn Kim.

“At this stage, the best practice is to have highly skilled users spot the bugs and use generative AI applications to complement existing work, not as a replacement,” they added.

ChatGPT, developed by OpenAI, shot to prominence after Microsoft pumped $10 billion into the company. While its debut sparked a sudden frenzy in AI stocks, it also drew criticism. Academics have warned that platforms like ChatGPT could spread misinformation. For example, Insider’s Samantha Delouya asked the language tool to write a news article, and it spat out fake quotes from Carlos Tavares, CEO of Jeep maker Stellantis.

Prominent voices like Mark Cuban have criticized the chatbot for this reason, saying the tool will only make misinformation worse.

“Accuracy will remain a challenge for years to come,” Morgan Stanley’s Kim said of ChatGPT.

But there may be a solution to the inaccuracies of AI platforms: connecting large language models like ChatGPT to specialized domains that can verify specific information, the analysts said.

At the same time, according to the bank, tools like ChatGPT could be significantly improved by edge computing. “However, to scale AI to even more applications, they would need to run directly on edge devices, which typically don’t have high-performance GPUs embedded,” Kim said.

Edge computing is a model that places computing power closer to where data is created in the physical world, for example in mobile phones, smart cameras, or in-car computers. Kim highlighted four reasons why running AI at the edge is beneficial: it minimizes latency compared with cloud computing, costs less, improves privacy, and is more consumer-friendly.

“At the current stage it is still mainly used for writing text/code. But we’re just at the beginning of the technology curve and expect exponential growth through different versions,” Kim said in reference to ChatGPT.
