Morgan Stanley is testing OpenAI’s chatbot that sometimes ‘hallucinates’ to see if it can help financial advisors

OpenAI’s chatbot, ChatGPT, has gained millions of users for its ability to generate long-form prose from short prompts.

Despite its rapid adoption, the chatbot isn’t without its problems. The tool can sometimes make errors, or “hallucinate,” analysts at investment bank Morgan Stanley said in a note last month. At the time, the analysts wrote that ChatGPT can “generate answers that are seemingly convincing, but are actually wrong.” 

Yet that hasn’t stopped Morgan Stanley from testing an OpenAI-powered chatbot with its financial advisors. The goal is to help the advisors make the most of the bank’s huge library of research and data resources, the company announced Tuesday.

The experiment is aimed at “helping investment professionals parse through thousands of pages of our in-depth intellectual capital, analyst commentary, and market research in seconds – a process that typically could take more than half an hour,” Morgan Stanley Wealth Management’s head of analytics and data, Jeff McMillan, told Fortune. “This will help advisors spend more time focusing on serving their clients.”

The tool will use the technology’s latest version, GPT-4, which launched on Tuesday. It’s currently being tested with 300 advisors, CNBC reported, and, when released more broadly, will aid all of Morgan Stanley’s 16,000 advisors.

The move marks the first known use of ChatGPT in the banking sector, which has faced a tumultuous few days amid the failure of Silicon Valley Bank. In the financial technology space, Stripe, an online payments processing company, is also testing a version of ChatGPT to fight fraud. 

Morgan Stanley is no stranger to artificial intelligence. The bank already uses it to understand client needs and match clients with the right financial advisors.

The bank intends GPT-4 to assist its human advisors, much as a research assistant would, not to replace them; advisors are still needed to interact with clients.

“These things (A.I. tools) don’t have any empathy; they’re just very clever math that is able to regurgitate knowledge,” McMillan told CNBC.

Even though Morgan Stanley’s analysts acknowledged ChatGPT’s struggle with accuracy, they didn’t write off the technology entirely. In a February note, they wrote that the “A.I. hype is worth considering seriously” and that it had “real market potential.”

McMillan said that using ChatGPT in banks comes with inevitable complications. For example, the banking business is highly regulated and involves the vital function of handling people’s money. 

“What makes the work we’re doing particularly interesting is that it is no small feat to integrate technology into a highly complex and regulated environment like ours, and to do so with the appropriate controls installed,” McMillan told Fortune.

OpenAI did not immediately return Fortune’s request for comment.
