Banks have used A.I. for decades—but now it’s going to take off like never before



A.I. is now the buzzword of buzzwords, and for so many who have prompted the chatbot ChatGPT or image generator Stable Diffusion for the first time, the output of full-length essays or photorealistic images in seconds is astounding. It’s no wonder that CEOs and CFOs so frequently point to A.I.’s potential to transform their businesses. 

“What has been powerful is the vividness of generative A.I.—the fact that CFOs can play with it themselves,” Michael Birshan, global co-leader of McKinsey’s strategy and corporate finance practice, recently told Fortune.

What’s often overlooked amid the recent hype, however, is that A.I. has been integral to the financial industry for at least a decade and—depending on how you define artificial intelligence—arguably much longer.

“Conceptually, it’s not new,” Neal Baumann, the global financial services leader at Deloitte, told Fortune, in reference to A.I. “What’s happening is, we’ve probably taken arguably a pretty significant step change in the last two years or so around sophistication.”

Here’s how A.I. has already changed finance—and what could come next.

Regression to the ATM

Computers have automated the financial industry for more than 50 years.

In the early 1970s, banks adopted the first automated teller machines (ATMs), and by 1984, 42% of American families had ATM cards, according to economist James Bessen in Learning by Doing.

Similarly, in 1975, Vanguard unveiled the world’s first index fund, a “passively managed portfolio of investments that are bought and sold based on a static algorithm,” writes Seth Oranburg, a law professor who recently published A History of Financial Technology and Regulation.

These early automations, however, relied on static rather than predictive algorithms: computers were not making decisions so much as executing simple, programmatic instructions. That changed once financial institutions began using regression models widely in their operations, according to Gal Krubiner, CEO and cofounder of the A.I.-powered loan facilitator Pagaya.

“A.I. is actually a set of models whose whole purpose is to predict something,” he told Fortune. And if artificial intelligence is, at its core, computer-generated prediction, then regressions, or statistical models that take in existing data to forecast trends, would be some of the first to be widely implemented in the financial industry, he said. 

In the 1990s, he estimated, lenders started using these regression models—which ingest a customer’s outstanding debt, income, and a variety of other attributes—to predict whether that customer would qualify for a specific loan.

And the practice spread to most, if not all, corners of finance, from insurance underwriting to fraud detection to market analysis and trading. Even now, Krubiner told Fortune, complex regression models form the backbone of finance.

“That is still the most dominant way to price loans in the U.S. in many markets,” he said. 
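
To make that concrete, below is a minimal sketch of the kind of regression-based credit scoring Krubiner describes. It uses scikit-learn, and the applicant features (income, outstanding debt, credit history), the repayment labels, and the 0.5 cutoff are entirely invented for illustration; this is a sketch of the general technique, not Pagaya's or any lender's actual model.

```python
# Hypothetical sketch of regression-based loan scoring.
# Features, labels, and the 0.5 cutoff are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy history of past borrowers: [annual_income, outstanding_debt, years_of_credit]
X_train = np.array([
    [85_000,   5_000, 10],
    [42_000,  30_000,  3],
    [120_000, 10_000, 15],
    [35_000,  25_000,  2],
    [60_000,   8_000,  7],
    [28_000,  40_000,  1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (fabricated)

# Scale the features, then fit a logistic regression on the toy outcomes
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new applicant and turn the predicted probability into a decision
applicant = np.array([[55_000, 12_000, 5]])
p_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {p_repay:.2f}")
print("Qualifies" if p_repay >= 0.5 else "Does not qualify")
```

A production model would draw on far more attributes and vastly more data, but the mechanics are the same: fit a statistical model on past outcomes, then use its predicted probability to decide whether a new applicant qualifies and at what price.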

The advent of A.I.

Beginning in the late 2000s and early 2010s, A.I. as we know it began cropping up across the financial industry. Even A.I.-powered chatbots designed specifically for banks aren’t new, according to Zor Gorelov, cofounder and CEO of Kasisto, a company that creates what it calls “intelligent digital assistants” for financial institutions.

Gorelov cofounded Kasisto, a spinoff from SRI International (originally called Stanford Research Institute), in 2013. Since then, he and his company, named after the word for “bank teller” in Esperanto, have worked to develop A.I.-powered chatbots for a large swath of the industry, including JPMorgan Chase and Westpac, the Australian banking behemoth.

“All of the algorithms that we’re hearing about now…were invented in the ’80s,” Gorelov told Fortune. “We just didn’t have the compute power and the data to be able to build the systems that we’re building now.”

But the chatbots that Kasisto designed were not the only implementations of artificial intelligence in finance.

Before the ChatGPT hype, and as early as 2017, companies like AppZen were already selling large corporations on A.I. models that detect fraudulent charges on employees’ expense reports. That same year, JPMorgan was also reported to be using A.I. to synthesize and interpret commercial loan agreements. And as of 2019, others were already designing A.I. bots to automate bond sales.

Yet even now, adoption of “advanced analytics,” or data-powered decision making (which includes A.I.), still varies across firms, according to Vik Sohoni, global leader of digital and analytics for banking at McKinsey. The use of advanced analytics “is really a cultural facet of any institution: how much you rely on data analytics versus gut instinct,” he said.

The rise of generative A.I.

Regardless of the breadth of adoption, artificial intelligence is deeply integrated into finance. Yet, experts are clear that this recent wave of machine-learning fervor is, as so many like to call it, an “inflection point.”

“We know that A.I. has been applied in call centers and on online chat for a long time now,” Baumann of Deloitte told Fortune. “How you can scale that experience and apply that experience consistently has gone way through the roof.”

Sohoni, a senior partner at McKinsey, says that generative A.I. models, which “generate” new content based on terabytes and terabytes of training data, can spread “personalization writ large” across finance.

This means that banks could conceivably generate custom credit cards with financial rewards directly targeted to a consumer who, for example, dines out more often than not, never shops on Amazon, and flies weekly. Or generative A.I. could lead institutions to create personalized bundles of financial products—a custom checking account, credit card, loan options, etc.—for one customer.

But with generative A.I.’s power comes a laundry list of risks. There are already alleged instances, for example, of bias seeping into algorithmic finance. In 2019, Apple released the Apple Card, and allegations soon followed that the algorithm evaluating applicants’ creditworthiness gave women significantly lower borrowing limits than men.

Sohoni also pointed out other risks, including regulatory compliance on privacy and the capacity for models like ChatGPT to “hallucinate,” or simply make things up.

He is cautiously optimistic about the future of A.I., however, and urges institutions to step carefully. “Financial institutions, like a lot of regulated industries, have to be very prudent,” he said. “You can’t afford to break the trust the customer has in you.”