- Some of Wall Street’s largest banks, including Citi and Morgan Stanley, are in the process of forming a working group aimed at understanding the risks associated with using artificial intelligence, according to three sources involved in the project.
- The group, which initially began to form earlier this year, comes at a time when financial firms are eager to implement AI across various business lines but lack a complete understanding of the complex technology and guidance from regulators.
- While no goals have officially been set, sources said, the overall hope is to develop better guidelines about how to pursue using AI.
Wall Street is as competitive as it gets, but sometimes everyone benefits from working together. And when it comes to understanding the intricacies of artificial intelligence, some of Wall Street’s largest financial firms have recognized the benefit of putting their heads together.
Citi and Morgan Stanley are among a group of large global banks banding together to create a working group examining the potential risks they may face when using artificial intelligence, according to three sources involved in the project.
While it’s still early days, and specific goals for the group haven’t been established, the hope is that by working together Wall Street will develop a better understanding of how best to use the innovative technology appropriately, according to the sources. The impetus for forming the group earlier this year, one source said, was the recognition that there are risks around the usage of artificial intelligence that needed to be addressed as Wall Street continues to adopt more of the tech.
Spokespeople for Morgan Stanley and Citi declined to comment.
Wall Street stands to benefit from taking a unified approach to understanding how best to use AI in the absence of direction from regulators. Rulemakers have largely avoided crafting specific regulation pertaining to the appropriate use of AI in finance.
Meanwhile, banks have put significant resources toward development of AI-based tools in recent years with the hopes of cutting costs and gaining a competitive edge. As firms get more comfortable with the technology, the laundry list of use cases where AI can be applied to improve manual, labor-intensive processes continues to grow. Banks are experimenting with applying AI to everything from chatbots and fraud detection to more market-facing areas such as trading and risk management.
According to a report published by IHS Markit in April, the global business value of artificial intelligence in finance will be $300 billion by 2030. For 2018, the report estimated $41.1 billion in cost savings and efficiencies was realized thanks to AI’s use on Wall Street.
However, for all the promises of AI improving how things are done, risks still remain. Interpretability and explainability are two major hurdles. The former refers to understanding how an AI-based tool reaches a solution. The latter is the ability to explain to someone — like a regulator — how that solution was reached.
With the use of more sophisticated AI, such as deep learning techniques that involve series of complex, ever-evolving calculations, it becomes increasingly difficult for firms to have full transparency into how AI-based tools work. And while the use of so-called “black boxes” might be acceptable in some areas of the bank, applying them to spaces that directly touch consumers — such as credit decisions — will likely be deemed unacceptable by regulators.
There’s also the impact the use of the technology will have on the current workforce. For many, AI tools will make jobs easier. However, others might find themselves out of work. By 2030, 1.3 million US workers will have their jobs affected in some way by the introduction of AI, according to the IHS Markit report.
While regulators have yet to propose AI-specific rules for finance, that could soon change. Recently, Jelena McWilliams, chairman of the Federal Deposit Insurance Corporation, said at a fintech event that her agency would push forward with guidance around AI usage for banks if other regulators couldn’t agree on a joint proposal, as first reported by American Banker.
The working group isn’t the first time Wall Street has chosen to take a look at the potential risks posed by AI usage. Bank of America became the founding donor of the Harvard Kennedy School’s Council on the Responsible Use of Artificial Intelligence in 2018. The goal of that group is to bring together leaders in business, academia, and government to better understand the appropriate usage of AI.
Cathy Bessant, Bank of America’s chief operations and technology officer, spoke at the time about the new territory the use of AI was bringing firms into.
“Our legal and judicial structures have no charted path for a lot of this,” Bessant said. “We need to ask the right questions and see to it that deployment of AI doesn’t get ahead of the structures and the infrastructure needed to support it.”