The SEC held a roundtable on artificial intelligence March 27, in which experts discussed the potential risks and benefits of using the technology in financial services.
In July 2023, the SEC issued a rule proposal, often referred to as the “predictive data analytics” proposal, that would require investment advisers and broker-dealers to “eliminate or neutralize” conflicts of interest that arise from investor interactions using certain technology, such as AI. After the proposal faced strong industry pushback, the agency said it planned to repropose the rule, according to its regulatory agenda issued in July.
Acting SEC Chair Mark Uyeda and Republican Commissioner Hester Peirce both criticized the original proposal in their opening remarks at the roundtable.
Uyeda said he’s “been concerned with some recent commission efforts that might effectively place unnecessary barriers on the use of new technology.”
Peirce said the commission “fell victim to … sensationalism (of AI) when it attempted to broadly and clumsily regulate the use of predictive data analytics by broker dealers and investment advisors.”
Caroline Crenshaw, the lone Democrat on the commission, did not criticize the proposal but did acknowledge that many “felt that the scope of the proposal was not appropriate,” adding that a roundtable “would be really beneficial.”
In one panel on “The Benefits, Costs, and Uses of AI in the Financial Industry,” experts discussed everything from how to define AI to what some of its barriers to adoption are.
One of the challenges with using AI is having the confidence to say the program is wrong, said Hilary Allen, a law professor at American University Washington College of Law. “Because we have these automation biases … we tend to think anything spit out of a computer is better than what we come up with ourselves. It takes a lot to be able to say, ‘No, the machine is wrong.’”
Notably, that would require everyone “who uses AI to be smarter than their AI or know more about the area than AI,” which may be unrealistic, Allen added.
On the flip side, using AI can offer a return on investment in several ways, according to Daniel Pateiro, managing director, office of chief operating officer, strategic initiatives/artificial intelligence at BlackRock.
Aside from things like operational efficiency and revenue, “there’s also some inorganic ways of looking at the ROI too,” Pateiro said, such as the time that AI users are saving that can be spent on more complicated challenges.
Pateiro said that BlackRock is using AI for algorithmic pricing and “as an investment process augmentation tool,” which helps to achieve “optimal trading strategies which assist with achieving best execution as well as reducing transaction costs.”
In another panel, on “Fraud, Authentication, and Cybersecurity,” experts warned of the ways AI is making things easier for fraudsters.
Fraudsters often reach out to their victims to build trust before asking for money, and “one of the ways that AI helps do that is by utilizing technologies such as deep fakes to potentially change an appearance as part of a video conversation, to change a voice, to even help assist with how those early communications may come across if there's a potential language barrier,” said Kristen McCooey, chief information security officer at Edward Jones. “So, they're using AI to help build a persona that's more appealing to their victim and … that the victim finds trustworthy.”