The Securities and Exchange Commission will continue its examination and enforcement focus on financial firms’ use of artificial intelligence to ensure there’s no misrepresentation of their capabilities, according to agency leaders.
“Our approach has been an agnostic one,” said Corey Schuster, co-chief of the SEC enforcement division’s asset management unit, March 6 at the Investment Adviser Association’s annual compliance conference in Washington. “It’s not about the technology that our cases have been about; it’s been about investment advisers and how they’re representing their capabilities.”
Schuster highlighted enforcement actions taken last year against investment advisers Delphia (USA) and Global Predictions for making false and misleading statements regarding their use of AI.
The motto for firms to follow, Schuster said, is "Do what you say, say what you do."
On the same panel, Keith Cassidy, acting director of the SEC’s examinations division, said use of AI technology among SEC registrants is on the rise.
“If firms want to leverage new technologies, they certainly should be able to, but they must balance adequate supervisory controls, disclosure and governance with the utilization of that new technology,” Cassidy said. “I think it’s really important that registrants need to understand (that) the use of their tool doesn’t eliminate their (Investment) Advisers Act duty of care or loyalty to their clients, or really any of their other legal obligations.”
Cassidy added that the examinations division will be looking at registrants’ AI claims to see whether they’re properly substantiated. “We’ll be reviewing investment advisers’ disclosures about the use of artificial intelligence for accuracy and completeness, and we’ll certainly be comparing those disclosures to the firms’ actual AI practices,” he said.
Responding to a question from Gail C. Bernstein, the IAA’s general counsel and head of public policy, Cassidy said the examinations division will also focus on firms’ AI compliance policies and procedures, as well as examine whether any AI models or tools used or developed by the firms have appropriate human supervision and internal controls.