Hedge funds' use of artificial intelligence to inform trading decisions poses risks to investors and financial markets, which is why regulators in Washington should promulgate new rules and guidance to patch existing gaps, according to a report from the Senate Homeland Security & Governmental Affairs Committee.
“As more hedge funds and other investment vehicles use AI, and as AI’s development and potential use cases advance, the risks identified in this report will increase,” a committee report released by Chair Gary Peters, D-Mich., noted. “Congress and regulators need to ensure the public has a better understanding of how current regulations apply to AI technology and establish baseline guardrails applicable to all, to address risks related to the use of AI technology in the financial services sector.”
As part of a committee investigation, staff received information from six hedge funds, each with a different structure and each using AI in different ways: Citadel, Renaissance Technologies, Bridgewater Associates, AI Capital Management, Numerai and WorldQuant, according to the report.
Hedge funds use AI to inform aspects of trading decisions such as pattern identification and portfolio construction, the committee staff report noted. But hedge funds use different terms to name and define their AI-based systems and do not have uniform requirements or a shared understanding of when human review is necessary in trading decisions, staff added.
“All six hedge funds majority committee staff spoke with said that humans review their AI systems and trading decisions,” staff said. “However, some hedge funds largely rely on these systems, while others told majority committee staff they believe human intuition is required when making trading decisions, and none defined a specific point in time where that intervention must exist.”
With respect to regulators, committee staff were told that existing regulations and obligations apply to hedge funds and investment advisers’ use of AI.
“However, it is unclear how the existing framework specifically applies to the use of AI,” the report said.
Also, the public lacks clarity on the degree and scope of risks related to the use of AI and machine learning strategies in the financial sector, according to the report.
The committee made seven recommendations to improve the current landscape, including calling on the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) to define guidelines and standards for how hedge funds name and refer to trading systems that utilize AI, and to create operational baselines for the use of AI by hedge funds to inform trading decisions.
Moreover, the SEC and CFTC should continue to examine potential gaps in regulations and propose rules to address unique concerns posed by AI and AI-related technologies and require hedge funds to audit AI trading systems on a standardized basis, the report said.
Separately, companies should more clearly disclose to their investors what AI technology is used and for what purposes, the report recommended.
“As hedge funds and the financial sector at large increasingly use AI to inform trading decisions, it is critical that there are safeguards in place to ensure the technology is being used in a way that minimizes potential risks to individuals and to market stability itself,” Peters said in a statement. “My report and recommendations will help encourage responsible development, use, and oversight of AI across the financial industry by identifying needed reforms to establish a cohesive regulatory framework.”
Bryan Corbett, president and CEO of the Managed Funds Association, took issue with the report.
"This partisan report fails to recognize long-accepted current market practices which serve investors and enhance market efficiency," he said in a statement. "It does not accurately portray how the alternative asset management industry uses emerging technologies like AI. The overtly-political report spreads misinformation about an entire industry, is not representative of the industry today, and sets back public discourse around an important topic."