Congress, Treasury Look to Team Up On AI “Risk” Legislation

This year, the financial sector should expect some movement in Washington, D.C., on artificial intelligence risk guardrails.

The disruption risk from artificial intelligence is significant enough that Washington's leading power players are banding together on legislation to protect U.S. financial markets from AI-driven threats.

“I think the [Biden] administration would welcome a congressional initiative in this area,” U.S. Treasury Secretary Janet Yellen told the Senate Banking Committee at a February 8 hearing.

Earlier this year, the Financial Stability Oversight Council of regulators issued a red flag over potential regulatory and compliance abuses stemming from using AI in the financial markets. “Without proper design, testing, and controls, AI can lead to disparate outcomes, which may cause direct consumer harm and/or raise consumer compliance risks,” the council noted.

The council suggested that “oversight structures” stay ahead of emerging AI risks while promoting efficiency and innovation.

Urging that government-managed "oversight structures" be put in place before AI risks outpace regulators, the FSOC also advised that "financial institutions, market participants and regulatory and supervisory authorities further build expertise and capacity to monitor AI innovation and usage and identify emerging risks."

Yellen agreed with that sentiment at the Senate hearing, noting that "[the FSOC's annual report] identified AI as a vulnerability that could create systemic risk, so we are working very hard to deepen our understanding of how that could happen and to monitor very rapidly changing developments to be able to define best practices for financial institutions."

A New Bill to Regulate AI

On the congressional front, Sen. Mark Warner (D-VA) and Sen. John Kennedy (R-LA) recently introduced a bipartisan bill, the Financial Artificial Intelligence Risk Reduction Act, that would "require the Financial Stability Oversight Council (FSOC) to lead its member agencies in responding to artificial intelligence (AI) manipulation of financial markets," according to the legislation.

More specifically, the bill would:

• Mandate that FSOC coordinate financial regulators’ response to AI threats to the financial system, including “deepfakes.”
• Require FSOC to produce a report identifying gaps in existing regulations and make specific recommendations to address those gaps.
• Initiate FSOC proceedings to ensure its member agencies implement these changes once Congress has reviewed and commented on the report.
• Strengthen penalties when actors use AI to violate Securities and Exchange Commission (SEC) rules.
• Modernize the "intent standard" to hold AI deployers accountable when their AI violates SEC rules, and triple the fines and penalties for using AI in fraud, market manipulation, and other violations of SEC rules.

“AI is moving quickly, and our laws should do the same to prevent AI manipulation from rattling our financial markets. Our bill would help ensure that AI threats do not risk Americans’ investments and retirement dreams,” Kennedy said.

“AI has tremendous potential but also enormous disruptive power across various fields and industries—perhaps none more so than our financial markets. The time to address those vulnerabilities is now,” Warner added.

Warner reinforced those points to Yellen at the Senate hearing.

“Boy oh boy, if there were ever a case to look across all the regulatory entities within the financial sector, a problem like AI, I think, would be perfectly suited for FSOC,” Warner noted. “Congress should move quickly to ensure that we’ve got a comprehensive approach for both the upside and downside of AI in the financial sector.”

The legislation “is at least a good starting point for giving you the tools and, frankly, us having the guardrails,” he added.
