UK Proposes AI Regulatory Guidelines With CFOs Playing a Big Role

Corporate finance needs to “build trust” with artificial intelligence.

As companies invest heavily in artificial intelligence, a growing number of finance industry think tanks are weighing in on how firms should use AI.

The latest is the UK-based Association of Chartered Certified Accountants. It released its take on AI regulatory guidance via a white paper titled “A Pro-Innovation Approach to AI Regulation.”

The ACCA wrote the report with the global professional services firm EY.

In it, the ACCA said the UK government should work “to secure international co-operation and interoperability while ensuring that on the domestic front, there is a high level of joined-up thinking across sectors and stakeholders (government, business, regulators and civil society) to avoid duplication” between different regulators.

The report also calls for CFOs to take “immediate steps” to leverage AI while laying the groundwork for its “responsible” use. The ACCA offered this specific guidance in the white paper:

Investing in AI Literacy for Finance Teams: CFOs must prioritize education and training to enable finance professionals to evaluate AI outputs critically, communicate effectively with stakeholders, and make well-informed decisions driven by AI insights.
Fostering Cross-Functional Collaboration: Finance teams should actively engage with IT, data science, legal, and risk management departments to establish cohesive AI governance.
Developing Robust AI Governance Frameworks: Starting with critical financial applications, CFOs need to spearhead efforts to establish clear AI policies, oversight mechanisms, and best practices within their organizations.

CFOs also need to foster trust and stability in how AI is deployed.

“Introducing AI is about building trust in both the systems and the people operating them,” said Alistair Brisbourne, ACCA’s head of technology research. “CFOs must focus on upskilling teams, implementing effective governance, and fostering a culture of cross-functional collaboration.”

Mitigating Risk

The ACCA believes companies risk becoming “overdependent” on AI, an issue both governing bodies and companies need to address.

“As AI capabilities advance, there is a risk of finance teams becoming overly reliant on these systems, leading to a decline in human judgment and oversight,” the paper stated. “CFOs must be vigilant in striking the right balance between leveraging AI and maintaining crucial human involvement in financial processes.”

The report pointed in particular to several potential roadblocks CFOs face in dealing with AI-related fallout.

An “overreliance” issue. AI models aren’t immune from “inherent biases or limitations,” the ACCA points out. “An overreliance on AI outputs could lead finance professionals to abandon the critical thinking and professional skepticism essential for effective financial reporting and auditing,” the paper states.

A “knowledge” issue. Overreliance on AI also raises the risk of eroding institutional knowledge within finance teams. “Human expertise, experience, and qualitative insights are invaluable assets that an excessive focus on AI-driven automation should not overshadow,” the ACCA adds.

An “accountability” issue. As AI takes on a bigger role, especially in decision-making, it becomes less clear who is accountable for corporate finance decisions. “Clear lines of responsibility and human oversight must be maintained to ensure proper governance and adherence to reporting standards,” the report notes.

Action Steps

The ACCA recommends that CFOs take the following action steps to acknowledge and account for these issues (and likely many more as AI expands in the workplace):

1. Maintain Human-in-the-Loop Processes: Establish protocols that require human review and validation of critical AI outputs, particularly in risk assessment, fraud detection, and financial forecasting (a sketch of what such a review gate might look like follows this list).
2. Continuous Training and Development: Invest in ongoing training programs to ensure finance teams understand the capabilities, limitations, and potential biases of AI systems, and can effectively interpret and challenge AI-generated insights.
3. Robust Documentation and Explainability: Implement robust documentation practices that clearly outline the logic and assumptions behind AI models used in financial processes. Explainable AI techniques can help finance teams understand how decisions are being made, enabling more effective oversight.
4. Cross-Functional Governance: Collaborate with IT, data science, and risk management teams to establish clear governance frameworks that define roles, responsibilities, and accountability measures for AI use in finance.
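
The report frames these steps as process recommendations rather than code, but the first and third lend themselves to a brief illustration. The Python sketch below shows one way a human-in-the-loop review gate with an audit trail might work; the names and thresholds (AIForecast, requires_human_review, a 0.9 confidence floor, a $1 million materiality line) are entirely hypothetical and not drawn from the ACCA/EY white paper. Outputs that fall below the confidence floor or cross the materiality line are routed to a reviewer, and every decision is logged with the model version and assumptions behind it.

```python
# Minimal sketch of a human-in-the-loop review gate with an audit trail.
# Illustrative only: the class names, thresholds, and fields below are
# hypothetical, not drawn from the ACCA/EY white paper.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIForecast:
    """An AI-generated figure plus the metadata oversight depends on."""
    metric: str               # e.g., "Q3 revenue"
    value: float
    model_confidence: float   # 0.0-1.0, as reported by the model
    model_version: str
    assumptions: list[str] = field(default_factory=list)


def requires_human_review(forecast: AIForecast,
                          min_confidence: float = 0.9,
                          materiality_threshold: float = 1_000_000.0) -> bool:
    """Route low-confidence or material outputs to a person before use."""
    return (forecast.model_confidence < min_confidence
            or abs(forecast.value) >= materiality_threshold)


def log_decision(forecast: AIForecast, reviewer: str | None, approved: bool) -> dict:
    """Build an audit-trail record tying the output to a model and a reviewer."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metric": forecast.metric,
        "value": forecast.value,
        "model_version": forecast.model_version,
        "assumptions": forecast.assumptions,
        "reviewer": reviewer,   # None means the output cleared the gate unreviewed
        "approved": approved,
    }


# Usage: a material, lower-confidence forecast is held for human sign-off.
forecast = AIForecast("Q3 revenue", 4_200_000.0, 0.82, "forecaster-v2",
                      assumptions=["stable FX rates", "no new product launches"])
if requires_human_review(forecast):
    record = log_decision(forecast, reviewer="jane.doe", approved=True)
else:
    record = log_decision(forecast, reviewer=None, approved=True)
print(record)
```

Recording the model version and assumptions alongside the reviewer is the detail that makes the documentation and explainability step possible later, when someone asks why a figure was approved.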

Adopting these measures gives CFOs a chance to step back, examine how AI is actually being used inside their companies, and begin working with policymakers to make AI accountable and limit its most harmful tendencies as digital technology efforts ramp up.

“AI features strongly in the public consciousness due to its potential to introduce new opportunities and previously unseen risks,” said Helen Brand OBE, chief executive of ACCA. “We support multi-stakeholder feedback and the desire to build bridges internationally.”

“While more prescriptive guidance may be required in some areas to clarify accountability, we agree with a flexible approach to managing the fast-paced development of AI,” Brand added.

