Where To Focus on AI-Related Audits

When it comes to artificial intelligence risk audits, regular over rare is the way to go.

C-suite officers are paid big bucks to ensure they’re getting the most from their AI investments, even when that means making tough risk-assessment calls from a place of relative inexperience.

Increasingly, that means expanding audit coverage, the tool corporate decision-makers are using to measure artificial intelligence risk in 2024.

“As organizations increase their use of new AI technology, many internal auditors are looking to expand their coverage in this area,” said Thomas Teravainen, research specialist with the Gartner for Legal, Risk & Compliance practice. “There’s a range of AI-related risks that organizations face from control failures and unreliable outputs to advanced cyberthreats. Half of the top six risks with the greatest increase in audit coverage are AI-related.”

AI Audit Risks Are Skyrocketing in 2024

That’s the takeaway from a new Gartner study on AI risk and audit coverage released in late March.

In it, the Stamford, Conn.-based research and advisory firm surveyed 102 chief audit executives (CAEs) in August 2023 about audit coverage of more than 35 companywide risks.

Of the top six risks with the greatest potential audit coverage increases, three are artificial intelligence issues: AI-enabled cyberthreats, AI control failures, and unreliable outputs from AI models (the other three top risks included strategic change management, diversity equity and inclusion, and organizational culture).

The biggest AI-related audit “risers” from 2023 to 2024 in the study rank as follows:

  • AI-enabled cyberthreats. Audits are up from 10% in 2023 to 52% in 2024.
  • AI control failures. Audits are up from 10% in 2023 to 51% in 2024.
  • Unreliable AI outputs. Audits are up from 14% in 2023 to 42% in 2024.

“Perhaps the most striking finding from this data is the degree to which internal auditors lack confidence in their ability to provide effective oversight on AI risks,” Teravainen noted. “No more than 11% of respondents who rated one of the three top AI-related risks above as very important considered themselves very confident in assuring it.”

The primary problem is that, whether a generative AI application was purchased off the shelf or built inside the company, the risk variables remain high and need to be checked early and often. “[Risks] on data and information security, privacy, IP protection, and copyright infringement, as well as trust and reliability of outputs” remain a threat of “reputational damage or potential legal action,” Gartner said.

With confidence in assuring AI-related company risks ebbing, Gartner said the pivot toward regular risk-assessment audits isn’t a temporary strategy—it’s here to stay.

“With such a broad array of potential risks coming from all over the business, it’s easy to understand why auditors aren’t confident about their ability to apply assurance,” said Teravainen. “However, with CEOs and CFOs rating AI as the technology that will most significantly impact their organizations in the next three years, continued gaps in confidence will undermine chief audit executives’ ability to meet stakeholder expectations.”
