
AI’s Impact on Audits

AI has the potential to dramatically impact both the speed and the quality of audits, but auditor judgment is still central.


ChatGPT has taken the world by storm and provoked a wide range of reactions. Some people dismiss the technology as irrelevant because it can hallucinate or because it falls short of the top professionals in a given field. Others view AI as a panacea that will solve every problem. In this article, we'll look at how AI can make an impact on audits today, given its limitations, and what it takes to get the most out of it in the context of an audit. Specifically, we'll cover the tremendous potential for improving both speed and quality, as well as two must-have features for any AI-enabled system to be useful to auditors: source citation and the ability to override the AI's answer.

Improved speed

AI is fast. It often takes significantly longer to formulate your question to ChatGPT than it does to get a lengthy answer back. This speed alone presents a huge opportunity for the profession, especially given the staff shortages firms are facing and the compressed timeline for most audits. Beyond raw speed, AI can also operate in parallel. With sample testing, for example, every sample can be queried simultaneously, dramatically reducing the time spent.
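To make the parallelism concrete, here is a minimal sketch in Python. The `check_sample` function is a hypothetical stand-in for whatever per-sample AI query a system would make; here it just simulates a simple check.

```python
from concurrent.futures import ThreadPoolExecutor

def check_sample(sample: dict) -> dict:
    # Placeholder for an AI query against one piece of evidence.
    # Here we flag any sample above an illustrative threshold.
    return {"id": sample["id"], "ok": sample["amount"] < 10_000}

def check_samples(samples: list[dict]) -> list[dict]:
    # Each sample is independent, so all of them can be queried at once
    # instead of one by one.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(check_sample, samples))

results = check_samples([{"id": 1, "amount": 2_500},
                         {"id": 2, "amount": 12_000}])
```

The point is the structure, not the placeholder logic: because the samples don't depend on each other, total wall-clock time is bounded by the slowest single query rather than the sum of all of them.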

One of the most overlooked use cases is identifying exceptions early. Instead of working through samples and audit tests one by one, all evidence can be examined the moment the client provides it. The AI highlights potential issues, and the associate looks at those flagged items first, so the team can immediately address any problems in the evidence. Without this, the associate often doesn't begin looking at the evidence until days or even weeks after it was originally provided. If there are follow-ups, the client is frustrated because they weren't made aware sooner or, even worse, the client has moved on (another project, vacation, etc.) and is struggling to find the time to respond.

Enhanced quality

Because the AI can analyze so much data at once, it’s pulling information from documents that a human could easily miss or gloss over. Oftentimes when reviewing a document, auditors are looking for the answer to their question, not for contrary information buried in another section that could change the conclusion. AI can identify these critical details and surface that information quickly and reliably. The AI doesn’t get tired during busy season or distracted by a client email and miss a potential issue. In this way, the AI is like spell check: it doesn’t replace the author, but it is useful in improving quality.

But what about auditor judgment?

So far, we've focused on what AI can do better than auditors, but AI does not replace auditors. Ultimately, the client is paying for the auditor's opinion, and that means auditor judgment is still crucial. Consequently, all output from AI should be reviewed and validated by an experienced auditor. This isn't a new concept; it's how audits work today. An intern's work is verified by the associate, whose work, in turn, is reviewed by the senior associate, and so on. Integrating AI into an audit process is like employing an "Automated Intern": the work still requires review, but the increased speed and quality make AI an invaluable tool. Because the output must be reviewed, any AI-enabled audit system must have two important features: it must cite sources, and it must give the auditor the ability to override the answer.

Citing sources

When reviewing the output, it's important for AI systems to cite sources and provide the rationale for the answers they give. This additional information streamlines review and gives the auditor the context they need to decide whether an answer is correct. The AI should also raise a flag when it's unsure of an answer. This is possible because these models operate on probability distributions internally, and those distributions can be used to estimate how confident the model is in the answer it provides. During review, the auditor can use this signal to pay special attention to answers the AI is unsure about. As an example, here's what that additional rationale and citation looks like in UpLink:
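As a rough illustration of how those probabilities can drive a flag, here is a sketch that assumes the model API returns per-token log-probabilities (many do). The geometric-mean scoring and the 0.8 threshold are illustrative choices, not a specific vendor's method.

```python
import math

def answer_confidence(token_logprobs: list[float]) -> float:
    # Geometric mean of the per-token probabilities,
    # computed as exp(mean of the log-probabilities).
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def needs_review_flag(token_logprobs: list[float],
                      threshold: float = 0.8) -> bool:
    # Flag the answer for extra auditor attention when the model's
    # overall confidence falls below an illustrative threshold.
    return answer_confidence(token_logprobs) < threshold

confident = [-0.01, -0.02, -0.05]  # tokens the model was nearly sure of
uncertain = [-0.9, -1.2, -0.7]     # probability mass spread across options
```

With these inputs, the confident answer scores about 0.97 and passes quietly, while the uncertain one scores about 0.39 and gets flagged for review.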

[Screenshot: AI provides additional rationale for an answer]

Override the answer

The second must-have feature is the ability to override the answer generated by the AI. It's important to keep in mind that AI isn't always going to be right. With UpLink, for example, we see accuracy of 90-95% depending on the type of question and document. Ultimately, the auditor is responsible for the conclusion, and they must be able to override any AI-generated answer during their review. Here's what that looks like in UpLink:
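One simple way to support overrides is to record the AI's original answer and the auditor's conclusion side by side, so both survive for later review. This sketch uses illustrative field names, not UpLink's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question: str
    ai_answer: str
    final_answer: str = ""   # empty until an auditor overrides
    overridden_by: str = ""

    def override(self, auditor: str, new_answer: str) -> None:
        # The AI's answer is preserved; the auditor's conclusion
        # takes precedence and the override is attributed.
        self.final_answer = new_answer
        self.overridden_by = auditor

    def conclusion(self) -> str:
        return self.final_answer or self.ai_answer

a = Answer("Does the invoice match the PO?", "Yes")
a.override("jane.doe", "No - quantity mismatch")
```

Keeping the original AI answer alongside the override also gives reviewers and regulators a clear trail of where human judgment changed the outcome.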

[Screenshot: Editing an AI-proposed answer]


In conclusion, AI unlocks tremendous potential for the audit profession, but auditors themselves and their professional judgment are still crucial and will be for the foreseeable future.

