Misperceptions About AI: A Response

Posted by Hamish Macalister

The Financial Times reported last week on some AI system tests undertaken by EY, one of the Big Four accounting firms. EY tested the system on 10 audit clients from earlier in the year and found suspicious activity in two. In both cases, the clients involved confirmed the existence of fraud.

Considering what our product does, we are not surprised by the efficacy of AI in these tests: it is exactly what one should expect from a well-trained system. We are, however, somewhat taken aback by the response to the news from other auditors and from readers.

While EY sees potential in AI-augmented auditing as a “co-pilot” for auditors (well done EY!), other audit firms are more sceptical. According to the article, some are worried about data privacy, while others worry that there is too much diversity in fraud relative to the amount of data available to train systems properly.  

Privacy and data concerns

In our experience, concerns about privacy are misplaced. The records of thousands of public companies contain more than enough data to train an AI system thoroughly.

There is also more than enough diversity in public data to spot unique events. While every instance of fraud is unique, all fraud serves one of the same few purposes: to inflate revenue, shrink expenses, hide liabilities, or simply embezzle funds. Whenever a firm does one of these things, tell-tale evidence is always left behind, albeit in different combinations.

Imagine you arrive home and the front door is ajar. Would you feel suspicious? What if an SUV were backed into the driveway and hitched to your caravan? Or the lights were on upstairs?


Or perhaps a Chinook helicopter is hovering over the backyard? I bet that has never happened before.

Yet, if you saw it, I bet you would be suspicious. Let’s hope they aren’t lifting your new jacuzzi! All of these telltales are obvious. 


[Image: AI-generated picture of a Chinook helicopter hovering over a suburban neighborhood.]

To a well-trained machine, so is any of the hundreds of possible flags pointing to accounting fraud, especially when several appear in combination, such as abnormal cash generation alongside abnormal asset purchases.
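To make the "flags in combination" idea concrete, here is a minimal, purely illustrative sketch. The feature names, thresholds, and numbers are all invented for this example; a real system would learn such patterns from millions of labelled filings rather than hard-code them.

```python
# Toy sketch: raise red flags on one filing by comparing a few
# accounting ratios against simple rules and peer medians.
# All names and thresholds here are hypothetical illustrations.

def fraud_flags(filing, peers):
    """Return the red flags raised by one filing.

    `filing` and `peers` are plain dicts of accounting ratios.
    """
    def peer_median(key):
        vals = sorted(p[key] for p in peers)
        return vals[len(vals) // 2]

    flags = []
    # Cash conversion: reported earnings far ahead of operating cash flow.
    if filing["earnings"] > 2 * max(filing["operating_cash_flow"], 1):
        flags.append("earnings_outrun_cash")
    # Abnormal asset purchases relative to peers (capex / revenue).
    if filing["capex_to_revenue"] > 3 * peer_median("capex_to_revenue"):
        flags.append("abnormal_asset_purchases")
    # Receivables growing much faster than revenue.
    if filing["receivables_growth"] > filing["revenue_growth"] + 0.25:
        flags.append("receivables_outrun_revenue")
    return flags

peers = [
    {"capex_to_revenue": 0.05},
    {"capex_to_revenue": 0.07},
    {"capex_to_revenue": 0.06},
]
suspect = {
    "earnings": 120, "operating_cash_flow": 30,
    "capex_to_revenue": 0.30,
    "receivables_growth": 0.60, "revenue_growth": 0.10,
}
print(fraud_flags(suspect, peers))
# → ['earnings_outrun_cash', 'abnormal_asset_purchases', 'receivables_outrun_revenue']
```

No single flag here proves anything; it is the combination of three anomalies in one filing that a trained system treats as loud evidence.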

To a machine trained on millions of data points, even subtler telltales are also obvious. They may escape the human eye, but they make more noise than a Chinook to an AI system.

Secret CFO

Most of the readers' comments on the FT article display little knowledge of AI. However, there is one I can't let pass, from "Secret CFO," further down the comments section.

According to Secret CFO:  “I believe that a well-trained AI system can stop some typical accounting frauds. However there is a big catch: These are likely to be the relatively small-fry direct personal enrichment frauds perpetrated by lowish to mid-level people, for example creating one or two fake suppliers (and making payments to them), large expense-fiddling, etc.” 

The CFO continues: “However the Enrons, Wirecard, Patisserie Valerie, etc. are not going to get detected in this way as each fraud is somewhat unique and includes thousands of accounting entries made by many people… In each case the issue is not so much that accounting entries were made technically improperly but that there was no substance behind them. Much harder to detect.”

Yes, they are harder to detect. But a sophisticated AI system can still spot them. See the following links for how well an AI system anticipated fraud at Enron, Wirecard, and Patisserie Valerie.