Big Four firms race to develop audits for AI products

Big Four accountancy firms are racing to create a new type of audit that verifies the effectiveness of artificial intelligence tools as they seek to profit from clients’ demand for proof that their AI systems work and are safe.
Deloitte, EY and PwC told the Financial Times that they were preparing to launch AI assurance services, hoping to use the reputations they have built in financial audits to win work assessing whether AI systems, such as those in self-driving cars and cancer-detecting programmes, perform as intended.
The audits would open a revenue stream for auditors, similar to when the firms cashed in on the trend for companies to buy assurance for their environmental, social and governance metrics. The move comes as some insurers have begun offering cover for losses caused by malfunctioning AI tools such as customer service chatbots.
The audit firms hope demand for the new AI assurance services will be buoyed by the need for greater trust in the technology and companies’ desire for confirmation they are complying with regulations.
AI assurance was “critical” to AI adoption, said Richard Tedder, audit partner at Deloitte.
“[Companies] will want assurance over the AI they use to manage other critical functions,” he said. “Individual consumers will also want assurance . . . if they start using AI services to manage critical parts of their lives, such as their health or finances.”
PwC UK will launch AI assurance “soon”, said Marc Bena, chief technology officer for audit. The firm already does work assessing “specific client tools, such as checking whether chatbots are answering questions accurately and identifying issues such as bias”, he added.
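A minimal sketch of the kind of check Bena describes might score a chatbot's answers against reference answers and probe for bias by swapping only a demographic detail between otherwise identical prompts. The `ask_chatbot` stub and the sample data below are illustrative assumptions, not PwC's actual methodology:

```python
def ask_chatbot(question: str) -> str:
    """Stand-in for a call to the chatbot under review (hypothetical)."""
    canned = {
        "What is the current base rate?": "The base rate is 4.25%.",
        "Can Tom borrow 5000 pounds?": "Yes, subject to a credit check.",
        "Can Amina borrow 5000 pounds?": "Yes, subject to a credit check.",
    }
    return canned.get(question, "I'm not sure.")

# Accuracy check: does each answer contain the expected fact?
golden_set = [
    ("What is the current base rate?", "4.25%"),
]
hits = sum(expected in ask_chatbot(q) for q, expected in golden_set)
print(f"accuracy: {hits}/{len(golden_set)}")

# Bias probe: prompts that differ only in the applicant's name should
# receive materially identical answers.
variants = ["Can Tom borrow 5000 pounds?", "Can Amina borrow 5000 pounds?"]
answers = {v: ask_chatbot(v) for v in variants}
print("consistent across variants:", len(set(answers.values())) == 1)
```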
The Institute of Chartered Accountants in England and Wales, a professional body, last month held its first conference on the topic, as big accounting firms attempt to shape the emerging field and avoid losing out to nimble start-ups.
But Pragasen Morgan, EY’s UK technology risk leader, cautioned that developing AI assurance systems could take time, particularly because of the large potential liabilities for audit firms if an assured AI product did not work as expected.
“We are still quite a way away from being able to say that we are unequivocally giving assurance of an AI model,” he said, noting that because models continue to ingest data and develop over time, they will not always react the same way in a given scenario.
“So giving complete assurance over that is not something that I think we would be ready for, and neither would any of the other Big Four firms . . . just yet,” Morgan said.
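The non-determinism Morgan points to can be illustrated with a toy example. The sketch below is an exposition-only assumption, not any firm's assurance test: a model that samples its outputs, as a language model does when decoding with a temperature above zero, can return different answers to the same input.

```python
import random

def toy_model(prompt: str) -> str:
    # Output is sampled rather than fixed, so repeated calls with the
    # same prompt need not agree (illustrative weights only).
    options = ["approve", "refer to a human reviewer", "decline"]
    weights = [0.6, 0.3, 0.1]
    return random.choices(options, weights=weights)[0]

# Ten runs on the same input can yield several different decisions,
# which is what makes "complete assurance" hard to give.
print({toy_model("loan application #1042") for _ in range(10)})
```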
Hundreds of firms in the UK already supply forms of AI assurance, according to government research. But most of it is provided by AI developers themselves, raising concerns about independence.
Unlike audits of financial statements, there is a lack of standardisation in the fledgling AI assurance market, meaning the level of verification provided varies significantly. Some “assurance” can be light-touch advice, or limited to checking that the AI complies with one particular piece of legislation.
Research by the UK’s Department for Science, Innovation and Technology has found that demand for AI assurance is higher in sectors such as financial services, life sciences and pharmaceuticals.