This past Tuesday was Day 1 of the PracticalESG Virtual Event Series, and perhaps the highlight of the day was a lively panel on AI – what it is and how it might be applied in ESG. Panelist Tyson McDowell of Greatscale Ventures talked about the difficulty of auditing both the data on which AI is trained and the AI algorithm itself. Others have identified the same risk. This piece in CFO Dive last week discussed concerns coming out of a Center for Audit Quality (CAQ) survey that covered the use of AI in accounting. Among the 12 audit risks identified from the use of artificial intelligence in financial reporting:
“Auditors for companies that use generative AI often confront a ‘black box’ challenge when they can neither interpret nor explain how the technology generates information, CAQ said. The problem grows when ‘financial reporting processes and ICFR [internal control over financial reporting] become more sophisticated and outputs from the technology are unable to be independently replicated.’”
This is a significant problem in any context, but ESG, social and climate risks already suffer from a lack of clarity. Producing or relying on ESG information/data that can’t be clearly explained and verified is a big risk. Other generative AI hazards identified in the CAQ survey include:
- Governance – the failure to identify and manage AI applications throughout a company;
- Regulation – use of generative AI in ways that violate regulations, laws or contracts;
- Skills – employees lack the knowledge to oversee or use generative AI effectively and safely;
- Fraud – management, employees or third parties use generative AI to commit or conceal crimes;
- Data privacy – confidential data is erroneously entered into a generative AI application;
- Security – generative AI is vulnerable to cyberattacks, the intentional insertion of flawed data, or deliberate efforts to prompt bogus conclusions from the applications;
- Flawed selection or design – the choice of a generative AI application that does not achieve the desired objective;
- Error-prone foundation model – the company adopts an unreliable large language model that generates inaccuracies or biased information;
- Flawed training – faulty training of the generative AI model generates repeated output errors;
- Weak performance – due to inadequate testing, the generative AI application “hallucinates,” or provides incomplete, inaccurate, unreliable or irrelevant information;
- Defective prompts – employees fail to pose accurate questions to generative AI, yielding unintended or irrelevant information;
- Inadequate monitoring – after deploying generative AI, companies fail to closely track output to ensure the technology is functioning as intended.
These risks make auditing/assurance of AI-generated data, information and disclosures a challenge, to put it mildly. They are yet another thing to consider when evaluating when/how to use AI in ESG.