Keeping you in-the-know on environmental, social and governance developments

This past Tuesday was Day 1 of the PracticalESG Virtual Event Series, and perhaps the highlight of the day was a lively panel on AI – what it is and how it might be applied in ESG. Panelist Tyson McDowell of Greatscale Ventures talked about the difficulties of auditing both the data on which AI is trained and the AI algorithm itself. Others have identified the same risk. This piece in CFO Dive last week discussed concerns coming out of a Center for Audit Quality (CAQ) survey that covered the use of AI in accounting. Among the 12 audit risks the survey identified from the use of artificial intelligence in financial reporting:

“Auditors for companies that use generative AI often confront a ‘black box’ challenge when they can neither interpret nor explain how the technology generates information, CAQ said. The problem grows when ‘financial reporting processes and ICFR [internal control over financial reporting] become more sophisticated and outputs from the technology are unable to be independently replicated.'”

This is a significant problem in any context, but ESG, social and climate matters already suffer from a lack of clarity. Producing or relying on ESG information/data that can’t be clearly explained and verified is a big risk. Other generative AI hazards identified in the CAQ survey include:

  • Governance – the failure to identify and manage AI applications throughout a company;
  • Regulation – use of generative AI in ways that violate regulations, laws or contracts;
  • Skills – employees lack the knowledge to oversee or use generative AI effectively and safely;
  • Fraud – management, employees or third parties use generative AI to commit or conceal crimes;
  • Data privacy – confidential data is erroneously entered into a generative AI application;
  • Security – generative AI is vulnerable to cyberattacks, the intentional insertion of flawed data, or deliberate efforts to prompt bogus conclusions from the applications;
  • Flawed selection or design – the choice of a generative AI application that does not achieve the desired objective;
  • Error-prone foundation model – the company adopts an unreliable large language model that generates inaccuracies or biased information;
  • Flawed training – faulty training of the generative AI model generates repeated output errors;
  • Weak performance – due to inadequate testing, the generative AI application “hallucinates,” or provides incomplete, inaccurate, unreliable or irrelevant information;
  • Defective prompts – employees fail to ask generative AI accurate questions, yielding unintended or irrelevant information;
  • Inadequate monitoring – after deploying generative AI, companies fail to closely track output to ensure the technology is functioning as intended. 

These risks make auditing/assurance of AI-generated data, information and disclosures a challenge, to put it mildly. Yet another thing to consider when evaluating when and how to use AI in ESG.

If you aren’t already subscribed to our complimentary ESG blog, sign up here for daily updates delivered right to you.


The Editor

Lawrence Heim has been practicing in the field of ESG management for almost 40 years. He began his career as a legal assistant in the Environmental Practice of Vinson & Elkins, working for a partner who is nationally recognized and an adjunct professor of environmental law at the University of Texas Law School. He then moved into technical environmental consulting with ENSR Consulting & Engineering at the height of environmental regulatory development, working across a range of disciplines.