
Not everyone agrees with my anti-AI stance in ESG. Fair enough, but even AI supporters need to recognize limitations and other risks in relying on AI. This memo from Arnold & Porter covers one specific example from what might be viewed as an unusual source:

“On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations arising from their use of artificial intelligence (AI). Training on data reflecting societal biases; data sets that are incomplete, inaccurate, or nonrepresentative; algorithms specifying variables unintentionally correlated with protected characteristics; and other problems can produce discriminatory results.

… because AI use also carries risks of violating fair lending laws and perpetuating disparities in credit transactions, Vice Chair Barr called it ‘critical’ for regulators to update their applications of the Fair Housing Act (FHA) and Equal Credit Opportunity Act (ECOA) to keep pace with these new technologies and prevent new versions of old harms. Violations can result both from disparate treatment (treating credit applicants differently based on a protected characteristic) and disparate impact (apparently neutral practices that produce different results based on a protected characteristic).” 

There are very real questions about AI's validation, governance and underlying data controls, and how those may perpetuate biases and fraud embedded in the technology's data universe. To illustrate gaps in ChatGPT's controls, consider this passage from Tuesday's issue of The Economist about new bombs being developed in Ukraine:

“Some ‘candy shops’ use software to model the killing potential of different shrapnel types and mounting angles relative to the charge, says one soldier in Kyiv with knowledge of their efforts. ChatGPT, an AI language model, is also queried for engineering tips (suggesting that the efforts of OpenAI, ChatGPT’s creator, to prevent these sorts of queries are not working).”

Companies planning to use – or already using – AI for any aspect of ESG data collection or analysis must be aware of the limitations and potential risks of doing so. If you plan on relying on AI in ESG, conduct due diligence on data sources and learn as much as you can about the algorithm’s validation, governance and underlying data controls.


The Editor

Lawrence Heim has been practicing in the field of ESG management for almost 40 years. He began his career as a legal assistant in the Environmental Practice of Vinson & Elkins working for a partner who is nationally recognized and an adjunct professor of environmental law at the University of Texas Law School. He moved into technical environmental consulting with ENSR Consulting & Engineering at the height of environmental regulatory development, working across a range of disciplines. He was one…