
Yes, the headline is a throwback to the golden era of TV. Perhaps you also remember that at the beginning of this year I wrote a series of blogs on using ChatGPT in various ESG applications. One installment focused on its dreadful ability to create and exacerbate fraud, another revealed significant shortcomings and limitations in relying on the technology to produce corporate ESG/sustainability reports, and one drilled into how it produced information on environmental regulations (that was actually a two-parter).

On LinkedIn, that post generated some discussion – mostly supporting the use of AI/ChatGPT in ESG. To be fair, most commenters seemed focused on the potential value of using AI in the future. Let’s be honest, though – there is a real temptation to use it now. However, some news hit the fan last week that shows just how dangerous (professionally) that could be.

A number of media outlets picked up the story, but let’s look at this one from CBS News. Steven A. Schwartz of the law firm Levidow, Levidow & Oberman submitted a brief to a New York court that included legal research done by ChatGPT in a lawsuit filed on behalf of his client, Roberto Mata, against the airline Avianca. Not a great idea:

“…the AI invented court cases that didn’t exist, and asserted that they were real.

The fabrications were revealed when Avianca’s lawyers approached the case’s judge, Kevin Castel of the Southern District of New York, saying they couldn’t locate the cases cited in Mata’s lawyers’ brief in legal databases.

The made-up decisions included cases titled Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines…

Schwartz responded in an affidavit last week, saying he had ‘consulted’ ChatGPT to ‘supplement’ his legal research, and that the AI tool was ‘a source that has revealed itself to be unreliable.’ He added that it was the first time he’d used ChatGPT for work and ‘therefore was unaware of the possibility that its content could be false.’ 

He said he even pressed the AI to confirm that the cases it cited were real. ChatGPT confirmed it was. Schwartz then asked the AI for its source. 

ChatGPT’s response? ‘I apologize for the confusion earlier,’ it said.”

Piling on the criticism of the technology, The Washington Post published an article earlier this week about ChatGPT’s propensity to “hallucinate”:

“’Language models are trained to predict the next word,’ said Yilun Du, a researcher at MIT who was previously a research fellow at OpenAI… ‘They are not trained to tell people they don’t know what they’re doing.’ The result is bots that act like precocious people-pleasers, making up answers instead of admitting they simply don’t know.”

It is unclear what types or levels of data validation or other controls are embedded in ChatGPT’s algorithm. Given its emphasis on gathering and processing existing data, governance and validation probably haven’t been given due consideration. Hopefully that will change, and in the future we can all breathe a little easier about relying on ESG information from AI. The tool should also be able to reliably confirm its sources when prompted, as Schwartz reportedly tried to get it to do. The Washington Post piece touched on a potential starting point for using AI to validate itself:

“The researchers proposed using different chatbots to produce multiple answers to the same question and then letting them debate each other until one answer won out. The researchers found using this ‘society of minds’ method made them more factual.”
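The “society of minds” method the quote describes is essentially a multi-agent debate loop. For readers who want a concrete picture, here is a minimal, hypothetical Python sketch of that idea; the `ask_model()` function is a placeholder for whatever chatbot API you have access to (it is not a real library call), the prompt wording is invented for illustration, and the consensus step is deliberately crude.

```python
from collections import Counter


def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a chatbot API call; not a real library function."""
    raise NotImplementedError("Replace with a call to your chosen chatbot API.")


def society_of_minds(question: str, models: list[str], rounds: int = 2) -> str:
    # Round 0: each "agent" (model) answers the question independently.
    answers = {m: ask_model(m, question) for m in models}

    # Debate rounds: each agent sees the others' answers and may revise its own.
    for _ in range(rounds):
        revised = {}
        for m in models:
            others = "\n".join(a for name, a in answers.items() if name != m)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n"
                "Considering their reasoning, give your final answer."
            )
            revised[m] = ask_model(m, prompt)
        answers = revised

    # Crude consensus: return the most common final answer.
    return Counter(answers.values()).most_common(1)[0][0]
```

The point of the debate step is that disagreement between agents can surface fabrications that a single, confident model would simply assert – though any “winning” answer would still need the independent human validation discussed below.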

Then there are the warnings about AI as an existential risk to humanity, issued by the very people who invented the technology:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Danger, indeed. For the near future, I think it would be unwise to rely heavily on ChatGPT in an ESG context without independently validating anything it produces (i.e., conducting research outside of ChatGPT).

