This is funny, but not.
Perhaps you remember the blog series I did not long ago about the dangers of using ChatGPT in different aspects of ESG reporting, monitoring and management. Maybe you even recall the surprise ending: “In my final challenge to chatGPT, I tested its humor and comprehension of irony by asking when Skynet will become self aware.” For a quick reminder, Skynet is the fictional computer system in the Terminator movie series that determined humans were a threat to its existence and built Terminator robots to eliminate humans from the planet.
It seems I am not alone in worrying about the threat AI poses to human existence. Bloomberg’s Matt Levine mused about this question in one of his daily emails last week:
“One way to think about environmental, social and governance investing is that it is a way for diversified investors to pay attention to broad systemic risks — climate change, poor governance — in their portfolios. But not all systemic risks, just a somewhat arbitrary list of them. Climate change: an ESG risk. Nuclear war: ehh, maybe an ESG risk, kind of? Malign artificial intelligence running wild and destroying humanity: apparently not an ESG risk.
Nvidia Corp.’s [the company that makes the computer chips on which AI technology is based] stratospheric ascent has lured at least 100 more ESG funds in recent weeks, transforming the company into one of the most popular stocks among asset managers who integrate environmental, social and governance metrics into their investment strategies.
There are now over 1,400 ESG funds directly holding Nvidia, according to data compiled by Bloomberg based on the latest filings. …”
Levine concludes:
“It is funny to imagine some intermediate phase where artificial intelligence has not enslaved or destroyed humanity, but where some AI robots are killing or enslaving some humans. (Or, like, AI just puts a lot of humans out of work and causes social problems?) In that phase I suppose people would still be investing and AI companies would still be investable and profitable (?); some investors would buy their stocks because they make money, while other socially responsible investors would say ‘no, AI is bad for humanity so I am going to stay away from those stocks.’ But right now AI is, for ESG investors, a pure good.”
Some might argue ESG criticism of Nvidia is unjustified. “Hey, it isn’t really fair or appropriate to blame Nvidia for how people use its products – the company is just filling a market need and can’t really control what customers do with those products.” Or maybe: “Nvidia doesn’t know what the future looks like. I mean, there are some good applications for its products, so it’s okay to invest in them.” And I might say, “Yeah, okay. Some fair points.”
But what if we replaced “Nvidia” in those sentences with “Exxon” or “Peabody Energy”: would those arguments see the same support? Probably not, yet conceptually the same fundamental question applies: how third parties use a company’s products, and the long-term impact of that use on human existence.
Investors and consumers face tough choices in defining the boundaries of ESG for their own purposes. Unfortunately, companies sit smack-dab in the middle of that, expected both to identify the multitude of stakeholder ESG concerns and to respond to them. Doing this takes internal commitment and cooperation, business and stakeholder guidance, and competent staffing and tools. It also takes oversight and monitoring to make sure efforts are valid, credible and on-point. PracticalESG.com was developed to help companies and professionals focus on pragmatic and meaningful solutions to these challenges, filtering out the fluff, hype and the purely theoretical. We help folks think about what ESG means in their companies.
For now, though, I’m going to go back and watch the entire Terminator series to bone up on how to kill those robots.