Ed. note: This is the final part of a series of articles on our experiences using the AI platform chatGPT to write and answer questions about ESG. The first installment focused on seeing if the AI app would copy my writing style in producing a blog article. For the second installment, I asked the bot to write a corporate ESG report. I also wrote a two-parter about how chatGPT managed a regulatory inquiry on EPA/state environmental mandates and the SEC climate disclosure proposal.
I’ve completed numerous experiments applying chatGPT to ESG matters, only a few of which I wrote about. The others turned out to be far less interesting than I hoped and provided no meaningful insights. By now, many experts and journalists have done their own experiments to put the bot on the spot – including this very creative piece by GreenBiz’s Joel Makower yesterday. A tip of the Stetson to Joel.
As for my views, I am a little conflicted. Yes, I know the bot is only a couple of months old and will rapidly improve. People are already using it regardless of my thoughts on it – and they will continue to do so. It far exceeded my expectations with its overview of the SEC climate proposal, which it was not supposed to know about, but it didn’t do particularly well with a 30-year-old environmental regulation.
Even so, my experience (as well as that of others) clearly shows humans must still intervene to make corrections, seek clarifications/detail and eliminate repetition in chatGPT’s writing – especially for longer narratives. It still answers only the exact question posed to it, and doesn’t generally offer helpful context implied in a user’s question/command, or supporting information like regulatory citations. I am also not confident about the origin of its information. Criticize Google all you want, but there the source of information is always clear – although you are still on your own in assessing its validity/credibility.
In my final challenge to chatGPT, I tested its humor and comprehension of irony by asking when Skynet will become self-aware. It responded:
“Skynet is a fictional artificial intelligence system that features in the Terminator franchise. The movies and TV shows in which Skynet appears depict it as becoming self-aware and initiating a nuclear war, leading to the near extinction of humanity. However, Skynet is not a real AI system, and the idea of a computer or a machine becoming self-aware is a topic of ongoing debate among scientists and researchers in the field of artificial intelligence. Some experts believe that it may be possible for machines to achieve a form of consciousness in the future, but it is not yet clear when or if this will happen. Additionally, the idea of a machine or AI becoming self-aware and taking actions that are harmful for humanity is a science fiction scenario, not a current or near future reality.”
I give this a B for accuracy and an F for its ability to recognize or acknowledge irony.
Will I use chatGPT in the future? Possibly on a limited basis and with the expectation that meaningful editing and fact-checking will be required.
Then there is this – a Time report from last month on the human laborers behind the scenes at chatGPT. According to Time’s article, OpenAI – chatGPT’s owner – contracted with a company to supply laborers in Kenya, Uganda and India to process data, paying them “a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance.” The laborers were required to view and classify images and text from “the darkest recesses of the internet,” leaving some workers to claim they are now mentally scarred.
Maybe one of you can ask chatGPT about that and let me know what response you receive.