I recently attended a session at the 36th Annual Texas Environmental Superconference on using AI for ESG and environmental matters. The session was excellent, although it ended up being less about ESG specifically and more about how to use generative AI generally, and the legal concerns surrounding such use. Readers know I am, well – let’s just say “cautious” – about AI, but several valuable tips came out of this session.
- The original ChatGPT was trained on data only through 2021, so it didn’t know current events, information or data. ChatGPT – and other AI systems – now use what the panel described as “active learning,” continuously integrating new data into their training. On the plus side, they are no longer limited by outdated data; however, ongoing training implies high energy and water use at data centers.
- Prompts (the inquiries or searches you type in) can inadvertently contain confidential information. Examples include client legal matters and proprietary source code, as when developers ask AI to improve existing code or write new code for a proprietary product. Be very careful what you include when asking generative AI for help.
- It is necessary to explicitly state in your prompts “do not make up any information or data”; otherwise, the system will fill in gaps or ambiguities on its own (“hallucinations”). According to one panelist, generative AI algorithms tend to be inherently biased toward creative works/outputs.
- One panelist uses AI for hours every day. His advice for drafting prompts is to not type them out. Instead, turn on your computer’s microphone and record your prompt as if you are explaining the research to a colleague. He sometimes talks into the computer for five minutes to create a single prompt.
- This may be obvious, but the more specific and comprehensive the prompt, the better the results.
- Passwords and paywalls do not necessarily keep AI systems away from data thought to be protected. For example, the major social media companies have agreements in place with ChatGPT’s maker (and, I believe, other AI providers) that give the AI access to the information on their platforms – regardless of whether users have their accounts protected.
- Concern is growing that AI feeds on itself – meaning that AI-generated output is also consumed by AI as training data. This reinforces errors, bias and other problems unless human oversight corrects them.
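The panelists’ prompting tips – state the anti-hallucination instruction explicitly, and be specific and comprehensive – can even be baked into tooling so the guardrail is never forgotten. Below is a minimal, hypothetical sketch of that idea: a helper that prepends an explicit instruction to every prompt before it is sent to whatever AI system you use. The function name and instruction wording are illustrative assumptions, not from the session or any vendor’s API.

```python
# Hypothetical sketch: prepend an explicit anti-hallucination instruction
# to every prompt, per the panelists' advice. Wording is illustrative.

GUARDRAIL = (
    "Do not make up any information or data. "
    "If you are unsure, or a source is ambiguous, say so explicitly."
)

def build_prompt(question: str, context: str = "") -> str:
    """Combine the guardrail, optional background context, and the question
    into one specific, comprehensive prompt."""
    parts = [GUARDRAIL]
    if context:
        parts.append("Context: " + context)
    parts.append("Question: " + question)
    return "\n\n".join(parts)

# Usage: the guardrail always comes first, no matter the question.
prompt = build_prompt("Summarize the new reporting requirements.",
                      context="Mid-size Texas manufacturer, ESG disclosures.")
print(prompt)
```

Trivial as it looks, centralizing the instruction this way means no individual prompt quietly omits it – the same logic applies whether prompts are typed or dictated.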
Interestingly – especially in the context of this conference – there was almost no discussion about the climate and water use impacts of AI.
Sustainability leaders, staff and advisors: generative AI can be a useful tool, but there are still pitfalls. Arguably, many of those pitfalls stem not from the technology itself, but from how humans interact with it and manage its output. With all the hype surrounding AI at the moment, that point can be lost. It is prudent to keep it top of mind when using generative AI for your own purposes.
Our members can learn more about AI and ESG here.
If you aren’t already subscribed to our complimentary ESG blog, sign up here for daily updates delivered right to you.