
Last week, I read a disturbing article about how much Google’s general AI platform Gemini relies on social media. There was also an interesting announcement from Perplexity that it plans to pay media companies when their articles are used by Perplexity’s web browser. These got me wondering about the data sources and weightings Gemini and ChatGPT use in responding to sustainability queries. The good news is that neither platform claims to give much credibility to social media, but how they treat other sources differs meaningfully.

I’ll clarify that I didn’t ask about the data on which the platforms were trained to establish “patterns and relationships learned from the vast amount of text and data.” For instance, Gemini explained: “While my training data includes a portion of content from social media platforms, this content is generally not treated as a source of factual authority. Instead, it’s used to learn the nuances of human language, slang, informal communication styles, and cultural context. For example, social media data helps me understand how people use language to express opinions, tell stories, and engage in dialogue.”

My questions centered on “new” information used for responding to inquiries after training (pattern recognition) is complete. Here are the highlights:

  • ChatGPT was transparent and specific in responding to questions, even providing weightings and offering to create a nice graphic. Gemini was less helpful, refusing to provide “specific percentage weightings of my data sources, as this is proprietary information” or a graphic.
  • ChatGPT’s weightings reflect “how much each type of source tends to influence my actual answers,” while Gemini uses a 4-tier model to prioritize sources, assigning what amounts to a “trust score” to each data category (a rough sketch comparing the two schemes follows this list):
    • Tier 1: High Trust & Authority (Approx. Score: 90-100)
    • Tier 2: Vetted & Vetted-Equivalent (Approx. Score: 70-89)
    • Tier 3: Publicly Available Information (Approx. Score: 40-69)
    • Tier 4: Unverified & Contextual (Approx. Score: 0-39)

Tier 3 includes “general websites, online encyclopedias, and blog posts from unverified sources,” and Tier 4 includes “Social media posts, forum discussions, and user-generated content without external verification.” In contrast, ChatGPT gives similar sources a weighting of less than 5%, claims to use that information to gauge “professional sentiment (e.g., sustainability officers reacting to CSRD),” and says those sources are “always cross-checked against a higher-tier source.”

  • ChatGPT emphasized the importance of regulatory, legal and financial reporting information, giving them a combined weight of roughly 55%. Gemini made no mention of those, giving its highest Tier score to “Peer-reviewed scientific papers, official reports from intergovernmental bodies (e.g., IPCC, UNEP), and data from national science agencies (e.g., NASA, NOAA).” Because those most-trusted sources include US federal science agencies, this suggests that Gemini’s responses may be more swayed by US political weaponization of climate, ESG and DEI.
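
For readers who want the contrast in one place, below is a minimal sketch (written in Python purely as a note-taking device) of the two schemes as each chatbot described them to me. The figures are only those reported above; the dictionary names, the “Other sources: None” placeholder and the midpoint helper are my own illustrative assumptions, not anything either platform actually exposes.

```python
# Illustrative sketch only: the numbers are those each chatbot reported in the
# queries described above; the structure and category names are assumptions
# made for comparison, not either platform's actual internals.

# Gemini described a rank-based model: each source category falls into a tier
# with an approximate "trust score" range (higher = more trusted).
GEMINI_TIERS = {
    "Tier 1: High Trust & Authority": (90, 100),
    "Tier 2: Vetted & Vetted-Equivalent": (70, 89),
    "Tier 3: Publicly Available Information": (40, 69),
    "Tier 4: Unverified & Contextual": (0, 39),
}

# ChatGPT described influence weights, i.e., how much each type of source
# tends to shape an answer. Only figures mentioned in this post are filled in;
# None marks categories it did not quantify for me.
CHATGPT_WEIGHTS = {
    "Regulatory, legal & financial reporting": 0.55,  # combined, as reported
    "Social media / user-generated content": 0.05,    # "less than 5%"
    "Other sources": None,                            # not specified here
}

def gemini_trust_midpoint(tier: str) -> float:
    """Return the midpoint of a Gemini tier's approximate score range."""
    low, high = GEMINI_TIERS[tier]
    return (low + high) / 2

if __name__ == "__main__":
    for tier in GEMINI_TIERS:
        print(f"{tier}: approximate trust-score midpoint {gemini_trust_midpoint(tier):.0f}")
    reported = {k: v for k, v in CHATGPT_WEIGHTS.items() if v is not None}
    print("ChatGPT influence weights mentioned in this post:", reported)
```

The point of the sketch is the structural difference: Gemini’s tiers rank sources ordinally by trust, while ChatGPT’s weights describe proportional influence on an answer, so the two schemes can’t be compared directly without an assumption like the midpoint used here.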

One might think that ChatGPT is the right choice for sustainability-related queries and research, but not necessarily. My experience testing ChatGPT (beginning last year and continuing through a comparison of its results with my own manual research on public company reporting of sustainability value) has been less than stellar. ChatGPT’s data hierarchy may be better, but that isn’t the whole story.

It’s still vital to check and confirm the results from general AI queries.

Members can learn more about AI in sustainability/ESG here.

If you’re not already a member, sign up now and take advantage of our no-risk “100-Day Promise” – during the first 100 days as an activated member, you may cancel for any reason and receive a full refund. But it will probably pay for itself before then.

Members also save hours of research and reading time each week by using our filtered and curated library of ESG/sustainability resources covering over 100 sustainability subject areas – updated daily with practical and credible information compiled without the use of AI.

Are you a client of one of our Partners – SourceIntelligence, TRC, Kumi, Ecolumix, Elm Consulting Group International or Impakt IQ? Contact them for exclusive pricing packages for PracticalESG.


