Large language models have been around for years but have recently acquired celebrity status with the launch of ChatGPT. We look at the use of artificial intelligence in impact investing from three different perspectives.
- AI is already at work in many aspects of impact investing.
- However, it is not a ‘plug and play’ solution. “AI is in essence a neutral tool. In the right hands it can deliver powerful outcomes, in the wrong hands it can reinforce inequalities.”
- Hallucination – AI’s worrying ability to simply make up answers to questions – is an issue at present, but hopefully can be overcome.
- The long-term potential is significant, both for impact investors as they seek to combat greenwashing and for the companies they invest in as they use it to improve their impact.
The think-tank’s perspective
Planet Tracker is a non-profit financial think tank with a mission to “create significant and irreversible transformation of global financial activities by 2030”. It used one subset of AI, natural language processing (NLP), to create its report ‘Exposing Plastic Risk’, analysing 8,245 documents and transcripts to interpret how plastic-related companies view risk in their industry.
John Willis, Planet Tracker’s director of research, cautions: “Our experience… is that considerable work needs to be undertaken to ‘train’ the algorithm to interpret the text correctly. Manual checking of NLP outputs was needed to refine the accuracy of the algorithm. We would caution against anyone thinking that this is a plug-and-play solution.”
He says he would be “surprised if asset managers were not testing AI”, but stresses “this will need to be rigorous as regulators are unlikely to provide exceptions if AI generated data is misleading”.
Nevertheless, he does believe that “impact investors may be able to make comparisons between different investment strategies more easily or identify best practices”. He adds: “AI more generally could be used for other reasons such as testing datasets to see if they can provide insights into fund outperformance. This could involve deep learning and the use of multi-layered neural networks.”
In one example, the idea is that “an AI avatar can quickly access vast amounts of research and data – faster than a human – and provide an accurate answer”. Willis thinks there will soon be “a few leaders [amongst investment managers] which will jealously guard their AI edge”.
He adds: “In terms of greenwashing, well-tuned NLP algorithms will allow investors and civil society to cross-check documents for inconsistencies. For example, a corporate website may suggest that it is a leader in (say) sustainability, but an examination of regulatory documents reveals little is being progressed on this front [by that company].”
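The cross-checking Willis describes can be illustrated with a toy sketch. Production NLP systems use trained embedding models rather than raw word counts, but the idea is the same: score how well a company’s public claims align with what its regulatory filings actually say, and flag low-overlap pairs for human review. The text snippets and threshold below are invented for illustration.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Lower-case, tokenise, and count terms (toy bag-of-words model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-count vectors, in [0, 1]."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical example texts: a marketing claim vs. a regulatory disclosure.
website_claim = "We are an industry leader in sustainability and net zero emissions."
filing_text = "The company reports no material progress on emissions reduction targets."

score = cosine_similarity(term_vector(website_claim), term_vector(filing_text))

# A low score is not proof of greenwashing, only a prompt for closer inspection.
if score < 0.3:
    print(f"flag for review: claim/filing similarity only {score:.2f}")
```

A real pipeline would replace the bag-of-words vectors with sentence embeddings (the two texts above share almost no vocabulary, yet an embedding model would still detect that both concern emissions performance), which is precisely the “considerable work” in tuning that Willis cautions about.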
The ‘tech bros’ view
This is exactly what Permutable AI seeks to do. CEO Wilson Chan tells us: “At the moment investors don’t have a full grasp of a company’s ecosystem and whether what they are actually doing is the same as what they say they are doing. We are aiming for a much more robust process which will eliminate greenwashing.”
What’s under the hood? Permutable AI’s young data scientist Adam Kirchel explains: “In our case we are using large language models, and most particularly an encoder called BERT to enable a much more detailed classification process and significantly improved accuracy when it comes to measuring corporate commitments to sustainability and social and environmental impact.”
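To make the classification step concrete, here is a minimal stand-in sketch. Permutable’s actual taxonomy and model are not described in detail, so the labels and keywords below are invented; where this toy version matches keywords, a system like the one Kirchel describes would embed each sentence with a fine-tuned BERT encoder and run a classification head over the embedding.

```python
import re

# Hypothetical label set -- Permutable's real taxonomy is not public here.
LABELS = {
    "climate": {"emissions", "carbon", "climate", "renewable"},
    "social": {"workers", "community", "diversity", "labour"},
    "governance": {"board", "audit", "disclosure", "compliance"},
}

def classify(sentence):
    """Stand-in for an encoder-based classifier: score each label by
    keyword overlap and return the best match. A production system would
    use contextual embeddings rather than exact keyword matches."""
    tokens = set(re.findall(r"[a-z]+", sentence.lower()))
    scores = {label: len(tokens & keywords) for label, keywords in LABELS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("The board approved new emissions and carbon targets."))
# -> "climate" (two keyword hits beat governance's one)
```

The gap between this sketch and a real encoder is exactly where the accuracy gains Kirchel mentions come from: BERT scores sentences by meaning in context, so commitments phrased in novel wording are still classified correctly.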
There have been well-publicised issues around ‘hallucination’ from some large language models – AI’s worrying ability to simply make up answers to questions. Permutable’s response: “Our intention is to eliminate this as a risk by ensuring that all of our data is backed up by source links.”
Permutable’s data models cover approximately 27,000 suppliers worldwide, and it is branching out into social media analysis. It cross-references data with reputable data providers, such as the Science Based Targets initiative, and the Green Cross, a charitable organisation seeking to eliminate human slavery globally.
Permutable has been working with the Global Financial Innovation Network (GFIN) with the specific task of eliminating greenwashing, and with the UK government to improve analysis of carbon emissions in supply chains. It is estimated that currently 90% of supply chains are not being properly measured.
Chan says “the most important challenge is to get all of the regulators and industry bodies on the same page. We need to have a single standardised global framework matched with AI if we are to seriously eliminate greenwashing”.
The fund manager’s angle
Daniel Stacey, head of external affairs at impact investment firm LeapFrog, explains: “We do not specifically use AI to prevent impact washing.” However, “in businesses with complex supply chains where it can be hard to track environmental effects, AI can be an effective tool to sense check company data against external sources”.
Alongside Permutable, the Agora platform created by Palantir is another example.
Stacey adds: “We extract data from all our portfolio companies to monitor and track impact, but the complexity of the data hasn’t so far called for the use of AI tools. This will change as the volume and frequency of that data increases, and we are already exploring ways to utilise these tools within our proprietary data asset.”
More importantly, Stacey notes how companies LeapFrog invests in can deploy AI “to pursue impact and improve the cost, relevance and convenience of their products. AI is in essence a neutral tool. In the right hands it can deliver powerful outcomes, in the wrong hands it can reinforce inequalities”.
Stacey argues the potential negative side effects of AI deployment for consumers are reasonably well understood – from biased algorithms to dynamic or surge pricing models that can make products more expensive rather than less. “It’s important for impact investors to be aware of these risks and monitor the outcomes of AI deployments to ensure that positive customer experiences remain the top priority.”
Less well understood are the positive potential uses. “For instance, some of our companies utilise AI to help underwrite loans or insurance where applicants have low or no documentation, by comparing their survey responses to pools of similar customer data and bureau scores. Other companies use natural language processing to provide automated customer service via WhatsApp for low-income customers, who haven’t been able to afford products with customer service in the past.”
For Stacey the future of AI in impact must be positive. “We need to seize the opportunity to lower costs and improve the relevance and convenience of our products, to better serve low-income consumers who are increasingly part of the digital world.”
But alongside the positive story of AI delivering growth and scale, he adds, “there also needs to be guardrails established for the deployment of AI, particularly across vulnerable populations. Many major technology companies have already outlined their own ethical frameworks for AI deployment, but I think impact investors hold themselves to a higher bar and will necessarily evolve their own frameworks”.