'Spatial Hallucinations': What can go wrong when open satellite data is deployed for ESG scoring?

The risk of hallucinations is a well-known problem in the generative AI space, most commonly associated with large language models. In this blog post I'll show how a similar risk is emerging in the Environmental, Social, and Governance (ESG) area of finance, particularly in scoring methodologies built on open satellite data. This data, while invaluable for monitoring environmental impacts and compliance, can easily be misinterpreted and lead to 'hallucinatory' misrepresentations, potentially fuelling a new wave of 'greenwashing' fraud.
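To make the risk concrete, here is a minimal, purely illustrative sketch of how a naive vegetation check on Sentinel-2-style bands can 'hallucinate' an environmental improvement. The band values, thresholds, and two-date comparison below are my own assumptions for demonstration, not any specific ESG provider's methodology.

```python
# Illustrative sketch only: synthetic reflectance values standing in for
# Sentinel-2 red (B4) and near-infrared (B8) bands. The thresholds and the
# two-date comparison are assumptions for demonstration purposes.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Synthetic 3x3 scene: date 1 (dry season) vs date 2 (wet season).
red_t1 = np.array([[0.20, 0.22, 0.21],
                   [0.19, 0.23, 0.20],
                   [0.21, 0.20, 0.22]])
nir_t1 = np.array([[0.25, 0.27, 0.26],
                   [0.24, 0.28, 0.25],
                   [0.26, 0.25, 0.27]])

red_t2 = red_t1 * 0.7   # wetter, darker soil surface at the second date
nir_t2 = nir_t1 * 1.6   # seasonal leaf flush boosts NIR reflectance

greening = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)

# A naive score that treats any NDVI increase as "environmental improvement"
# reports strong greening here, even though the change is purely seasonal --
# a 'spatial hallucination' when acquisition dates are not properly matched.
print(f"Mean NDVI change: {greening.mean():+.2f}")
print("Naive verdict:", "improvement" if greening.mean() > 0.05 else "no change")
```

The point of the toy example is not the specific index: any remotely sensed indicator compared across mismatched dates, cloud conditions, or sensor configurations can produce an apparent signal that a downstream ESG score then treats as real.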
