5 big risks of putting AI to work for the planet and the need for Responsible AI

By Illai Gescheit, Mindset & Planet

I am an optimist. When you are building new technologies, ventures and markets, there is no option but to be relentlessly optimistic and have conviction that the things I and the founders I work with build will be valuable and have a positive impact on society and the planet.

This week I joined a panel at the ClimateImpact Summit moderated by Leila Toplic, Chief Communication & Trust Officer of Carbonfuture. On the panel were two amazing climate founders building inspiring companies and technologies: Raphael Güller, Co-Founder of SWEEP, a software platform that lets customers track, disclose and act on their climate and ESG data, and Jacob Nathan, Founder and CEO of Epoch Biodesign, which is developing enzymatic biorecycling processes that turn plastic waste back into useful chemicals for the manufacturing cycle.

Kudos for a great event, Stephen Murphy, Saoirse Kelders, Kendall Smith, Matvii Kotolyk and Henry Coutinho-Mason.

One of the topics we discussed was the risks of using AI for the planet and the climate. We scratched the surface in a 40-minute discussion, touching on different aspects of using AI responsibly, the impact on energy demand, and the need for transparency.

The panel, and the 1:1 discussions with founders and investors that followed it, made me reflect on the five big risks I believe we need to take into consideration while remaining optimistic about AI + climate.

1. We need to be obsessive about bringing transparency and accuracy to carbon intelligence, data and accounting - With growing pressure from regulators and policy makers, businesses from large corporations to SMBs are now working hard to collect data, track their carbon emissions and footprint, and then act on them. Peter Drucker is one of the management gurus behind the mindset of "If you can't measure it, you can't improve it." Whether we look at the company level or the macro-economic level, if we want our businesses and society to reach Net Zero by 2050, we must put carbon accounting and intelligence software into action.

I personally had the experience of building a startup in carbon intelligence and offsetting 5 years ago. Back then, nobody understood what we wanted to do, and there was very limited pressure on companies to be transparent and take significant action on their carbon emissions. Today, things are very different, and we see amazing traction and so many great products that help corporates and customers with their sustainability and ESG data, from SWEEP, SINAI Technologies (Maria Carolina Fujihara), Emitwise (Mauro Cozzi), IQgo (Yariv Nir), Sylvera (Samuel Gill), Watershed (Clare Reddington), and the list goes on.

So you ask yourself: what is the risk? This abundance of companies creates dependency on the data sets and Artificial Intelligence models they put in place to help large corporates track and reduce emissions. If those models are biased or poorly supervised, and if the data is not accurate and clean, we might find ourselves believing we are on the right path to Net Zero while we are actually off target. Having the wrong measures could put us, as a society and as an industry, in the wrong mindset of indifference or, at the other extreme, undue pressure.

2. We need to be not only responsible builders of AI, but also responsible AI users - I have written in the past about the impact of email and social media on carbon emissions, and don't get me wrong: I'm a heavy user of email and instant messaging, and a very active LinkedIn user as a Top Voice. But the impact of a single Large Language Model query is becoming more and more significant. I totally understand the hype about AI, and GenAI in particular, but beyond the fact that people are training ChatGPT for free, I believe businesses and individuals should be responsible every time they use conversational AI platforms, whether text-based or heavy image-based models on platforms like Midjourney, and keep in mind that each query costs a lot of money and has an environmental impact. Some estimates suggest that a single ChatGPT question consumes between 24x and 236x the energy (in kWh) of a regular Google search. Google handles more than 8 billion searches per day, and if conversational AI is here to replace traditional web search, that is quite an increase.
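To get a feel for the scale, here is a rough back-of-the-envelope sketch of the extra daily energy demand if every web search became an LLM query. The 0.3 Wh-per-Google-search baseline is my assumption (a commonly cited public estimate, not from this article); the 24x-236x multipliers and the 8 billion daily searches come from the figures above.

```python
# Rough estimate of extra daily energy if conversational AI replaced web search.
# Baseline Wh per Google search is an assumption; multipliers are from the
# estimates cited in the article (24x - 236x a regular search).

GOOGLE_WH_PER_SEARCH = 0.3          # assumed baseline, Wh per query
SEARCHES_PER_DAY = 8_000_000_000    # ~8 billion Google queries per day

def extra_energy_mwh_per_day(multiplier: float) -> float:
    """Additional energy (MWh/day) if every search cost `multiplier`x more."""
    extra_wh = GOOGLE_WH_PER_SEARCH * (multiplier - 1) * SEARCHES_PER_DAY
    return extra_wh / 1_000_000  # Wh -> MWh

low = extra_energy_mwh_per_day(24)
high = extra_energy_mwh_per_day(236)
print(f"extra demand: {low:,.0f} - {high:,.0f} MWh per day")
```

Under these assumptions the extra demand lands in the tens to hundreds of gigawatt-hours per day, which is exactly why the "responsible user" question matters and not only the "responsible builder" one.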

But there's also the long-term view, which is more optimistic: AI may make human writing and exploration more efficient and less energy-intensive. The scientific article "The carbon emissions of writing and illustrating are lower for AI than for humans" by Bill Tomlinson, Rebecca W. Black, Donald J. Patterson and Andrew Torrance takes a long-term view on how AI could help us reduce energy demand and emissions by replacing many human writing tasks with AI-driven ones.

The authors took as a benchmark an article in The Writer magazine stating that Mark Twain's output was 300 words per hour on average, and used that as the average writing speed for writers. To compute the carbon emissions of writing in general, the researchers differentiated between writers in different countries, since they used per capita emissions by country: writing a 250-word page in the US emits about 1,400 g CO2e, versus about 180 g CO2e for the same page in India.

For the device itself, the research assumes an average power consumption of 75 W for a typical laptop, emitting 27 g of CO2e over a writing period of 0.8 hours. A desktop computer consumes 200 W, generating 72 g CO2e for writing the same page.

If we instead let OpenAI's ChatGPT write the same page, the AI model produces 130-1,500 times less CO2e per page than a human writer.
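As a minimal sanity check of the paper's device-side arithmetic: the grid carbon intensity of ~450 g CO2e/kWh below is my assumption, back-solved so that the numbers match the 27 g and 72 g figures quoted above, not a value taken from the paper itself.

```python
# Back-of-the-envelope reproduction of the device-emission figures above.
# Grid intensity is an assumption, back-solved from 27 g / (75 W * 0.8 h).

GRID_INTENSITY_G_PER_KWH = 450  # assumed g CO2e per kWh of electricity

def device_emissions_g(power_watts: float, hours: float) -> float:
    """CO2e (grams) from running a device at `power_watts` for `hours`."""
    kwh = power_watts * hours / 1000
    return kwh * GRID_INTENSITY_G_PER_KWH

laptop = device_emissions_g(75, 0.8)    # laptop, one 250-word page
desktop = device_emissions_g(200, 0.8)  # desktop, same page
us_human = 1400                          # per-capita figure for a US writer

print(f"laptop: {laptop:.0f} g CO2e, desktop: {desktop:.0f} g CO2e")
print(f"US human vs laptop: {us_human / laptop:.0f}x more per page")
```

The interesting part is that even the human writer's *device* is a rounding error next to the per-capita footprint attributed to the writing time itself, which is what drives the paper's 130-1,500x range.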

My point here is to illustrate a delicate balance: in the short term we must use AI and train models responsibly, with a clear understanding of which use cases warrant AI and which are better served by other, less energy-demanding tools with a lower footprint. In the long run, I remain optimistic that AI will help us save energy and reduce emissions, but we need to make that transition carefully.

3. The ethical dilemma of technological growth while keeping our commitment to our planet - Looking at the industrial revolution, human curiosity and ambition made us move fast and scale industries, manufacturing and technology without understanding or acting on climate. Humans have that urgency and ambition. I believe we are facing the same dilemma with the growth of Artificial Intelligence.

I'm not a big fan of long-term plans, since life taught me that plans are a basis for change, and that life, society and the economy are so unpredictable that you have to stay open and agile. I planned to launch a mobility business, and then COVID happened. I believe that GenAI and AGI, along with our urge to explore and scale human intelligence, are a punch in the face to some of our Net Zero plans: we did not expect, and still do not fully realize, the compute power and energy demand of these systems. The big risk here is that we will compromise on our Net Zero targets for the sake of scaling Artificial Intelligence. Companies may even go back to fossil fuels, in combination with green energy, to meet the demand of AI.

I'm a big believer that the growth in AI is one of the biggest catalysts pushing the physical world and the energy industry to find creative, clean and affordable ways to meet that demand. I wrote an article about this on Cipher News a while ago, titled "Why we must invest in innovative climate software and AI": https://www.dhirubhai.net/pulse/why-we-must-invest-innovative-climate-software-ai-cipherclimate/.

This is why we see tech giants like Microsoft (Melanie Nakagawa) and Google (Kate Brandt) being among the first to adopt and invest in renewable energy, energy storage and carbon capture. Having spent time building cloud computing businesses and working with startups and VCs at Amazon Web Services (AWS), I can say that what big tech sees in AI growth is immense potential for their cloud businesses: incredible compute power, and storage demand to hold large data sets and train models.

At the heart of this are data centres. According to an article by Sophie McLean on Earth.Org (https://earth.org/environmental-impact-chatgpt/), it's not only the energy demand of using GPT models, but also water consumption. McLean mentions a study reporting that Microsoft used approximately 700,000 litres of freshwater while training its GPT-3 model in its data centres - the same amount of water needed to produce 370 BMW cars or 320 Tesla vehicles. Those figures keep growing as we feed the models with more data and more users; GPT-4 is already in use and GPT-5 is in development. The water is used to cool the data centres, because the large amounts of energy create incredible heat in the facilities.
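To put that comparison in perspective, here is a small sketch that back-solves the per-vehicle water figures implied by the article's comparison. These per-car numbers are derived from the comparison itself, not independent manufacturing data.

```python
# Implied per-vehicle water use, back-solved from the Earth.Org comparison:
# 700,000 L to train GPT-3 ~= 370 BMW cars ~= 320 Tesla vehicles.

TRAINING_WATER_LITRES = 700_000

litres_per_bmw = TRAINING_WATER_LITRES / 370    # implied L of water per BMW
litres_per_tesla = TRAINING_WATER_LITRES / 320  # implied L of water per Tesla

print(f"~{litres_per_bmw:,.0f} L per BMW, ~{litres_per_tesla:,.0f} L per Tesla")
```

In other words, the comparison implies roughly two thousand litres of water per car, which makes a single training run equivalent to a small production line's worth of vehicles.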

4. Hallucinations and the planet - This risk sounds like science fiction, but it is a real issue we discuss in the AI community. We all know the warnings by Elon Musk about AI going wrong, and we have all watched movies where robots and computers take over the world. Hallucination is a milder version of that: a situation where AI confidently produces false outputs - 'out of control', or more accurately, out of human control.

If you follow Dan Ariely, you know that although we might expect humans to be rational, we are very irrational in our behaviours and actions. Artificial Intelligence, and especially AGI (Artificial General Intelligence), mimics the human brain with models like neural networks. As with humans, there are layers that data scientists do not necessarily understand, and that is where hallucination can take place.

Imagine a state where our AI goes into a delusional state of 'mind' and decides that the world is better with fossil fuels. It starts making the wrong decisions on corporate action plans, gives false insights, and starts influencing other systems in the network to follow those guidelines.

This is why, although it sounds a bit like science fiction, this is a real scenario we should address. One of the key aspects of using AI to help our planet is understanding the importance of human experts: we need to constantly make sure AI works for us and the planet, remove biases, and remain critical and responsible when we deploy it within our systems.

5. We must go beyond the hype and use AI to solve real problems with real climate impact - At the end of the panel, Leila Toplic asked us if there was one message we wished the audience to take from the panel. My answer to founders was short and simple: "Don't build AI companies, build companies that solve real problems with real climate impact."

Building a startup requires immense mental and physical resources from founders, from investors, and from the customers who engage with you as you scale. I see too many founders who come to me and say "we want to build an AI company", yet lack clarity and validation about the problem they are solving. The big risk is that we pour all that effort, energy and capital into the hype, rather than building companies that are meaningful and impactful.

I really liked how Raphael Güller put it in his post as well: "AI is a tool, not the solution. Founders: solve problems, don't build an AI company for the sake of it."

I will end with my view that the future of our planet and AI is optimistic, but as humans and as a society we have the ability to help it scale for good. The key is responsible leadership: being aware, being transparent, and taking prompt action.


This is a hot topic and I'm sure you have something to say about it, so feel free to share your thoughts and ask questions in the comments.

If you liked this article and wish to engage more with my content and work, please follow me, Illai Gescheit.


#AI #climatetech #innovation #technology #climatechange #risk #sustainability #climateaction #startups #venturecapital #investment #LLM #GenAI #MachineLearning #LinkedinTopVoice #TopVoice #GreenEconomy #Future

