Why Should Sustainability Be a First-Class Consideration for AI Systems?
Green Software Foundation
Creating a trusted ecosystem of people, standards, tooling and best practices for building green software
Should sustainability be a first-class consideration for AI systems?
Yes, because AI systems have environmental and societal implications. What can you do to make green AI a reality?
Data scientists, machine learning engineers, and other technical stakeholders involved in the AI lifecycle are well versed in the business and functional considerations that guide the design, development, and deployment of AI systems. But should sustainability be an equal, first-class citizen in that list of considerations?
Yes! Particularly because AI systems carry both environmental and societal implications.
Incorporating sustainability into AI can help us advance social justice, especially because these systems operate in an inherently socio-technical context. Indeed, a harmonized approach that accounts for both societal and environmental considerations in the design, development, and deployment of AI systems can yield gains that support the triple bottom line: profit, people, and planet.
Challenges with the current paradigm
The current paradigm of AI systems, heavily skewed towards ever-larger models in pursuit of state-of-the-art (SOTA) performance, encourages exploitative data practices, centralization of power and homogeneity, and intertwined societal and environmental impacts.
Exploitative data practices
As we build larger AI models, they tend to have higher “capacity”. That is, they need larger datasets to prevent overfitting and to adequately capture the distribution of the input data so that they generalize well. When such systems are trained on publicly available data (with caveats around how consent was obtained for that data), this approach might work well. But when they are built on top of private data, it means engaging in deeply exploitative data practices, trying to gather as much data as possible from the users of the system. This can also take the form of nudging users through dark design patterns that make the technology addictive: endless newsfeeds, autoplay videos, intrusive recommendations, automatic scraping of contact books, and so on.
Hinders democratization of AI
Such large systems also hinder the democratization of AI: they require huge computational infrastructure to run effectively, which costs a great deal of money (and carries significant environmental impacts) that only the most affluent corporations, research labs, and universities can afford. This leads to a centralization of power and a homogeneity in the crop of solutions, research practices, products, and services, serving the needs of the few while ignoring those of the many.
Centered on business and functional requirements, ignoring environmental costs
Finally, a paradigm strongly centered on business and functional requirements over all other considerations encourages an ecosystem in which manufacturers blindly pursue performance gains to lure consumers towards their AI hardware and software, without paying heed to the environmental costs of such systems. In a recently released performance benchmark from MLCommons, the number of submissions reporting energy consumption dwindled by about 50% compared to the previous iteration of the benchmark, even as the participating manufacturers placed still greater emphasis on performance.
What is sustainable AI?
In a nutshell, sustainable AI refers to the awareness and effort invested in making sustainability a first-class citizen, on par with business and functional requirements, in the design, development, and deployment of AI systems.
It should take a lifecycle approach to accounting for the carbon impacts of such systems: from the hardware that runs them, through the software itself, and back up to the devices on which users consume the resulting applications.
This includes:
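Concretely, one place to start on the operational side is to measure the energy use and estimated emissions of your own training runs. The sketch below is a minimal illustration, assuming the open-source codecarbon package and a scikit-learn toy model; both are illustrative choices rather than tools prescribed by this article.

```python
# Minimal sketch: measuring the energy/carbon footprint of a training run.
# Assumes the open-source `codecarbon` package (pip install codecarbon) and
# scikit-learn; both are illustrative choices, not prescribed by the article.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=50_000, n_features=40, random_state=0)

tracker = EmissionsTracker(project_name="model-training-demo")
tracker.start()
try:
    model = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
    model.fit(X, y)
finally:
    # stop() returns the estimated emissions for the tracked block in kg CO2eq,
    # based on measured/estimated energy use and the local grid's carbon intensity.
    emissions_kg = tracker.stop()

print(f"Estimated training emissions: {emissions_kg:.6f} kg CO2eq")
```

Logging a number like this alongside accuracy and latency is one way to make sustainability visible in the same dashboards that already drive model decisions.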
What can you do next?
There are some immediate next steps that you can take to make sustainable AI systems a reality:
If you find other ways to reduce the impact of AI systems on the environment, please don’t hesitate to reach out to us at the Green Software Foundation. Consider getting involved in our work on constructing the Software Carbon Intensity (SCI) Standard, which creates a standardized and interoperable way to measure the impact of software systems, empowering both developers and consumers to make informed, greener choices.
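To make the SCI concrete, here is a small sketch of its headline formula, SCI = ((E × I) + M) per R, applied to a hypothetical inference service. The helper function and all of the numbers below are made up for illustration; consult the SCI specification itself for the authoritative definitions and measurement guidance.

```python
# Illustrative sketch of the Software Carbon Intensity (SCI) formula:
#   SCI = ((E * I) + M) per R
# where E is operational energy (kWh), I is the grid's carbon intensity
# (gCO2eq/kWh), M is embodied emissions amortized to the same window (gCO2eq),
# and R is the functional unit (here: one inference request).
# All values below are hypothetical; see the SCI specification for real usage.

def sci_per_request(energy_kwh: float,
                    carbon_intensity_g_per_kwh: float,
                    embodied_g: float,
                    requests: int) -> float:
    """Return grams of CO2eq per request for a given measurement window."""
    operational_g = energy_kwh * carbon_intensity_g_per_kwh
    return (operational_g + embodied_g) / requests

# Hypothetical hour of serving an ML model:
score = sci_per_request(
    energy_kwh=1.2,                    # energy drawn by the serving hardware
    carbon_intensity_g_per_kwh=450.0,  # regional grid intensity
    embodied_g=30.0,                   # amortized share of hardware manufacturing emissions
    requests=18_000,                   # functional unit R: inference requests served
)
print(f"SCI ≈ {score:.4f} gCO2eq per request")
```

Because the score is normalized per functional unit, it rewards genuine efficiency improvements rather than simply shifting or offsetting emissions.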
This article is based on a research article published by Abhishek Gupta for The Gradient titled “The Imperative for Sustainable AI Systems”. Follow the link to read a more extended version of this article.
Article By: Abhishek Gupta
Originally published in the Green Software Foundation blog.
Founder and Principal Researcher, Montreal AI Ethics Institute | Director, Responsible AI @ BCG | Helping organizations build and scale Responsible AI programs | Research on Augmented Collective Intelligence (ACI)