Sustainable ML - Monitor Power Consumption
https://www.forbes.com/sites/esade/2023/03/17/we-need-to-make-machine-learning-sustainable-heres-how/?sh=4a1a4f816d25

When training models, we should also consider the power consumption of the hardware. The following paper compares the most common tools for estimating it:

Estimate carbon footprint when training deep learning models

The tool presented in this article is CodeCarbon, a Python package that estimates the electricity consumed by the hardware (GPU + CPU + RAM) and applies the carbon intensity of the region where the computing is done. More information about the tool and how it works can be found here:

CodeCarbon
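
In essence, the estimate is the energy the hardware draws multiplied by the carbon intensity of the local grid. A back-of-the-envelope sketch of that calculation (the power, duration, and carbon-intensity figures below are made-up example values, not CodeCarbon output):

# Rough estimate: emissions = energy used (kWh) x grid carbon intensity (kgCO2eq/kWh)
avg_power_watts = 250    # example average draw of GPU + CPU + RAM
training_hours = 8       # example length of the training run
carbon_intensity = 0.23  # example kgCO2eq per kWh for the local grid

energy_kwh = avg_power_watts / 1000 * training_hours
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.2f} kWh -> {emissions_kg:.2f} kgCO2eq")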

To install CodeCarbon, you can use one of the following commands:

# From PyPI repository
pip install codecarbon

# From Conda repository
conda install -c conda-forge codecarbon        

and then you will need to create an experiment ID to be used for the power consumption tracking:

# To get an experiment_id, run the following (the "!" prefix is only needed inside a notebook):
! codecarbon init

# You can now store it in a .codecarbon.config file at the root of your project:
[codecarbon]
log_level = DEBUG
save_to_api = True
experiment_id = <the experiment_id you get with init>

There are two options for monitoring the power consumption. The first is to use the command line, which tracks emissions independently of the code:

codecarbon monitor         

The second is to use the following decorator in the Python code to track specific functions:

from codecarbon import track_emissions

@track_emissions()
def your_function_to_track():
    # your training or inference code goes here
    ...
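
If you prefer to track an arbitrary block of code rather than a whole function, CodeCarbon also provides an EmissionsTracker object. A minimal sketch (the train_model() call and the project_name are placeholders):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="my-training-run")
tracker.start()
try:
    train_model()  # placeholder for your training code
finally:
    emissions = tracker.stop()  # returns the estimated emissions in kg CO2eq
    print(f"Estimated emissions: {emissions:.4f} kg CO2eq")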

CodeCarbon has its own dashboard, which can be seen here: CodeCarbon Dashboard.
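
In addition to the dashboard, the tracker writes its measurements to a local emissions.csv file by default (assuming the default save_to_file behaviour has not been disabled), which can be inspected directly. A small sketch using pandas:

import pandas as pd

# emissions.csv is created in the working directory by default;
# column names can vary slightly between CodeCarbon versions
df = pd.read_csv("emissions.csv")
print(df[["timestamp", "duration", "emissions", "energy_consumed"]])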

Comet-ML

There is also another service, Comet-ML, which can be used to track, compare, and optimise models across the whole ML lifecycle.

To install comet-ml, you can use the following command:

pip install comet-ml        

It also requires creating an account on Comet's website and generating an API key. Using the API key, you can create a CodeCarbon-enabled experiment with the following commands:

from comet_ml import Experiment 
experiment = Experiment(api_key="your API key")        
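
A slightly fuller sketch of how such an experiment is typically used during training (the metric name, project name, and train_one_epoch() are placeholders; the API key can also be supplied through the COMET_API_KEY environment variable):

from comet_ml import Experiment

# project_name is optional and only groups runs in the Comet portal
experiment = Experiment(api_key="your API key", project_name="sustainable-ml")

for epoch in range(10):
    loss = train_one_epoch()  # placeholder for your training step
    experiment.log_metric("loss", loss, step=epoch)

experiment.end()  # flush the logged data and close the experiment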

When run, the above code starts CodeCarbon and generates a URL to access Comet; the console output shows the power consumption reported by CodeCarbon.

The Comet URL will appear in the Python output and will look like this:

COMET INFO: Experiment is live on comet.com https://www.comet.com/<user ID>/general/08e1439a03d4435c8c0b9212a6e931d7        

The Comet portal also provides GPU, CPU, and memory metrics. This information can help you understand the impact that ML training, or the wider ML lifecycle, has on the hardware:

GPU memory usage and overall utilisation

GPU power usage and temperature

Memory usage and CPU utilisation

Network Usage and Disk Utilisation

The CodeCarbon output can also be seen in the Comet portal.

CodeCarbon has some limitations regarding hardware. If it cannot monitor the CPU or RAM directly, it falls back to a generic, fixed default power value. The GPU power consumption, however, seems to be more accurate, as long as you are using an NVIDIA card.
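
The GPU readings are more reliable because NVIDIA exposes the card's actual power draw through NVML, which CodeCarbon queries via the pynvml bindings. A quick way to check what is visible on your machine (a sketch that calls pynvml directly and assumes an NVIDIA driver is installed):

import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # NVML reports milliwatts
    print(f"GPU {i}: {name} drawing {power_w:.1f} W")
pynvml.nvmlShutdown()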


#powerconsumption #machinelearning #codecarbon #cometml #sustainability















