Unlocking ML Power on Your MacBook Pro: Leveraging the MPS API

I used to envy users with NVIDIA GPUs and their effortless CUDA acceleration. But then I discovered the secret to unlocking blazing-fast ML performance on my MacBook Pro: Metal Performance Shaders (MPS).

MPS is Apple's answer to CUDA, allowing you to harness the power of your Mac's GPU for accelerated machine learning tasks.
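
Before anything else, it's worth confirming that your PyTorch build actually ships the MPS backend. A quick check, using only standard PyTorch calls (nothing assumed here beyond an installed torch):

import torch

# Was this PyTorch build compiled with MPS support?
print("MPS built:", torch.backends.mps.is_built())

# Is an MPS device actually usable at runtime (macOS 12.3+ on Apple silicon)?
print("MPS available:", torch.backends.mps.is_available())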


Here's the magic:

  • Define your device:

import torch

# Define the device: use MPS if available, otherwise fall back to the CPU
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

  • Assign the device to your model:

model = MultiLabelNet(input_size=input_size,
                      hidden_size=hidden_size,
                      output_size=output_size)
model = model.to(device=device)

  • Send data to the device during training:

for i, (X, y) in enumerate(train_DataLoader):
    # Move each batch to the same device as the model
    X = X.to(device=device)
    y = y.to(device=device)

That's it! You're now training your models with the speed and efficiency of MPS.
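
Putting the three steps together, here is a minimal end-to-end sketch. The data, model, loss, and optimizer below are illustrative placeholders (the MultiLabelNet above is swapped for a small nn.Sequential); substitute your own components.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 1024 samples, 32 features, 5 binary labels each
X_all = torch.randn(1024, 32)
y_all = torch.randint(0, 2, (1024, 5)).float()
train_DataLoader = DataLoader(TensorDataset(X_all, y_all), batch_size=64, shuffle=True)

# Step 1: define the device
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

# Step 2: assign the device to the model (placeholder network)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5)).to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for i, (X, y) in enumerate(train_DataLoader):
        # Step 3: send each batch to the device
        X = X.to(device=device)
        y = y.to(device=device)

        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")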


  • Bonus: Monitor memory usage with:

print("Checking for MPS (Metal Performance Shaders) support")
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
print('Using device:', device)
print()

if device.type == 'mps':
    print('Memory Usage:')
    print('Allocated:       ', round(torch.mps.current_allocated_memory() / 1024**3, 1), 'GB')
    print('Driver memory:   ', round(torch.mps.driver_allocated_memory() / 1024**3, 1), 'GB')
    # recommended_max_memory() reports the recommended maximum working set size
    print('Recommended max: ', round(torch.mps.recommended_max_memory() / 1024**3, 1), 'GB')
else:
    print(f"MPS not available, using device: {device}")
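
If memory gets tight during long runs, PyTorch also exposes torch.mps.empty_cache() to hand cached allocator memory back to the system. A small sketch (how much it helps depends on your workload):

import torch

# Release memory held in the MPS caching allocator back to the OS
if torch.backends.mps.is_available():
    torch.mps.empty_cache()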

For a deeper understanding of MPS, refer to the official documentation: https://pytorch.org/docs/stable/notes/mps.html


#MacBookPro #MachineLearning #DeepLearning #MPS #Metal #PyTorch #AI #DataScience
