Unlocking ML Power on Your MacBook Pro: Leveraging the MPS API
I used to envy Windows users for their effortless GPU acceleration with CUDA. But then I discovered the secret to unlocking blazing-fast ML performance on my MacBook Pro: Metal Performance Shaders (MPS).
MPS is Apple's answer to CUDA, allowing you to harness the power of your Mac's GPU for accelerated machine learning tasks.
Here's the magic:
#%% Define the device
import torch

# Use the Apple GPU via MPS when available, otherwise fall back to the CPU
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

model = MultiLabelNet(input_size=input_size,
                      hidden_size=hidden_size,
                      output_size=output_size)
model = model.to(device=device)

for i, (X, y) in enumerate(train_DataLoader):
    # Move each batch to the same device as the model
    X = X.to(device=device)
    y = y.to(device=device)
That's it! You're now training your models with the speed and efficiency of MPS.
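To see the whole flow in one runnable piece, here is a minimal sketch of a single training step. It swaps in a toy `nn.Sequential` model (the layer sizes and random data are placeholders, not the post's actual `MultiLabelNet`); the exact same code runs on MPS or CPU depending on what is available:

```python
import torch
import torch.nn as nn

# Pick MPS when available, otherwise fall back to CPU
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

# Toy stand-in model with made-up sizes (8 inputs, 3 output labels)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3)).to(device)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on random data, all tensors created on the chosen device
X = torch.randn(4, 8, device=device)
y = torch.randint(0, 2, (4, 3), device=device).float()

optimizer.zero_grad()
loss = criterion(model(X), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

The key habit is creating or moving every tensor to `device`; mixing CPU and MPS tensors in one operation raises a runtime error.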
print("Checking for MPS (Metal Performance Shaders) support")
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
print('Using device:', device)
print()

if device.type == 'mps':
    print('Memory Usage:')
    print('Allocated:      ', round(torch.mps.current_allocated_memory() / 1024**3, 1), 'GB')
    print('Driver memory:  ', round(torch.mps.driver_allocated_memory() / 1024**3, 1), 'GB')
    print('Recommended max:', round(torch.mps.recommended_max_memory() / 1024**3, 1), 'GB')
else:
    print('Device:', device)
For a deeper understanding of MPS, refer to the official documentation: https://pytorch.org/docs/stable/notes/mps.html
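One gotcha worth knowing: PyTorch distinguishes between a build that *includes* MPS support and a machine that can actually *use* it. A quick sketch of the two checks (both are real `torch.backends.mps` functions):

```python
import torch

# is_built():     was this PyTorch binary compiled with MPS support?
# is_available(): is a Metal-capable GPU actually usable right now?
print('MPS built:    ', torch.backends.mps.is_built())
print('MPS available:', torch.backends.mps.is_available())

if torch.backends.mps.is_built() and not torch.backends.mps.is_available():
    print('PyTorch has MPS support, but no Metal-capable GPU was found.')
```

If an operator is not yet implemented for MPS, setting the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` before launching Python lets PyTorch silently run that op on the CPU instead of raising an error.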
#MacBookPro #MachineLearning #DeepLearning #MPS #Metal #PyTorch #AI #DataScience