Setting Up PyTorch on Windows and Training Your First Model: A Complete Step-by-Step Guide

In this tutorial, we'll walk you through setting up Python and PyTorch on Windows and training your first model.

Set up Python and PyTorch

  • Install Python on Windows, choosing a version that PyTorch officially supports (3.6 through 3.10 at the time this article was written; check the PyTorch site for the current range).
  • Don't forget to add Python to the PATH environment variable during installation.
  • Verify the installation with the command

python --version        

  • Install PyTorch on Windows with the following command (this installs the CUDA 11.8 build):

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118        
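
If you don't have a CUDA-capable GPU, the CPU-only build is enough for this tutorial. The exact command for your setup can be generated from the selector on pytorch.org; a typical CPU-only variant (assuming the CPU wheel index) looks like this:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu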


PyTorch - Explanation

  • PyTorch is an open-source machine learning library developed by FAIR, Facebook AI Research.
  • It was initially designed for deep learning and tensor computation.
  • It offers high performance for training and deploying models, along with a high degree of flexibility.
  • The installation above pulls in three main packages: torch, torchvision, and torchaudio.
  • torch: the core library; it handles multi-dimensional tensors and performs mathematical operations on them.
  • torchvision: an extension of PyTorch for computer vision; it includes pre-trained models, image transforms, and datasets.
  • torchaudio: an extension for audio data; it provides tools to load, transform, and manipulate audio signals.
  • To verify the PyTorch installation, run this command:

python -c "import torch; print(torch.__version__)"        
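
As a quick sanity check beyond the version string, you can run a small tensor computation with the core torch library (a minimal sketch; the values are arbitrary):

import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 tensor
b = torch.ones(2, 2)                          # a 2x2 tensor of ones
print(a + b)                                  # element-wise addition
print(a @ b)                                  # matrix multiplication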

  • The setup is done; let's dive into the code.


Project setup and development

  • Create the desired project directory and, inside it, create a Python file (for example, filename.py).
  • Import the required components:

import torch
import torch.nn as nn
import torch.optim as optim        

Data preparation for heart disease prediction

  • Each row, such as [63, 145, 233, 150], defines the input features: age, blood pressure, cholesterol, and maximum heart rate.
  • Each example is labeled as disease or no disease: 1 for yes, 0 for no.

X = torch.tensor([
    [63, 145, 233, 150],  # Example 1 
    [37, 130, 250, 187],  # Example 2
    [41, 130, 204, 172],  # Example 3
    [56, 140, 236, 178],  # Example 4
    [57, 120, 354, 163]   # Example 5
], dtype=torch.float32)

y = torch.tensor([[1], [0], [0], [1], [1]], dtype=torch.float32)        
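
Note that the raw features above sit on very different scales, which can make training with a sigmoid output and gradient descent slow or unstable. A common optional preprocessing step, not part of the original walkthrough, is to standardize the features:

X = (X - X.mean(dim=0)) / X.std(dim=0)   # zero mean, unit variance per feature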

Simple Neural Network

  • Here we define the architecture of the model.
  • The input is passed through the first linear layer (4 features to 16 hidden units), and its output is then processed by the second fully connected layer (16 units to 1 output).
  • forward: defines how data flows through the network.
  • relu: a non-linear activation that allows the model to learn complex patterns.
  • sigmoid: squashes the output to a probability between 0 and 1, which gives us the binary classification output.

class HeartDiseaseNN(nn.Module):
    def __init__(self):
        super(HeartDiseaseNN, self).__init__()
        self.fc1 = nn.Linear(4, 16)   # 4 input features -> 16 hidden units
        self.fc2 = nn.Linear(16, 1)   # 16 hidden units -> 1 output
        self.sigmoid = nn.Sigmoid()   # squashes the output to a probability

    def forward(self, x):
        x = torch.relu(self.fc1(x))   # hidden layer with ReLU activation
        x = self.fc2(x)               # output layer (raw score)
        return self.sigmoid(x)        # probability of disease

model = HeartDiseaseNN()
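
To sanity-check the architecture before training, you can print the model and push one example through it (a small optional check, not in the original article):

print(model)                  # lists the fc1, fc2, and sigmoid layers
sample = X[:1]                # one example, shape (1, 4)
print(model(sample).shape)    # torch.Size([1, 1]) -- a single probability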

Loss function and Optimizer

  • BCELoss: binary cross-entropy loss, which measures the error between the predicted probabilities and the actual binary labels (0 or 1).
  • optim.SGD: the stochastic gradient descent (SGD) optimizer, which updates the model's parameters during training based on the computed gradients.

criterion = nn.BCELoss()   
optimizer = optim.SGD(model.parameters(), lr=0.01)        
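
To get a feel for what BCELoss measures, here is a tiny standalone example (not part of the tutorial): a confident correct prediction gives a small loss, a confident wrong one a large loss.

bce = nn.BCELoss()
print(bce(torch.tensor([0.9]), torch.tensor([1.0])))   # ~0.105 (good prediction)
print(bce(torch.tensor([0.1]), torch.tensor([1.0])))   # ~2.303 (bad prediction)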

Train and test the model

  • model.train(): puts the model in training mode.
  • outputs = model(X): the input data X is passed through the model to generate predictions.
  • loss: the loss is computed by the criterion, which measures how far the predicted outputs are from the true labels (y).
  • optimizer.zero_grad(): before backpropagation, the previous gradients are cleared. This is necessary because, by default, gradients accumulate in PyTorch (see the small sketch after the training loop).
  • loss.backward(): computes the gradient of the loss with respect to each model parameter via backpropagation.
  • optimizer.step(): the optimizer updates the model's parameters using the computed gradients.
  • Every 10 epochs, the current loss is printed to track how well the model is learning.

for epoch in range(100):
    model.train()
    outputs = model(X)
    loss = criterion(outputs, y)
    optimizer.zero_grad()  
    loss.backward()   
    optimizer.step()  
    if (epoch+1) % 10 == 0:
        print(f"Epoch [{epoch+1}/100], Loss: {loss.item():.4f}")        
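
To illustrate why zero_grad() matters, here is a tiny standalone sketch (separate from the training loop above) showing gradients accumulating across backward() calls:

w = torch.tensor(1.0, requires_grad=True)
(w * 2).backward()
print(w.grad)   # tensor(2.)
(w * 2).backward()
print(w.grad)   # tensor(4.) -- accumulated, which is why we call optimizer.zero_grad()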

Test the model:

  • The test_data tensor holds the input features for a new patient.
  • The model predicts the probability of the disease based on the trained weights.
  • You can threshold the predicted probability to classify the patient as having the disease (1) or not (0).

test_data = torch.tensor([[50, 140, 220, 160]], dtype=torch.float32)  # Age, BP, Chol, Max HR
predicted_prob = model(test_data)        

  • Convert the predicted probability into a binary decision, 1 (disease present) or 0 (no disease), using a threshold of 0.5.

predicted_class = 1 if predicted_prob.item() > 0.5 else 0
print(f"Predicted Class (1 = Disease, 0 = No Disease): {predicted_class}")        
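
The snippet above calls the model directly; for inference it is idiomatic in PyTorch to switch to evaluation mode and disable gradient tracking. A minimal variant of the same prediction:

model.eval()                    # evaluation mode (matters for layers like dropout/batchnorm)
with torch.no_grad():           # no gradients are needed for inference
    predicted_prob = model(test_data)
predicted_class = 1 if predicted_prob.item() > 0.5 else 0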

  • Run the script from the project directory:

python filename.py        

Output:


Output explanation:

  • The training progress is printed every 10 epochs, showing the loss at each step. The loss decreases over time as the model learns.
  • The final output is the predicted class for a new patient with certain features, based on the trained model.


