Image Classification with TensorFlow in an AWS SageMaker-based Jupyter Notebook

I have done a number of image classification exercises with TensorFlow on Google Cloud in the past, and it has always been a time-consuming job to label the images and strike the fine balance between underfitting and overfitting. This time I did a similar exploration with AWS SageMaker, and I have shared the insights in this week's newsletter.

Introduction

I integrated the SageMaker notebook with Keras, importing the libraries needed to gather the training images and send them to TensorFlow so that it produces insights back in the notebook:

As a first option, I reused training image data from Pluralsight/A Cloud Guru to understand how this set of LEGO images is classified in a notebook:

Now I could review the pre-built ipynb and npy file sets from the notebook:

Then I changed the kernel to the right version so that execution runs with the right Python libraries:

Now, executing the import commands brought in keras from tensorflow and plt from matplotlib:
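A minimal sketch of that import cell (NumPy is included because the data sets loaded below are NumPy arrays):

    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow import keras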

Then I made two sets of data (one for training and one for testing) to load into the kernel:
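A minimal sketch of those two loads, assuming the course assets ship as NumPy arrays; the file names here are hypothetical, so use the npy files that come with the notebook:

    # Hypothetical file names -- substitute the npy files from the course assets
    train_images = np.load('lego-train-images.npy')
    train_labels = np.load('lego-train-labels.npy')
    test_images  = np.load('lego-test-images.npy')
    test_labels  = np.load('lego-test-labels.npy')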

Now the LEGO images are loaded and printed in the kernel, with the different pixel variations listed as part of Out[3]:

Then I displayed a LEGO figure, which is essential for understanding what a training image looks like. This is done via a series of plt commands in Python:
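A sketch of the usual plt sequence for inspecting one training image (the colour bar makes the pixel-value range visible):

    plt.figure()
    plt.imshow(train_images[0], cmap=plt.cm.binary)
    plt.colorbar()
    plt.grid(False)
    plt.show()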

Now I need a label set for the LEGO figures used across these data sets in order to apply the right data preprocessing:

The next step is to write human-readable names for each class of LEGO codes, which helps in identifying the right LEGO figure in every analysis run:

Now, printing the name of the LEGO figure displayed earlier:
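A sketch of both steps; the class names below are hypothetical placeholders for whatever labels ship with the dataset:

    # Hypothetical human-readable names, one per label code
    class_names = ['brick-2x2', 'brick-2x4', 'plate-1x2', 'minifig-head']

    # Print the name of the LEGO figure displayed earlier
    print(class_names[train_labels[0]])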

Then a series of commands lists 20 different LEGO figures from the training set along with their labels:
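A sketch of that listing, assuming a 4x5 grid of the first 20 training images:

    plt.figure(figsize=(10, 8))
    for i in range(20):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.imshow(train_images[i], cmap=plt.cm.binary)
        plt.xlabel(class_names[train_labels[i]])
    plt.show()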

TensorFlow Model Training

Now we are in the critical phase of machine learning model training, feeding the classified, labelled data into the Keras services that are integrated into the notebook:
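A minimal sketch of such a model; the layer sizes and the 48x48 input shape are assumptions, not the course's exact network:

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(48, 48)),   # assumed image size
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(len(class_names), activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(train_images, train_labels, epochs=10,
                        validation_data=(test_images, test_labels))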

The fit call then produces a history of the training run across the images used to train the model:

Model Accuracy and Model Loss

Next, a list of Python commands produces the Model Accuracy and Model Loss plots, to understand how effective the training was:
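A sketch of those plotting commands, reading the per-epoch metrics from the history object returned by fit:

    plt.figure(figsize=(10, 4))
    plt.subplot(1, 2, 1)
    plt.plot(history.history['accuracy'], label='train')
    plt.plot(history.history['val_accuracy'], label='validation')
    plt.title('Model Accuracy')
    plt.xlabel('epoch')
    plt.legend()
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='validation')
    plt.title('Model Loss')
    plt.xlabel('epoch')
    plt.legend()
    plt.show()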

Then let us look at the overall test accuracy through the evaluate command:
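In Keras that is a single call:

    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
    print('Test accuracy:', test_acc)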

Single Prediction Exercise

Under Single Prediction, I ran the first code cell to pick a random image from the test set, executed the next code cell to transform the image into a collection of one image, and ran the third code cell to pass that image into the predict method.
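A sketch of those three cells combined:

    # Cell 1: pick a random image from the test set
    i = np.random.randint(len(test_images))
    img = test_images[i]

    # Cell 2: turn it into a collection of one image (a batch of size 1)
    img = np.expand_dims(img, 0)

    # Cell 3: pass the batch into the predict method
    predictions_single = model.predict(img)
    print(predictions_single)   # one probability per class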




Highest-probability Prediction Result

Using argmax to find the highest-probability class within predictions_single:
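A two-line sketch of that lookup:

    predicted_label = np.argmax(predictions_single[0])
    print(class_names[predicted_label])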



Batch Predictions

Similar to the previous lines of code, predicting the labels of all the test images is possible with this batch prediction code:
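A sketch of the batch version, predicting every test image in one call:

    predictions = model.predict(test_images)
    predicted_labels = np.argmax(predictions, axis=1)   # one label per test image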

Now these results are summarised in a bar chart with the help of the commands below:
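One plausible reading of that summary chart is counting how many test images land in each predicted class:

    counts = np.bincount(predicted_labels, minlength=len(class_names))
    plt.bar(range(len(class_names)), counts)
    plt.xticks(range(len(class_names)), class_names, rotation=45)
    plt.ylabel('number of images')
    plt.show()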

Some standard steps of image classification, to summarise from this newsletter:

Source: ChatGPT for this part of the newsletter from here onwards :)

1. Set Up the Environment

  • Create a Notebook Instance: Launch an AWS SageMaker notebook instance with TensorFlow installed or use a custom kernel that includes TensorFlow.
  • Attach an IAM Role: Ensure the notebook instance has an IAM role with permissions to access necessary services like S3.


2. Prepare the Dataset

  • Upload Images: Organize your image dataset into folders (one folder per class) and upload it to an S3 bucket.
  • Download Dataset: Use the notebook to fetch the dataset from S3 or any public repository.
  • Preprocess Images: Resize images to a consistent size (e.g., 224x224 pixels for models like ResNet). Normalize pixel values to fall within a range of [0, 1] or [-1, 1].
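
A minimal sketch of that preprocessing step (the 224x224 target size is the example from the bullet above):

    import tensorflow as tf

    def preprocess(image, label):
        image = tf.image.resize(image, (224, 224))    # consistent input size
        image = tf.cast(image, tf.float32) / 255.0    # scale pixels into [0, 1]
        return image, label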


3. Define the Model

  • Choose a TensorFlow Model: Use pre-trained models (e.g., ResNet, MobileNet, or EfficientNet) from TensorFlow Hub or define a custom CNN.
  • Transfer Learning: Fine-tune a pre-trained model by replacing the top layer with your classification head (softmax layer) to match the number of classes in your dataset.
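
A minimal transfer-learning sketch with Keras; MobileNetV2 and the class count are assumptions:

    import tensorflow as tf

    num_classes = 4   # assumption: set to the number of classes in your dataset
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    base.trainable = False   # freeze the pre-trained feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation='softmax'),  # new head
    ])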


4. Train the Model

  • Load Data: Use TensorFlow’s ImageDataGenerator or tf.data to create a pipeline for loading, augmenting, and batching the dataset (a pipeline sketch follows this list).
  • Split Data: Divide the dataset into training, validation, and test sets.
  • Set Hyperparameters: Define batch size, learning rate, number of epochs, and optimizer (e.g., Adam).
  • Train: Use the model.fit function to train the model on the training set and validate it on the validation set.
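
A minimal tf.data-style pipeline sketch; the folder path is hypothetical and assumes one sub-folder per class, as described in step 2:

    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        'data/train',                 # hypothetical path, one sub-folder per class
        image_size=(224, 224),
        batch_size=32)
    train_ds = train_ds.prefetch(tf.data.AUTOTUNE)  # overlap loading with training

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(train_ds, epochs=10)   # model from the transfer-learning sketch above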


5. Evaluate the Model

  • Test Performance: Evaluate the model on the test set to measure accuracy, precision, recall, or other metrics.
  • Visualize Results: Plot the training/validation accuracy and loss curves to analyze performance over epochs.


6. Save and Deploy the Model

  • Save the Model: Export the trained model to a format like SavedModel or HDF5 and upload it to S3.
  • Deploy with SageMaker Endpoint: Create a SageMaker TensorFlow model using the saved model artifact. Deploy the model to a SageMaker endpoint for real-time inference.
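
A minimal deployment sketch with the SageMaker Python SDK; the S3 artifact path, framework version, and instance type are assumptions:

    import sagemaker
    from sagemaker.tensorflow import TensorFlowModel

    role = sagemaker.get_execution_role()   # IAM role attached to the notebook
    tf_model = TensorFlowModel(
        model_data='s3://my-bucket/models/model.tar.gz',  # hypothetical artifact
        role=role,
        framework_version='2.12')           # assumed TensorFlow version
    predictor = tf_model.deploy(initial_instance_count=1,
                                instance_type='ml.m5.large')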


7. Perform Inference

  • Test Predictions: Send sample images to the deployed endpoint using the SageMaker SDK and receive predictions.
  • Batch Predictions: Use SageMaker batch transform for processing large datasets at once.


Why Use AWS SageMaker for TensorFlow Image Classification

  1. Scalability: Train models on powerful GPU instances, such as ml.p3.2xlarge, for large datasets.
  2. Integration with AWS Services: Seamless connection with S3, SageMaker endpoints, and other AWS tools.
  3. Ease of Deployment: Quickly deploy models to scalable, managed endpoints without manual setup.
  4. Cost-Effectiveness: Use spot instances for training or scale resources up or down as needed.
  5. Prebuilt Environments: SageMaker provides preconfigured environments with TensorFlow and common libraries, saving setup time.

Watch the steps in this YouTube video Link

Please feel free to share your views in the comments section on any simplified steps or better alternative approaches for creating image classification using TensorFlow.


Follow me on LinkedIn: Link

Like this article? Subscribe to the Engineering Leadership, Digital Accessibility, Digital Payments Hub and Motivation newsletters to enjoy reading useful articles. Press the SHARE and REPOST buttons to help share the content with your network.

#LinkedInNewsUK #FinanceLeadership

