Instance Segmentation Using Mask-RCNN

This article is about instance segmentation, and I will be using Mask-RCNN to train my machine learning model. I will use the supervise.ly web platform to build the deep learning model, since it provides a single environment for the whole task.

Instance segmentation combines object detection, where the goal is to classify individual objects and localize them with a bounding box, and semantic segmentation, where the goal is to classify each pixel into one of the given classes. In instance segmentation, we care about detecting and segmenting each instance of an object separately.

We have two types of segmentation: semantic segmentation and instance segmentation.

In semantic segmentation, items of the same type are assigned the same colour, whereas in instance segmentation, items of the same type are coloured differently.

[Image: the same scene segmented with semantic segmentation vs. instance segmentation]

The above image shows the difference between semantic segmentation and instance segmentation: in the semantic output all people share the same colour, while in the instance output each person gets a unique colour.

About Mask-RCNN

Mask-RCNN is a deep neural network designed to solve the instance segmentation problem in computer vision. In other words, it can separate the different objects in an image or a video: you give it an image, and it gives you back the object bounding boxes, classes and masks.
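
To make that input/output contract concrete, here is a minimal inference sketch using torchvision's Mask-RCNN pretrained on COCO. This is only an illustration, not the Supervisely pipeline used in the rest of this article, and the image path is a placeholder.

# Minimal Mask-RCNN inference sketch (torchvision, pretrained on COCO).
# Assumptions: torch/torchvision installed; "test.jpg" is a placeholder path.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Older torchvision API; on newer versions pass weights=... instead of pretrained=True
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("test.jpg").convert("RGB")
tensor = F.to_tensor(image)              # HWC uint8 -> CHW float in [0, 1]

with torch.no_grad():
    output = model([tensor])[0]          # one result dict per input image

keep = output["scores"] > 0.5            # drop low-confidence detections
boxes = output["boxes"][keep]            # (N, 4) bounding boxes
labels = output["labels"][keep]          # (N,) COCO class ids
masks = output["masks"][keep] > 0.5      # (N, 1, H, W) binary instance masks
print(len(boxes), "objects detected")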

Let's start with Supervisely. First of all, we need to log in and create a team.

Give the team a relevant title and description, and save it.

Our team is created and now we will start working inside it.

First of all, we need to upload the dataset that will be used to train the model. We have two options: upload our own data, or use Supervisely's dataset library. I will be using the dataset library for the image data.

Select any of the datasets and click on Clone Project to add it to your projects.

Now we can see that the dataset is present in our projects.

Now we need to annotate the images using the annotation tool provided by Supervisely. Annotation tells the model which parts of each image are important: we first train the model on these annotations, and it then produces the segmented output.

To annotate the images, I used the Bitmap tool.

In the same way, we need to annotate all the images. Since we have very few images, we need to use augmentation; in Supervisely this is done with DTL (Data Transformation Language).

We have to provide a DTL config for the augmentation to run. The DTL code that I used is:

[
  {
    "dst": "$data",
    "src": [
      "Console Dataset/*"
    ],
    "action": "data",
    "settings": {
      "classes_mapping": "default"
    }
  },
  {
    "dst": "$flip_vert",
    "src": [
      "$data"
    ],
    "action": "flip",
    "settings": {
      "axis": "vertical"
    }
  },
  {
    "dst": "Console Dataset_Aug",
    "src": [
      "$data",
      "$resized_result",
      "$resized_result2",
      "$noise_result",
      "$flip_vert"
    ],
    "action": "supervisely",
    "settings": {}
  },
  {
    "action": "resize",
    "src": [
      "$data"
    ],
    "dst": "$resized_result",
    "settings": {
      "width": 800,
      "height": -1,
      "aspect_ratio": {
        "keep": true
      }
    }
  },
  {
    "action": "noise",
    "src": [
      "$data"
    ],
    "dst": "$noise_result",
    "settings": {
      "mean": 10,
      "std": 60
    }
  },
  {
    "action": "resize",
    "src": [
      "$data"
    ],
    "dst": "$resized_result2",
    "settings": {
      "width": 300,
      "height": -1,
      "aspect_ratio": {
        "keep": true
      }
    }
  }
]
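
In short, the graph above takes the annotated Console Dataset, builds a vertically flipped copy, two aspect-preserving resized copies (widths 800 and 300) and a noisy copy (Gaussian noise, mean 10, std 60), and merges everything into a new project called Console Dataset_Aug. For intuition only, here is a rough Pillow/NumPy sketch of the same image-level transforms; Supervisely's DTL also transforms the annotations accordingly, which this sketch ignores, and the flip direction and file name are illustrative assumptions.

# Rough sketch of the image-level transforms in the DTL graph above.
import numpy as np
from PIL import Image

def flip_vertical(img):
    # Illustrative top-to-bottom flip; the exact "vertical" convention is Supervisely's.
    return img.transpose(Image.FLIP_TOP_BOTTOM)

def resize_keep_aspect(img, width):
    # height = -1 in the DTL config means "derive the height from the aspect ratio"
    height = round(img.height * width / img.width)
    return img.resize((width, height))

def add_gaussian_noise(img, mean=10, std=60):
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(mean, std, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

src = Image.open("console_0001.jpg")      # placeholder file name
augmented = [
    src,                                  # original ($data)
    flip_vertical(src),                   # $flip_vert
    resize_keep_aspect(src, 800),         # $resized_result
    resize_keep_aspect(src, 300),         # $resized_result2
    add_gaussian_noise(src),              # $noise_result
]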

Now we will select the neural network model that we will train on our data.

Supervisely provides almost the entire workflow required for the project. The one thing it does not provide is hardware resources (RAM + CPU + OS).

So we have to add our own resources to run the neural network code.

On the left side of Supervisely there is a Cluster option; click on that.

By default we have one cluster, Supervisely Agent; this is the master cluster that controls all the other clusters.

Now we need to add a slave cluster that will provide the resources (RAM + CPU + OS).

Adding a cluster to Supervisely:

Tasks like training a neural network or running DTL are deployed on computational machines using agents.

To add our own cluster, navigate to Cluster and click Add. After that, we need to connect an agent (a machine).

So we need a machine running a Linux OS with Docker, NVIDIA-Docker and GPU support. For that purpose, we will launch a machine on AWS.

We will select an AMI (Amazon Machine Image) that fulfils all our requirements; this time I have used the Deep Learning AMI (Ubuntu 18.04) Version 30.2.

After selecting the AMI (image), we will select the GPU instance type. I have selected g4dn.4xlarge.
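
As a side note, the same launch can also be scripted. Below is a hedged boto3 sketch, assuming AWS credentials are already configured; the AMI id, region and key pair name are placeholders and should be replaced with the Deep Learning AMI id for your region.

# Hedged sketch: launching the GPU instance with boto3 instead of the EC2 console.
# ImageId, region_name and KeyName below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: Deep Learning AMI (Ubuntu 18.04)
    InstanceType="g4dn.4xlarge",       # the GPU instance type chosen above
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])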

After completing the remaining configuration, we will launch the EC2 instance and run the code provided by Supervisely.

This code installs Docker and, inside Docker, all the packages required to run our Python code.

After a successful installation, we will see that our cluster has been added to Supervisely and is in the running state.

Now it is time to train our model; we will click on the Train button under Neural Networks.

We will be brought to the Run Plugin page. Here we provide the training data as the input project, give the result a title, and click Run; the model will then start training on our dataset.

After the model is successfully built, we can test it with the test data.

Input Image: [the original test image]

Output Image: [the same image with each detected instance highlighted by its own mask colour]
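
For readers reproducing this outside Supervisely, here is a small sketch of how such an output image can be drawn, reusing the placeholder names `image` and `masks` from the torchvision sketch earlier in the article; colours are assigned randomly per instance.

# Sketch: overlay predicted instance masks on the test image,
# one random colour per instance (reuses `image` and `masks` from the earlier sketch).
import numpy as np
import matplotlib.pyplot as plt

overlay = np.asarray(image).astype(np.float32)
rng = np.random.default_rng(0)
for mask in masks:                       # masks: (N, 1, H, W) boolean tensor
    colour = rng.integers(0, 256, size=3).astype(np.float32)
    m = mask[0].numpy()                  # (H, W) boolean mask for one instance
    overlay[m] = 0.5 * overlay[m] + 0.5 * colour  # blend colour into masked pixels

plt.imshow(overlay.astype(np.uint8))
plt.axis("off")
plt.show()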


Hope you enjoyed it.

Thank you


