Deploying Deep Learning models in production using PyTorch

This answer was originally written on Quora.

PyTorch is, in my opinion, the most productive and easy-to-use Deep Learning framework. It is also easy to deploy in production for medium-sized workloads using the PyTorch library we already know. (A lot of our deployments at ParallelDots are simple PyTorch forward-prop scripts combined with a Python web service.)
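To make the "forward prop behind a Python web service" pattern concrete, here is a minimal sketch using only the standard library. The endpoint name `/predict`, the JSON schema, and the stub `predict` function (which stands in for a real PyTorch `model.forward()` call) are all illustrative assumptions, not a description of ParallelDots' actual services.

```python
# Minimal sketch: a JSON-over-HTTP inference endpoint.
# predict() is a stub standing in for a real PyTorch forward pass
# (which would run inside torch.no_grad() with model.eval() set).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Stand-in for model inference: returns the mean of the inputs."""
    return {"score": sum(features) / max(len(features), 1)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("features", []))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serve forever on port 8000; POST {"features": [1.0, 2.0]} to /predict.
    HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

In a real service you would load the model once at startup (not per request) and keep it in evaluation mode, since model loading is by far the most expensive step.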

However, it long lacked a good story for hosting at very large scale. TensorFlow had an initial advantage here, as Google had taken care of deployment from the beginning.

However, PyTorch now has its own set of tools for highly demanding deployments.

  1. The most obvious route for huge deployments is to train models in PyTorch and deploy them in Caffe2 (a newer fork of Caffe). Both frameworks come from Facebook, and many networks written in PyTorch can be deployed in Caffe2 (see Synced's article "Caffe2 Merges With PyTorch"). There is an official tutorial on how to achieve this: "Transfering a model from PyTorch to Caffe2 and Mobile using ONNX". The technology enabling this is ONNX (Open Neural Network Exchange), a format that aims to make Deep Learning frameworks interoperate.
  2. Using Glow (pytorch/glow), you can compile models into executables that can run on multiple devices.
