T2T: make Deep Learning more accessible and accelerate research

Tensor2Tensor (T2T) is a library built on top of TensorFlow. It consists of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. T2T is actively used and maintained by researchers and engineers within the Google Brain team and a community of users.

The research process can be tedious: reimplementing data download and preprocessing, redoing hyper-parameter tuning, and adapting code to run on different hardware configurations.

T2T provides a collection of ready-to-use standard models for standard tasks, with data download and preprocessing included in an organized fashion. This lets researchers load and preprocess datasets, use existing models, create new models and datasets, and share their work with the community.

  • Datasets: ImageNet, CIFAR, MNIST, COCO, WMT, LM1B, …
  • Models and hyper-parameter sets: Transformer, ResNet, RevNet, ShakeShake, Xception, SliceNet, ByteNet, …
  • Scripts: training, decoding, exporting, …
  • Scale: multi-GPU, Cloud TPU, TPU Pods, …
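
The pieces above fit together through a small set of command-line scripts. A minimal sketch of the typical workflow, assuming the `t2t-datagen` and `t2t-trainer` entry points described in the T2T README (exact problem, model, and hparams-set names vary across versions):

```shell
# Hedged example: flags and names follow the T2T README,
# but may differ in the installed version.
pip install tensor2tensor

DATA_DIR=$HOME/t2t_data
TRAIN_DIR=$HOME/t2t_train

# Download and preprocess the WMT English-German dataset.
t2t-datagen \
  --data_dir=$DATA_DIR \
  --tmp_dir=/tmp/t2t_tmp \
  --problem=translate_ende_wmt32k

# Train a Transformer with a predefined hyper-parameter set.
t2t-trainer \
  --data_dir=$DATA_DIR \
  --output_dir=$TRAIN_DIR \
  --problem=translate_ende_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu
```

Because the dataset, model, and hyper-parameter set are all named flags, swapping any one of them (e.g. a different dataset or model) leaves the rest of the command unchanged.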


T2T has more than 100 contributors inside and outside Google and more than 130K downloads.


T2T also simplifies common tasks:

  • Get datasets: T2T knows where to get the data and how to preprocess it.
  • Use models as Keras layers: take T2T models and plug them into other models, just like layers.
  • Create your own datasets and models.
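
For the first task, each dataset is registered as a named "problem", and the registry can be inspected before generating data. A hedged sketch, assuming the `--registry_help` flag mentioned in the T2T README and the `image_mnist` problem name (both version-dependent):

```shell
# Print the registry of available problems (datasets), models,
# and hyper-parameter sets known to this T2T install.
t2t-trainer --registry_help

# Fetch and preprocess a single registered dataset, e.g. MNIST.
t2t-datagen \
  --data_dir=$HOME/t2t_data \
  --tmp_dir=/tmp/t2t_tmp \
  --problem=image_mnist
```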

What is next for T2T

  • Video Models 
  • RL and GANs

Video: Tensor2Tensor (TensorFlow @ O’Reilly AI Conference, San Francisco '18)

Regards

