Why you should choose PyTorch over TensorFlow for your research
- Iteration time is faster in PyTorch: because of TF's deferred (graph-building) execution model, every change to a model takes longer to try out.
- PyTorch is more tightly integrated with NumPy.
- Debugging above all: as a researcher you care a lot about the turnaround and debugging time of your models.
- It's much easier to build dynamic graphs in PyTorch right now, which lets a whole class of models be implemented faster and more simply.
- TF is geared towards static graphs; its dynamic constructs are clunky and difficult to use and debug.
- From a lower-level developer's perspective, developing custom operations is much simpler and faster in PyTorch.
- TF custom ops require a lot more boilerplate, and the source is much harder to navigate.
- The documentation for TF's internal/C++ API is nowhere near as good as the Python documentation, which makes building custom operations in TF extra costly.
- While TF is a solid piece of engineering, its overall lack of pursuit of simplicity (Ockham's razor) in its design is evident throughout the entire framework and its use.
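
The NumPy integration mentioned above is worth seeing concretely: a CPU tensor built with `torch.from_numpy` shares memory with the source array, so conversion in either direction is essentially free. A minimal sketch (the array values are just illustrative):

```python
import numpy as np
import torch

# A NumPy array and the tensor built from it share the same buffer,
# so there is no copy in either direction for CPU tensors.
a = np.ones(3, dtype=np.float32)
t = torch.from_numpy(a)   # zero-copy view of the NumPy buffer
t.add_(1.0)               # in-place update on the tensor...
print(a)                  # ...is visible from NumPy: [2. 2. 2.]

b = t.numpy()             # and back again, also without a copy
```

Because the memory is shared, in-place tensor ops mutate the NumPy array too, which is exactly what makes mixing the two libraries so cheap.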
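
The dynamic-graph point can be sketched in a few lines: PyTorch rebuilds the graph on every forward pass, so ordinary Python control flow over the data just works, with no `tf.while_loop`-style constructs. The toy recurrence below is hypothetical, chosen only to show a data-dependent loop count:

```python
import torch

# The graph is traced as the code runs, so a per-example loop count
# is plain Python -- autograd records whatever actually executed.
def forward(x, steps):
    w = torch.ones(1, requires_grad=True)
    h = torch.zeros(1)
    for _ in range(steps):          # loop length can differ per call
        h = torch.tanh(h + w * x)
    return h, w

h, w = forward(torch.tensor([0.5]), steps=3)
h.backward()                        # gradients flow through all 3 steps
print(w.grad is not None)           # True
```

Running `forward` again with a different `steps` simply produces a different graph; nothing has to be declared ahead of time.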
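
To make the custom-op comparison concrete: in PyTorch a differentiable custom operation is a plain Python class with a `forward` and a `backward`, with no registration files or separate build step. The clamped-square op below is a made-up example, not anything from the library:

```python
import torch

# A custom differentiable op: subclass torch.autograd.Function and
# define forward/backward as static methods. That's the whole API.
class ClampedSquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)        # stash inputs for the backward pass
        return (x * x).clamp(max=4.0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        grad = 2 * x * (x * x <= 4.0)   # zero gradient in the clamped region
        return grad_out * grad

x = torch.tensor([1.0, 3.0], requires_grad=True)
y = ClampedSquare.apply(x)
y.sum().backward()
print(x.grad)                           # 2*x where unclamped, 0 where clamped
```

The equivalent TF custom op traditionally means C++ kernel code, an op registration, and a build against the TF headers before you can even test it from Python.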