TensorFlow vs PyTorch

PyTorch and TensorFlow are two deep learning frameworks used to build neural networks; they serve largely the same function and intended purpose. The major difference between the two is how the computation graph is constructed and executed.

TensorFlow and Static Graph Construction

In TensorFlow, a symbolic graph is created first, representing all of the operations to be performed. Only after the graph is built are actual values fed in and numerical outputs computed. This is advantageous because the graph needs to be constructed (and optimized) only once, rather than rebuilt every time an output is produced. This is known as static graph construction, since the graph is created once and only once.
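The define-then-run idea can be sketched in plain Python without TensorFlow itself. This is only a toy illustration of the concept; the Placeholder and Mul classes below are invented for this sketch, not part of any framework:

```python
# Toy "define-then-run" graph: build symbolic nodes first, feed values later.

class Placeholder:
    """A symbolic input whose value is supplied only at execution time."""
    def run(self, feed):
        return feed[self]

class Mul:
    """A symbolic multiplication of two graph nodes."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def run(self, feed):
        return self.a.run(feed) * self.b.run(feed)

# Build the graph once...
x, y = Placeholder(), Placeholder()
out = Mul(x, y)

# ...then execute it many times with different inputs,
# much like calling sess.run() with a feed_dict in TensorFlow 1.x.
print(out.run({x: 2.0, y: 3.0}))  # 6.0
print(out.run({x: 4.0, y: 5.0}))  # 20.0
```

Note that building `out` performs no arithmetic at all; computation happens only when `run` is called, which is what lets a real framework optimize the whole graph up front.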

PyTorch and Imperative Programming

In PyTorch, however, the graph is created dynamically as operations execute, an approach known as imperative programming, so there is no separate compilation step. This is much more compatible with idiomatic Python, since the output at every step is a concrete tensor (easily converted to a NumPy array) that can simply be printed. It also makes PyTorch compatible with the pdb Python debugger. The primary disadvantage of PyTorch, however, is that the graph is recreated every time an input is passed in: during training, a new graph is built for each new input. For this reason, PyTorch is generally less efficient for inference.
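By way of contrast, here is a minimal sketch of the imperative (define-by-run) style using plain NumPy: every line executes immediately and yields a concrete array that can be printed or inspected under pdb, which is the property the text attributes to PyTorch:

```python
import numpy as np

# Imperative style: each operation runs right now, not deferred to a session.
x = np.array([1.0, 2.0, 3.0])
y = x * 2.0        # y is a concrete array the moment this line executes
print(y)           # inspectable at any step, e.g. from the pdb debugger
z = y.sum()
print(z)           # 12.0
```

There is no graph-building phase to step around: the program's control flow (loops, conditionals) is just ordinary Python, which is why debugging feels native.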

Eager Execution: TensorFlow Strikes Back

In response to the rise of PyTorch, TensorFlow introduced a new feature, Eager Execution, which mimics PyTorch's dynamic graph. Whereas a traditional TensorFlow session first builds a static graph, Eager Execution provides an imperative programming environment similar to PyTorch's, in which the graph is created on the fly, enabling easier debugging and more intuitive control flow with standard Python statements.

In Summary...

While TensorFlow and PyTorch are both excellent deep learning frameworks, they are best suited to different settings. TensorFlow is effective when a fixed set of operations needs to be performed, and its graphs are portable across the supported languages thanks to the universal protocol buffer format. PyTorch, on the other hand, creates the compute graph dynamically, making it easier to use built-in Python control structures and the pdb debugger. It comes down to the use case for your deep learning application: they are two different means to the same end of building a deep neural network.

