Introducing PyTorch Across Google Cloud

Oct. 4, 2018, 11:38 p.m. By: Kirti Bakshi


Alongside the release of the PyTorch 1.0 preview, Google Cloud is broadening its support for PyTorch across its AI platforms and services.

PyTorch is a deep learning framework designed for easy and flexible experimentation. With the release of the PyTorch 1.0 preview, the framework now supports not only a hybrid Python and C/C++ front end but also fast, native distributed execution for production environments.
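The hybrid front end refers to what shipped as TorchScript: a model written as ordinary Python can be traced into a serializable graph that runs without a Python interpreter, for example from C++. A minimal sketch of tracing, assuming PyTorch 1.0 or later (the model and shapes here are illustrative, not from the announcement):

```python
import torch
import torch.nn as nn

# An ordinary eager-mode PyTorch model (illustrative example).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
example = torch.rand(1, 4)

# Trace the model into a TorchScript graph that no longer needs Python,
# then serialize it for loading from a C++ or production environment.
traced = torch.jit.trace(model, example)
traced.save("tiny_net.pt")
```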

At Google Cloud, the ultimate aim is to support the full spectrum of machine learning (ML) practitioners. ML developers use many different tools, and Google Cloud has integrated several of the most popular open source frameworks into its products and services, including TensorFlow, PyTorch, scikit-learn, and XGBoost.

Accordingly, Google Cloud is announcing support for PyTorch 1.0 in the following areas:

1. Deep Learning VM Images:

Google Cloud Platform provides a set of virtual machine (VM) images that include everything an individual might need to get started with various deep learning frameworks. A community-focused PyTorch VM image has been available for a while, and a new VM image that includes the PyTorch 1.0 preview is now being shared.

This is the fastest way to try out the latest PyTorch release: NVIDIA drivers are already set up, and JupyterLab comes pre-installed with sample PyTorch tutorials.
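Once such a VM is up, a quick way to confirm that the pre-installed drivers and framework work together is a short GPU sanity check from the bundled JupyterLab. A minimal sketch using only standard PyTorch calls (nothing Google-specific is assumed):

```python
import torch

# Verify the PyTorch build and that the NVIDIA drivers expose a GPU.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# Run a small matrix multiply on the GPU if present, else on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(1024, 1024, device=device)
y = x @ x
print("Result device:", y.device, "mean:", y.mean().item())
```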

2. Kubeflow:

Kubeflow is an open source platform designed to make end-to-end ML pipelines easy to deploy and manage. Kubeflow already supports PyTorch, and the Kubeflow community has developed a PyTorch package that can be installed in a Kubeflow deployment with just two commands.
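For context, below is a minimal sketch of the kind of training entry point such a Kubeflow PyTorch job runs. It assumes the operator's usual behavior of injecting the rendezvous settings (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) into each replica's environment, so the "env://" initialization method can discover the cluster on its own; the toy all-reduce is illustrative only:

```python
import torch
import torch.distributed as dist

def main():
    # Kubeflow's PyTorch operator sets MASTER_ADDR, MASTER_PORT,
    # RANK, and WORLD_SIZE for each replica, which "env://" reads.
    dist.init_process_group(backend="gloo", init_method="env://")
    rank = dist.get_rank()
    world = dist.get_world_size()

    # Toy all-reduce to show the replicas are wired together.
    t = torch.tensor([float(rank)])
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}/{world} sees sum of ranks = {t.item()}")

if __name__ == "__main__":
    main()
```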

In addition, in collaboration with NVIDIA, the TensorRT package in Kubeflow has been extended to support serving PyTorch models. The aim is to make Kubeflow the easiest way to build PyTorch pipelines that are portable, scalable, composable, and able to run anywhere.

3. TensorBoard integration:

PyTorch users have repeatedly said they would appreciate deeper integration with TensorBoard, a popular suite of machine learning visualization tools. With that in mind, the TensorBoard and PyTorch developers are now collaborating to make it simpler to monitor PyTorch training with TensorBoard.
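At the time of this announcement, the usual bridge was the community tensorboardX package (a built-in torch.utils.tensorboard module arrived in a later PyTorch release). A minimal sketch of logging a training curve, assuming tensorboardX is installed and using a toy model for illustration:

```python
import torch
from tensorboardX import SummaryWriter  # pip install tensorboardX

# Write scalar summaries that TensorBoard can plot live.
writer = SummaryWriter("runs/pytorch-demo")

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(100):
    x = torch.rand(32, 10)
    y = torch.rand(32, 1)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    writer.add_scalar("train/loss", loss.item(), step)  # one point per step

writer.close()
# Then inspect with: tensorboard --logdir runs
```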

4. PyTorch on Cloud TPUs:

Over the past several years, much of the tremendous progress in machine learning has been driven by a great increase in the amount of computing power that can be harnessed to train and run ML models. This change motivated Google to develop three generations of custom ASICs called Tensor Processing Units (TPUs), which are specialized for machine learning. The second and third generations of these chips are available on Google Cloud as Cloud TPUs.

Google is also pleased to announce that engineers on its TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs. The long-term goal is to let everyone enjoy the simplicity and flexibility of PyTorch while benefiting from the performance, scalability, and cost-efficiency of Cloud TPUs.

As a first step, the engineers involved have produced a prototype that connects PyTorch to Cloud TPUs via XLA, an open source linear algebra compiler.

The prototype has already been used to train a PyTorch implementation of ResNet-50 on a Cloud TPU, and the plan is to open source it and then expand it in collaboration with the PyTorch community.
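The prototype was not yet public when this was announced, but the interface eventually released as the open source torch_xla package gives a sense of the approach: the XLA backend exposes the TPU as an ordinary PyTorch device. A rough sketch, assuming torch_xla is installed and a TPU is attached (the calls shown are from the later public release, not the prototype itself):

```python
import torch
import torch_xla.core.xla_model as xm

# XLA exposes the TPU as a regular PyTorch device.
device = xm.xla_device()

model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.rand(32, 10, device=device)
y = torch.rand(32, 1, device=device)

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

# xm.optimizer_step() applies the update and lets XLA compile and
# execute the accumulated computation graph on the TPU.
xm.optimizer_step(optimizer)
print(loss.item())  # fetching the value forces execution
```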

And PyTorch 1.0 is just the beginning; there is a lot more to come.
