
What is PyTorch? Python machine learning on GPUs

PyTorch 1.10 is production ready, with a rich ecosystem of tools and libraries for deep learning, computer vision, natural language processing, and more. Here's how to get started with PyTorch.

PyTorch is an open source machine learning framework used for both research prototyping and production deployment. According to its source code repository, PyTorch provides two high-level features (both illustrated in the sketch below):

  • Tensor computation (like NumPy) with strong GPU acceleration.
  • Deep neural networks built on a tape-based autograd system.
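
As a minimal sketch of both features (assuming only that PyTorch is installed; the GPU branch runs only if CUDA is available):

    import torch

    # Tensor computation (like NumPy), with optional GPU acceleration.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(3, 3, device=device)
    b = torch.randn(3, 3, device=device)
    c = a @ b  # matrix multiply runs on the GPU if one is available

    # Tape-based autograd: operations on tensors with requires_grad=True
    # are recorded, and backward() replays the tape to compute gradients.
    x = torch.ones(2, 2, requires_grad=True)
    y = (x * x + 3 * x).sum()
    y.backward()
    print(x.grad)  # dy/dx = 2x + 3, so every entry is 5.0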

Originally developed at Idiap Research Institute, NYU, NEC Laboratories America, Facebook, and DeepMind Technologies, with input from the Torch and Caffe2 projects, PyTorch now has a thriving open source community. PyTorch 1.10, released in October 2021, has commits from 426 contributors, and the repository currently has 54,000 stars.

This article is an overview of PyTorch, including new features in PyTorch 1.10 and a brief guide to getting started with PyTorch. I've previously reviewed PyTorch 1.0.1 and compared TensorFlow and PyTorch. I suggest reading the review for an in-depth discussion of PyTorch's architecture and how the library works.

The evolution of PyTorch

Early on, academics and researchers were drawn to PyTorch because it was easier to use than TensorFlow for model development with graphics processing units (GPUs). PyTorch defaults to eager execution mode, meaning that its API calls execute when invoked, rather than being added to a graph to be run later. TensorFlow has since improved its support for eager execution mode, but PyTorch is still popular in the academic and research communities. 
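
A minimal illustration of what eager execution means in practice, where each statement runs and returns a concrete tensor immediately:

    import torch

    # Eager mode: each line executes as it is invoked, so intermediate
    # values can be inspected with ordinary Python tools.
    x = torch.tensor([1.0, 2.0, 3.0])
    y = x * 2
    print(y)        # tensor([2., 4., 6.]) -- no session or deferred graph
    print(y.shape)  # torch.Size([3])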

At this point, PyTorch is production ready, allowing you to transition easily between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe. The torch.distributed back end enables scalable distributed training and performance optimization in research and production, and a rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, natural language processing, and more.
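
Here's a minimal sketch of the eager-to-graph transition with TorchScript (scaled_add is a made-up example function, not a PyTorch API):

    import torch

    def scaled_add(x, y):
        return x * 2 + y

    # torch.jit.script compiles the eager Python function into a
    # TorchScript graph that can be serialized and run without Python,
    # e.g. behind a TorchServe deployment.
    scripted = torch.jit.script(scaled_add)
    print(scripted.code)  # the generated TorchScript source
    print(scripted(torch.ones(3), torch.zeros(3)))  # tensor([2., 2., 2.])
    scripted.save("scaled_add.pt")  # a deployable artifact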

Finally, PyTorch is well supported on major cloud platforms, including Alibaba, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Cloud support provides frictionless development and easy scaling.

What's new in PyTorch 1.10

According to the PyTorch blog, PyTorch 1.10 updates focused on improving training and performance as well as developer usability. See the PyTorch 1.10 release notes for details. Here are a few highlights of this release:

  1. CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads.
  2. Several front-end APIs such as FX, torch.special, and nn.Module parametrization were moved from beta to stable. FX is a Pythonic platform for transforming PyTorch programs (see the sketch after this list); torch.special implements special functions such as gamma and Bessel functions.
  3. A new LLVM-based JIT compiler supports automatic fusion on CPUs as well as GPUs. The LLVM-based JIT compiler can fuse together sequences of torch library calls to improve performance.
  4. Android NNAPI support is now available in beta. NNAPI (Android's Neural Networks API) allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including GPUs and specialized neural processing units (NPUs).
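
As promised above, here's a quick sketch of the now-stable FX and torch.special APIs (the AddRelu module is a made-up example):

    import torch
    import torch.fx

    class AddRelu(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x + 1.0)

    # torch.fx symbolically traces a module into a Graph you can inspect
    # and transform, then regenerates valid Python code from it.
    traced = torch.fx.symbolic_trace(AddRelu())
    print(traced.graph)  # the captured intermediate representation
    print(traced.code)   # the regenerated forward() source

    # torch.special mirrors SciPy's scipy.special:
    print(torch.special.gammaln(torch.tensor([0.5, 1.0, 5.0])))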

The PyTorch 1.10 release included over 3,400 commits, indicating a project that is active and focused on improving performance through a variety of methods. 

How to get started with PyTorch

Reading the version update release notes won't tell you much if you don't understand the basics of the project or how to get started using it, so let's fill that in.

The PyTorch tutorial page offers two tracks: one for those familiar with other deep learning frameworks and one for newbs. If you need the newb track, which introduces tensors, datasets, autograd, and other important concepts, I suggest that you follow it and use the Run in Microsoft Learn option, as shown in Figure 1.

Figure 1. The "newb" track for learning PyTorch.

If you're already familiar with deep learning concepts, then I suggest running the quickstart notebook shown in Figure 2. You can also click on Run in Microsoft Learn or Run in Google Colab, or you can run the notebook locally. 

Figure 2. The advanced (quickstart) track for learning PyTorch.
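
If you'd rather preview the quickstart before opening a notebook, here's a condensed sketch of the kind of code it walks through (assuming torchvision is installed; the hyperparameters are illustrative):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor

    # Download FashionMNIST, the dataset used in the official quickstart.
    train_data = datasets.FashionMNIST(root="data", train=True,
                                       download=True, transform=ToTensor())
    loader = DataLoader(train_data, batch_size=64, shuffle=True)

    # A small fully connected classifier for 28x28 grayscale images.
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(28 * 28, 512), nn.ReLU(),
                          nn.Linear(512, 10))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # One epoch of the canonical PyTorch training loop.
    for X, y in loader:
        pred = model(X)
        loss = loss_fn(pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()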

PyTorch projects to watch

As shown on the left side of the screenshot in Figure 2, PyTorch has lots of recipes and tutorials. It also has numerous models and examples of how to use them, usually as notebooks. Three projects in the PyTorch ecosystem strike me as particularly interesting: Captum, PyTorch Geometric (PyG), and skorch.

Captum

As noted on this project's GitHub repository, the word captum means comprehension in Latin. As described on the repository page and elsewhere, Captum is "a model interpretability library for PyTorch." It contains a variety of gradient and perturbation-based attribution algorithms that can be used to interpret and understand PyTorch models. It also has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others. 
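
As a minimal sketch of the Captum API, here's Integrated Gradients applied to a toy model (the model and input are made up for illustration; Captum is installed separately, e.g. with pip install captum):

    import torch
    from captum.attr import IntegratedGradients

    # A toy two-layer model; Captum works with any PyTorch nn.Module.
    model = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.ReLU(),
                                torch.nn.Linear(3, 2))
    model.eval()

    # Integrated Gradients attributes the prediction for a target class
    # back to the individual input features.
    ig = IntegratedGradients(model)
    inputs = torch.rand(1, 4, requires_grad=True)
    attributions, delta = ig.attribute(inputs, target=0,
                                       return_convergence_delta=True)
    print(attributions)  # per-feature contribution scores
    print(delta)         # approximation error of the attribution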

Figure 3 shows all of the attribution algorithms currently supported by Captum.

Figure 3. Captum attribution algorithms in a table format.

PyTorch Geometric (PyG)

PyTorch Geometric (PyG) is a library that data scientists and others can use to write and train graph neural networks for applications related to structured data. As described on its GitHub repository page:

PyG offers methods for deep learning on graphs and other irregular structures, also known as geometric deep learning. In addition, it consists of easy-to-use mini-batch loaders for operating on many small and single giant graphs, multi GPU-support, distributed graph learning via Quiver, a large number of common benchmark datasets (based on simple interfaces to create your own), the GraphGym experiment manager, and helpful transforms, both for learning on arbitrary graphs as well as on 3D meshes or point clouds.
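
To give a flavor of the API, here is a sketch along the lines of PyG's introductory example, using a made-up three-node graph (PyG is installed separately from PyTorch):

    import torch
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv

    # A tiny graph: 3 nodes with 1 feature each, and 2 undirected edges
    # expressed as directed pairs in edge_index.
    edge_index = torch.tensor([[0, 1, 1, 2],
                               [1, 0, 2, 1]], dtype=torch.long)
    x = torch.tensor([[-1.0], [0.0], [1.0]])
    data = Data(x=x, edge_index=edge_index)

    # One graph-convolution layer mapping 1 input feature to 4 channels.
    conv = GCNConv(in_channels=1, out_channels=4)
    out = conv(data.x, data.edge_index)
    print(out.shape)  # torch.Size([3, 4])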

Figure 4 is an overview of PyTorch Geometric's architecture.

Figure 4. The architecture of PyTorch Geometric.

skorch

skorch is a scikit-learn compatible neural network library that wraps PyTorch. The goal of skorch is to make it possible to use PyTorch with sklearn. If you are familiar with sklearn and PyTorch, you don’t have to learn any new concepts, and the syntax should feel familiar. Additionally, skorch abstracts away the training loop, making a lot of boilerplate code obsolete. A simple net.fit(X, y) is enough, as shown in Figure 5.

Figure 5. Defining and training a neural net classifier with skorch.
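
In the spirit of Figure 5 and skorch's README, here's a sketch with made-up random data (skorch is installed separately, e.g. with pip install skorch):

    import numpy as np
    import torch
    from skorch import NeuralNetClassifier

    class MyModule(torch.nn.Module):
        def __init__(self, num_units=10):
            super().__init__()
            self.dense = torch.nn.Linear(20, num_units)
            self.output = torch.nn.Linear(num_units, 2)

        def forward(self, X):
            X = torch.relu(self.dense(X))
            return torch.softmax(self.output(X), dim=-1)

    # skorch wraps the module in an sklearn-style estimator, and
    # net.fit() runs the whole training loop for you.
    net = NeuralNetClassifier(MyModule, max_epochs=10, lr=0.1)

    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, 100).astype(np.int64)
    net.fit(X, y)
    print(net.predict(X[:5]))  # sklearn-style predictions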

Conclusion

Overall, PyTorch is one of a handful of top-tier frameworks for deep neural networks with GPU support. You can use it for model development and production, you can run it on-premises or in the cloud, and you can find many pre-built PyTorch models to use as a starting point for your own models. 

