Deep Learning and AI

Introduction to PyTorch with Tutorial

February 1, 2022 • 8 min read


Getting started with PyTorch

When you want to work on deep learning projects, there are a number of popular frameworks to choose from, including TensorFlow and today’s subject: PyTorch.


This framework started life with Facebook’s AI Research (FAIR) team and is now an open source project from the company. It’s a popular tool that many companies use, or have used, for their own work on deep learning and neural networks, including Tesla, Hugging Face, and Salesforce.

It also has its own spin-off called PyTorch Lightning, which makes working with PyTorch even easier as it removes much of the boilerplate as you scale your research.

What is PyTorch?

PyTorch is based on an earlier project called Torch, an open-source machine learning library that ceased active development in 2015. Torch used a scripting language called Lua, which also has many fans, but the reality is that Python is far more popular with data science researchers. Thus it made sense that someone would create a Python-accessible version of Torch.

That was just the beginning, however, and while Torch and PyTorch have shared elements, PyTorch is its own framework and not just a simple Python wrapper over Torch.

The PyTorch project bills its software as serving two primary purposes. The first is as a replacement for NumPy that can take advantage of GPUs and other accelerators. The reason GPU computing is so prized is that a graphics card excels at doing simple calculations in parallel, a key advantage for deep learning projects. For larger projects, a GPU delivers far more throughput than relying on one or more CPUs alone.
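
To make that concrete, here's a minimal sketch (the tensor sizes are arbitrary) of how the same code can run on either the CPU or a GPU, depending on what's available:

import torch

# Use the GPU if PyTorch can see one, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.rand(1000, 1000, device=device)
y = torch.rand(1000, 1000, device=device)
z = x @ y  # the matrix multiply runs on whichever device the tensors live on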

PyTorch also acts as an automatic differentiation library for implementing neural networks. Automatic differentiation is a critical part of deep learning: it computes the gradients of a model's error, or loss, with respect to the model's parameters, which is what allows a network to learn during training.
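
As a rough illustration (the values here are arbitrary), marking a tensor with requires_grad=True tells PyTorch to track the operations performed on it so that gradients can be computed automatically:

import torch

w = torch.tensor([2.0, 3.0], requires_grad=True)  # track operations on w
loss = (w ** 2).sum()   # a toy "loss" built from w
loss.backward()         # autograd computes d(loss)/dw = 2*w
print(w.grad)           # tensor([4., 6.])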

PyTorch also supports dynamic computation graphs as opposed to static ones. Computational graphs are a way to represent what your neural network is doing, or should do. Dynamic graphs are defined on the fly as the code runs, while static graphs must be fully defined before running. There are advantages and disadvantages to both: static graphs are generally easier to optimize, while dynamic graphs allow for greater flexibility.
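
For example, because the graph is built as the code runs, ordinary Python control flow can change what the network computes from one pass to the next. A small sketch (the function and values are purely illustrative):

import torch

def forward(x):
    # The branch taken depends on the data, and the graph is built accordingly
    if x.sum() > 0:
        return x * 2
    return x - 1

out = forward(torch.randn(4, requires_grad=True))
out.sum().backward()  # gradients flow through whichever branch actually ran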

The building blocks of PyTorch, like those of TensorFlow and most machine learning tools, are tensors. The PyTorch tensor is similar to a NumPy array. If you ask PyTorch to create a random tensor with torch.rand(3, 4), the result looks something like this:

tensor([[0.6480, 0.0099, 0.4249, 0.8693],
        [0.4715, 0.3549, 0.5658, 0.6760],
        [0.1889, 0.2194, 0.4347, 0.6053]])

As you can see, there are three rows of four numbers each, i.e. a tensor of shape (3, 4). Tensors are important because they are used to encode the inputs and outputs of a model, as well as the model's parameters.
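
Tensors also convert to and from NumPy arrays with very little ceremony, which is part of what makes PyTorch a natural NumPy replacement. A quick sketch:

import numpy as np
import torch

arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(arr)   # shares memory with the NumPy array
back = t.numpy()            # and converts back just as easily
print(t.shape, t.dtype)     # torch.Size([2, 2]) torch.float64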

Installing PyTorch

    Before you can get started with PyTorch, you need the Python 3 programming language on your system, hence the “Py” in PyTorch.

    If you’re running macOS or Linux (including the Windows Subsystem for Linux), then you most likely already have Python installed. To run Python natively on a Windows system, you’ll need to install it from the Python website.

    To install PyTorch itself the easiest method is to use Anaconda, the popular data science toolkit. Anaconda gives you easy access to an absolute ton of data science tools, from Jupyter Notebooks to any number of frameworks and libraries.


    How you install Anaconda depends on your operating system. If you're using Windows then you install Anaconda using the installer, as detailed on the Anaconda website. Linux users must install a number of dependencies from their command line package manager, and then run a bash script that downloads and installs the program, again as detailed on the Anaconda website. Finally, macOS users can install using the graphical installer.

    Once Anaconda is installed, run the command detailed on PyTorch’s website for your operating system. As of this writing the commands are the following (note that you must run these from an Anaconda prompt, not a regular command line):

    Windows*

    conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

    Linux

    conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

    macOS**

    conda install pytorch torchvision torchaudio -c pytorch

    * If you have trouble installing PyTorch on Windows via Anaconda, start the Anaconda command prompt as an administrator: Start > Anaconda3. Then right-click on "Anaconda Prompt (Anaconda3)," select “More” and then choose "Run as administrator."

    ** macOS binaries don't support CUDA (using NVIDIA's GPUs for processing). If CUDA is something you require then you must install from source on Mac.

    If you don’t want to use Anaconda, then you can install PyTorch via pip using the commands below.

    Windows

    pip3 install torch==1.10.1+cu102 torchvision==0.11.2+cu102 torchaudio===0.10.1+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html

    Linux (including WSL)

    pip3 install torch torchvision torchaudio

     macOS (no CUDA)

    pip3 install torch torchvision torchaudio

    Finally, if you’re not interested in running PyTorch directly on your own hardware, then the easiest option is to run it with Google Colab (Google’s take on Jupyter Notebooks in the cloud) by using the Linux pip3 command in the first cell, like so:

    !pip3 install torch torchvision torchaudio

    Google Colab maintains limits on the amount of computing power that you can use for free. Unfortunately, the company does not publish what those limits are. Colab notebooks will also stop working if they are left idle for too long (notebooks can usually run for up to 12 hours at a time).


    If you do want to use Colab, we'd suggest avoiding it for larger projects where more robust local hardware is preferred, unless you're willing to pay for the Pro version.

    For those times when higher-powered local hardware isn't available to you, such as when you're traveling with an underpowered laptop, Colab can be a quick and easy way to get some work done on small- to medium-sized projects.

    Getting Started with a Sample PyTorch Project

    Before getting started on a test run of PyTorch, it's important to make sure that PyTorch is recognized by your system. The simplest way to do this is to open a Python interactive shell (use the Anaconda prompt if you installed via Anaconda):

    python3

    or

    python

    Then type out the following, hitting Enter after each line:

    import torch
    print(torch.__version__)

    The result should be a version number such as:

    1.10.1
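
    While you're still in the interactive shell, you can also check whether PyTorch can see a CUDA-capable GPU (this prints False on macOS or on a CPU-only install):

    print(torch.cuda.is_available())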

    If that works, let's try some really simple math to make sure everything is functioning properly.

    Like any other Python script, this example starts by importing your tools, just as we did above when we checked that PyTorch was installed.

    import torch

    We’re going to keep this example incredibly simple, so all we'll need is PyTorch itself. As you’ve seen in these examples, we import PyTorch under the name "torch." As you'll remember, PyTorch is based on an earlier machine learning library called Torch, and the import statement uses that simpler name.

    Next, let’s define some tensors, which are like NumPy arrays (basically a grid full of numbers).

    a = torch.rand(3,3) 
    b = torch.rand(3,3)

    This code tells the computer to create two tensors filled with randomly generated numbers. Each tensor is a 3 x 3 grid of numbers, and the two are assigned to the variables "a" and "b."

    Next, we'll do something with them. First, let's print out the tensors.

    print(a)
    print(b)

    Now, let's do some basic math with our tensors.

    print(f'This is what happens when we add the tensors:{a+b}') 
    print(f'This is what happens when we multiply the tensors:{a*b}') 
    print(f'This is what happens when we divide the tensors:{a/b}')
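
    Note that a*b multiplies the tensors element by element. If you want a true matrix multiplication, you can use the @ operator (or torch.matmul) instead:

    print(f'This is what happens when we matrix-multiply the tensors:{a @ b}')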

    If you get a set of results with no errors then you are good to go, and PyTorch is working.

    That's an extremely basic example that uses the most rudimentary features of PyTorch, but the point was to make sure your install is working so you can do more advanced work. From here, you can work on all kinds of projects such as image classifiers, natural language processing, speech recognition systems, and more.
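
    If you want a taste of what comes next, here's a minimal sketch of a small model built with torch.nn (the layer sizes here are arbitrary, just to show the shape of a real PyTorch model):

    import torch
    from torch import nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Linear(4, 8),   # 4 input features -> 8 hidden units
                nn.ReLU(),
                nn.Linear(8, 2),   # 8 hidden units -> 2 outputs
            )

        def forward(self, x):
            return self.layers(x)

    model = TinyNet()
    print(model(torch.rand(1, 4)))  # run a batch of one random input through the model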

    Happy PyTorching!


    Interested in more PyTorch tutorials? Check out the PyTorch website for more great walkthroughs. They also maintain a repository of pre-trained models that you can use in your own research, or to which you can publish your own.

    Feel free to contact us if you have any questions or take a look at our Deep Learning Solutions if you're interested in a workstation or server to run PyTorch on.


    Tags

    pytorch

    getting started with pytorch

    pytorch tutorial

    ai

    machine learning

    ml

    deep learning

    pip

    anaconda


