GPU Servers

The SCF can help with access to several GPU resources:

  • The SCF operates two GPUs available to all SCF users. You must use the SLURM scheduling software to run any job that uses a GPU; you may want to use an interactive session to develop and test your GPU code. The SLURM documentation also covers monitoring the GPU usage of your job.
    1. an NVIDIA Tesla K20Xm with 6 GB memory and 2688 CUDA cores, hosted on one of the nodes of our Linux cluster (scf-sm20).
    2. an NVIDIA Titan Xp with 12 GB memory on one of our Linux servers (roo).
  • The SCF also operates the following GPUs, which are owned by individual faculty members. These servers can be accessed by submitting jobs to the gpu_jsteinhardt or gpu_yugroup partitions (details below) using the SLURM scheduling software. Note that if you are not a member of the owning lab group, your jobs will run on a preemptible basis (your job can be cancelled at any time by a higher-priority job).
    • Steinhardt lab group (partition gpu_jsteinhardt)
      1. a GPU server with 8 GeForce RTX 2080 Ti GPUs each with 11 GB memory (shadowfax).
      2. a GPU server with 2 Quadro RTX 8000 GPUs each with 48 GB memory (smaug).
      3. a GPU server with 8 A100 GPUs each with 40 GB memory (balrog).
    • Yu lab group (partition gpu_yugroup)
      1. an NVIDIA Tesla K80, a dual-GPU card with two GPUs, each with 12 GB memory and 2496 CUDA cores (hosted on our scf-sm21-gpu server),
      2. an NVIDIA GeForce GTX TITAN X with 12 GB memory (hosted on our merry server),
      3. an NVIDIA Titan Xp with 12 GB memory (hosted on our morgoth server),
      4. an NVIDIA Titan X (Pascal) with 12 GB memory (hosted on our morgoth server)
  • Priority access for all department members to 8 GPUs on the campus Savio cluster is available through the SCF condo, and access to additional GPUs is available through the Savio faculty computing allowance. Please contact SCF staff for more information.
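As a sketch of how a GPU job might be submitted via SLURM (the partition name, resource limits, and script name below are illustrative assumptions; check the SCF SLURM documentation for the exact partition and options to use):

```shell
#!/bin/bash
# job.sh -- hypothetical SLURM submission script requesting one GPU
#SBATCH --partition=gpu        # assumed partition name; the lab-owned servers use
                               # gpu_jsteinhardt or gpu_yugroup instead
#SBATCH --gres=gpu:1           # request one GPU (standard SLURM generic-resource syntax)
#SBATCH --time=01:00:00        # one-hour time limit (adjust as needed)

python my_gpu_code.py          # my_gpu_code.py is a placeholder for your own script
```

Submit the script with `sbatch job.sh`; for interactive development, `srun --partition=gpu --gres=gpu:1 --pty bash` (again with the appropriate partition name) gives a shell on the GPU node.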


We provide the following software that will help you in making use of the GPU:

  • CUDA (version 11.1 (default) and 10.0; other versions can be made available)
  • cuDNN (cuDNN 8.0.5 for CUDA 11.1 (default) and 7.6 (for CUDA 10.0); other versions can be made available)
  • Tensorflow (version 2.4.1 for Python 3.8 and 1.14 for Python 3.7; we can also provide instructions for running Tensorflow through R)
  • Keras (version 2.4.0 for Python 3.8 and 2.3.0 for Python 3.7)
  • PyTorch (version 1.7.0 for Python 3.8 and 1.2.0 for Python 3.7)
  • Theano (version 1.0.4)
  • Caffe (latest version via the BVLC Docker container, with the Python 2.7 interface)
  • PyCUDA (version 2019.1.2)
  • We can install additional software or upgrade current software as needed. 

We use Linux environment modules to manage the use of GPU-based software, as discussed next. Note that you can insert any of these commands in your .bashrc (after the stanza involving ~skel/std.bashrc) so they are always in effect, or invoke them as needed in a script (including a cluster submission script) or in a terminal session.
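For example, using modules named below (place the lines wherever fits your workflow):

```shell
# In your .bashrc (after the ~skel/std.bashrc stanza), so it is always in effect:
module load tensorflow

# Or load modules only where needed, e.g. inside a cluster submission script
# or a terminal session:
module load cuda
module list    # show which modules are currently loaded
```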

For software that uses the GPU (via CUDA) for back-end computations:

  • Tensorflow: invoke "module load tensorflow".
  • PyTorch: simply import torch in Python as with any standard Python package.
  • Theano: invoke "module load theano". You will see a warning about a too-recent version of cuDNN. If this seems to cause problems, let us know.
  • Caffe: contact us for instructions.
  • PyCUDA: invoke "module load pycuda".
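As an illustrative check that the GPU is visible after setting things up (these one-liners assume the tensorflow and torch packages are on your Python path, as described above):

```shell
# Tensorflow: after loading the module, list any GPUs it can see
module load tensorflow
python -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'

# PyTorch needs no module; this prints True when a GPU is usable
python -c 'import torch; print(torch.cuda.is_available())'
```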

To use the software only on the CPU:

  • Tensorflow: simply import tensorflow in Python as with any standard Python package. Note that Tensorflow won't work on arwen, beren, and a few of our other machines with old CPUs, but should work on the cluster nodes as well as on gandalf and radagast among others.
  • PyTorch: simply import torch in Python as with any standard Python package.
  • Theano: do not load the theano module.
  • Caffe: contact us for instructions.
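A generic way to force CPU-only execution of a CUDA-aware package is to hide the GPUs via the CUDA_VISIBLE_DEVICES environment variable, which both Tensorflow and PyTorch honor (the script name here is a placeholder):

```shell
# Hide all GPUs from this one invocation so the library falls back to the CPU
CUDA_VISIBLE_DEVICES="" python my_analysis.py
```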

To program with CUDA and related packages directly, please see this tutorial for more details. You'll need to load CUDA as follows in order to be able to compile and run your code:

  • CUDA: to use CUDA directly in C or another language, invoke "module load cuda".
  • cuDNN: to make use of cuDNN, you need to invoke "module load cudnn".
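A minimal compile-and-run sketch after loading the module (the kernel below is a toy example; nvcc is CUDA's compiler driver):

```shell
module load cuda

# hello.cu: a trivial kernel launch
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();       // launch 4 threads in one block
    cudaDeviceSynchronize(); // wait for the kernel to finish
    return 0;
}
EOF

nvcc hello.cu -o hello
./hello
```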

If you have questions or would like additional GPU-related software installed, please contact us.