This page contains strategies for managing GPUs for HTCondor versions prior to 8.1.4. If you are using 8.1.4 and later, the strategy described in HowToManageGpus is preferred.

HTCondor can help manage GPUs (graphics processing units) in your pool of execute nodes, making them available to jobs that use them through an API such as OpenCL or CUDA.

HTCondor matches execute nodes (described by ClassAds) to jobs (also described by ClassAds). The general technique to manage GPUs is:

  1. Advertise the GPU: Configure HTCondor so that execute nodes include information about available GPUs in their ClassAd.
  2. Require a GPU: Jobs modify their Requirements to require a suitable GPU.
  3. Identify the GPU: Jobs use their arguments or environment to learn which GPU they may use.

This technique builds on the techniques in How to reserve a slot or machine for special jobs.

Advertising the GPU

A key challenge of advertising GPUs is that a GPU can only be used by one job at a time. If an execute node has multiple slots (a likely case!), you'll want each GPU to be advertised by only a single slot.

You have several options for advertising your GPUs. In increasing order of complexity they are:

  1. Static configuration
  2. Automatic configuration
  3. Dynamic advertising

This progression may be a useful way to do initial setup and testing. Start with a static configuration to ensure everything works. Move to an automatic configuration to develop and test partial automation. Finally, a few small changes should make it possible to turn your automatic configuration into dynamic advertising.

Static configuration

If you have a small number of nodes, or perhaps a large number of identical nodes, you can add static attributes manually using STARTD_ATTRS on a per-slot basis. In the simplest case, it might just be:

SLOT1_HAS_GPU=TRUE
SLOT1_GPU_DEV=0
STARTD_ATTRS=HAS_GPU,GPU_DEV

This limits the GPU to being advertised by the first slot only. A job can use HAS_GPU to identify slots with available GPUs, and GPU_DEV to identify which GPU device to use. (A job could instead use the presence of GPU_DEV to identify slots with GPUs, but "HAS_GPU" is a bit easier to read than "(GPU_DEV =!= UNDEFINED)".)

If you have two GPUs, you might give the first two slots a GPU each:

SLOT1_HAS_GPU=TRUE
SLOT1_GPU_DEV=0
SLOT2_HAS_GPU=TRUE
SLOT2_GPU_DEV=1
STARTD_ATTRS=HAS_GPU,GPU_DEV

You can also provide more information about your GPUs so that a job can distinguish between different GPUs:

SLOT1_GPU_CUDA_DRV=3.20
SLOT1_GPU_CUDA_RUN=3.20
SLOT1_GPU_DEV=0
SLOT1_GPU_NAME="Tesla C2050"
SLOT1_GPU_CAPABILITY=2.0
SLOT1_GPU_GLOBALMEM_MB=2687
SLOT1_GPU_MULTIPROC=14
SLOT1_GPU_NUMCORES=32
SLOT1_GPU_CLOCK_GHZ=1.15
SLOT1_GPU_API="CUDA"
STARTD_ATTRS = GPU_DEV, GPU_NAME, GPU_CAPABILITY, GPU_GLOBALMEM_MB, \
  GPU_MULTIPROC, GPU_NUMCORES, GPU_CLOCK_GHZ, GPU_CUDA_DRV, \
  GPU_CUDA_RUN, GPU_API

(The above is from Carsten Aulbert's post "RFC: Adding GPUs into Condor".)

Automatic configuration

You can write a program that writes your configuration file. This still uses STARTD_ATTRS, but potentially scales better for mixed pools. For an extended example, see Carsten Aulbert's post "RFC: Adding GPUs into Condor", in which he does exactly this.
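As a rough illustration, here is a minimal sketch of such a generator in Python, assuming nvidia-smi is available on the node; the script name, the output path, and the GPU-N-goes-to-slot-N+1 assignment are all assumptions to adapt to your pool.

#!/usr/bin/env python
# make-gpu-config (hypothetical): probe GPUs with nvidia-smi and emit
# per-slot STARTD_ATTRS settings like the static examples above.
import subprocess

def probe_gpus():
    # One line per device: "<name>, <total memory in MB>"
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"])
    gpus = []
    for line in out.decode().strip().splitlines():
        name, mem_mb = [field.strip() for field in line.split(",")]
        gpus.append((name, int(mem_mb)))
    return gpus

def write_config(path):
    gpus = probe_gpus()
    with open(path, "w") as f:
        for dev, (name, mem_mb) in enumerate(gpus):
            slot = dev + 1  # advertise GPU N on slot N+1
            f.write("SLOT%d_HAS_GPU = TRUE\n" % slot)
            f.write("SLOT%d_GPU_DEV = %d\n" % (slot, dev))
            f.write('SLOT%d_GPU_NAME = "%s"\n' % (slot, name))
            f.write("SLOT%d_GPU_GLOBALMEM_MB = %d\n" % (slot, mem_mb))
        f.write("STARTD_ATTRS = $(STARTD_ATTRS) "
                "HAS_GPU, GPU_DEV, GPU_NAME, GPU_GLOBALMEM_MB\n")

if __name__ == "__main__":
    # Assumed path; point it at a file your condor_config includes.
    write_config("/etc/condor/config.d/99-gpus.conf")

Run the generator when provisioning the node (or at boot), make sure condor_config includes the generated file, and rerun it followed by condor_reconfig whenever the hardware changes.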

Dynamic advertising

One step beyond automatic configuration is dynamic configuration. Instead of generating a static configuration, HTCondor itself can run your program and incorporate the information it reports. This is HTCondor's "Daemon ClassAd Hooks" functionality, previously known as HawkEye and HTCondor Cron. This is the route taken by the condorgpu project. (Note that the condorgpu project has no affiliation with HTCondor. We have not tested or reviewed that code and cannot promise anything about it!)

Such a configuration might look something like this, assuming that each machine has at most two GPUs:

STARTD_CRON_JOBLIST = $(STARTD_CRON_JOBLIST) GPUINFO1
STARTD_CRON_GPUINFO1_MODE = OneShot
STARTD_CRON_GPUINFO1_RECONFIG_RERUN = FALSE
STARTD_CRON_GPUINFO1_PREFIX = GPU_
STARTD_CRON_GPUINFO1_EXECUTABLE = $(MODULES)/get-gpu-info
# which device should get-gpu-info probe?
STARTD_CRON_GPUINFO1_PARAM0 = 0
STARTD_CRON_GPUINFO1_SLOTS = 1

STARTD_CRON_JOBLIST = $(STARTD_CRON_JOBLIST) GPUINFO2
STARTD_CRON_GPUINFO2_MODE = OneShot
STARTD_CRON_GPUINFO2_RECONFIG_RERUN = FALSE
STARTD_CRON_GPUINFO2_PREFIX = GPU_
STARTD_CRON_GPUINFO2_EXECUTABLE = $(MODULES)/get-gpu-info
# which device should get-gpu-info probe?
STARTD_CRON_GPUINFO2_PARAM0 = 1
STARTD_CRON_GPUINFO2_SLOTS = 2

$(MODULES)/get-gpu-info will be invoked twice, once for each of the two possible GPUs. (You can support more by copying the above entries and increasing the integers. #2196, if implemented, may allow for a simpler configuration.) get-gpu-info will be passed the device ID to probe (0 or 1). Its output should be a ClassAd; GPU_ will be prepended to each attribute name, and the attributes will then be added to the slot ClassAds for slots 1 and 2.

get-gpu-info should write something like the following to its standard output:

CUDA_DRV=3.20
CUDA_RUN=3.20
DEV=0
NAME="Tesla C2050"
CAPABILITY=2.0
GLOBALMEM_MB=2687
MULTIPROC=14
NUMCORES=32
CLOCK_GHZ=1.15
API="CUDA"

Carsten Aulbert's post "RFC: Adding GPUs into Condor" includes a program that might make a good starting point for writing output like the above.
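If you would rather start from scratch, here is a minimal sketch of a get-gpu-info style probe, again assuming nvidia-smi is available; filling in attributes like CAPABILITY, MULTIPROC, and NUMCORES would require a CUDA-based probe instead.

#!/usr/bin/env python
# get-gpu-info (sketch): print ClassAd attributes for one GPU.
# The device ID to probe arrives as the first argument (see PARAM0 above).
import subprocess
import sys

dev = int(sys.argv[1]) if len(sys.argv) > 1 else 0
out = subprocess.check_output(
    ["nvidia-smi", "-i", str(dev),
     "--query-gpu=name,memory.total,clocks.max.sm",
     "--format=csv,noheader,nounits"])
name, mem_mb, clock_mhz = [field.strip() for field in out.decode().split(",")]

print('DEV=%d' % dev)
print('NAME="%s"' % name)
print('GLOBALMEM_MB=%d' % int(mem_mb))
print('CLOCK_GHZ=%.2f' % (int(clock_mhz) / 1000.0))
print('API="CUDA"')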

Require a GPU

User jobs that require a GPU must specify this requirement. In a job's submit file, it might do something as simple as

Requirements=HAS_GPU

or as complex as

Requirements=HAS_GPU \
    && (GPU_API == "CUDA") \
    && (GPU_NUMCORES >= 16) \
    && regexp("Tesla", GPU_NAME, "i")

specifying that the job requires the CUDA GPU API (as opposed to OpenCL or another), a GPU with at least 16 cores, and a GPU whose name contains "Tesla" (matched case-insensitively).

Identify the GPU

Once a job is matched to a slot, it needs to know which GPU to use if multiple are present. Assuming the slot advertised the information, the job can access it through its arguments or environment using the $$() syntax. For example, if your job takes an argument "--device=X", where X is the device to use, you might do something like:

arguments = "--device=$$(GPU_DEV)"

Or your job might look to the environment variable GPU_DEVICE_ID:

environment = "GPU_DEVICE_ID=$$(GPU_DEV)"
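Putting the pieces together, a complete submit file might look something like the following sketch (the executable and file names are placeholders):

universe     = vanilla
executable   = gpu_job
# Match only slots that advertise a CUDA GPU
requirements = HAS_GPU && (GPU_API == "CUDA")
# Pass the matched slot's device ID on the command line and in the environment
arguments    = "--device=$$(GPU_DEV)"
environment  = "GPU_DEVICE_ID=$$(GPU_DEV)"
log          = gpu_job.log
output       = gpu_job.out
error        = gpu_job.err
queue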

The Future

The HTCondor team is working on various improvements in how HTCondor can manage GPUs. We're interested in how you are currently using GPUs in your cluster and how you plan to use them. If you have thoughts or questions, you can post to the public condor-users mailing list, or contact us directly.