-* WORK IN PROGRESS *
-
 Condor can help manage GPUs (graphics processing units) in your pool of execute nodes, making them available to jobs that use an API like {link:http://www.khronos.org/opencl/ OpenCL} or {link:http://www.nvidia.com/object/cuda_home_new.html CUDA}.
 
 Condor matches execute nodes (described by ClassAds) to jobs (also described by ClassAds). The general technique to manage GPUs is:
@@ -55,9 +53,10 @@
 SLOT1_GPU_MULTIPROC=14
 SLOT1_GPU_NUMCORES=32
 SLOT1_GPU_CLOCK_GHZ=1.15
+SLOT1_GPU_API="CUDA"
 STARTD_ATTRS = GPU_DEV, GPU_NAME, GPU_CAPABILITY, GPU_GLOBALMEM_MB, \
   GPU_MULTIPROC, GPU_NUMCORES, GPU_CLOCK_GHZ, GPU_CUDA_DRV, \
-  GPU_CUDA_RUN, GPU_MULTIPROC, GPU_NUMCORES
+  GPU_CUDA_RUN, GPU_API
 {endcode}
 
 (The above is from {link: https://lists.cs.wisc.edu/archive/condor-users/2011-March/msg00121.shtml Carsten Aulbert's post "RFC: Adding GPUs into Condor"}.)
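+
+If a machine has more than one GPU, the same pattern extends with per-slot prefixes, one slot per device.  A sketch for a hypothetical second device (the values here are made up for illustration):
+
+{code}
+SLOT1_GPU_DEV=0
+SLOT2_GPU_DEV=1
+SLOT2_GPU_NAME="Tesla C2050"
+SLOT2_GPU_API="CUDA"
+# ...and so on for the remaining SLOT2_GPU_ attributes
+{endcode}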
@@ -76,12 +75,76 @@
 This is {link: http://www.cs.wisc.edu/condor/manual/v7.6/4_4Hooks.html#sec:daemon-classad-hooks Condor's "Daemon ClassAd Hooks" functionality},
 previously known as HawkEye and Condor Cron.  This is the route taken by the {link: http://sourceforge.net/projects/condorgpu/ condorgpu project}.  (Note that the condorgpu project has no affiliation with Condor.  We have not tested or reviewed that code and cannot promise anything about it!)
 
+Such a configuration might look something like this, assuming that each machine has at most one GPU:
 
-{section: The Future}
+{code}
+STARTD_CRON_JOBLIST = $(STARTD_CRON_JOBLIST) GPUINFO1
+STARTD_CRON_GPUINFO1_MODE = OneShot
+STARTD_CRON_GPUINFO1_RECONFIG_RERUN = FALSE
+STARTD_CRON_GPUINFO1_PREFIX = GPU_
+STARTD_CRON_GPUINFO1_EXECUTABLE = $(MODULES)/get-gpu-info
+STARTD_CRON_GPUINFO1_KILL = True
+# which device should get-gpu-info probe?
+STARTD_CRON_GPUINFO1_ARGS = 0
+STARTD_CRON_GPUINFO1_SLOTS = 1
+{endcode}
+
+This will call $(MODULES)/get-gpu-info with an argument of "0", indicating that it should probe the first GPU device.  Each attribute name in its output will be prefixed with "GPU_", and the attributes will be advertised for slot 1 only.  If a machine might have more GPUs, repeat this section, changing each "1" to "2", then "3", and so on, and changing the "0" to "1", then "2", and so on, as in the sketch below.
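+
+A probe for a second GPU device might look like this (a sketch following the pattern above; not tested):
+
+{code}
+STARTD_CRON_JOBLIST = $(STARTD_CRON_JOBLIST) GPUINFO2
+STARTD_CRON_GPUINFO2_MODE = OneShot
+STARTD_CRON_GPUINFO2_RECONFIG_RERUN = FALSE
+STARTD_CRON_GPUINFO2_PREFIX = GPU_
+STARTD_CRON_GPUINFO2_EXECUTABLE = $(MODULES)/get-gpu-info
+STARTD_CRON_GPUINFO2_KILL = True
+# probe the second GPU device and advertise its attributes on slot 2 only
+STARTD_CRON_GPUINFO2_ARGS = 1
+STARTD_CRON_GPUINFO2_SLOTS = 2
+{endcode}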
+
+get-gpu-info would write something like the following to its standard output:
+
+{code}
+CUDA_DRV=3.20
+CUDA_RUN=3.20
+DEV=0
+NAME="Tesla C2050"
+CAPABILITY=2.0
+GLOBALMEM_MB=2687
+MULTIPROC=14
+NUMCORES=32
+CLOCK_GHZ=1.15
+API="CUDA"
+{endcode}
 
-The Condor team is working on various improvements in how Condor can manage GPUs.  If you have
-TODO: condor-admin/condor-users, link to ticket when made public.
+{link: https://lists.cs.wisc.edu/archive/condor-users/2011-March/msg00121.shtml Carsten Aulbert's post "RFC: Adding GPUs into Condor"} includes a program that might make a good starting point for writing output like the above.
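+
+Once the startd is advertising these attributes, you can check them from the command line with condor_status.  For example (the slot and host names here are placeholders):
+
+{code}
+condor_status -long slot1@gpu-node.example.com | grep ^GPU_
+# or list the machines that advertise a GPU name:
+condor_status -constraint 'GPU_NAME =!= UNDEFINED' -format "%s " Machine -format "%s\n" GPU_NAME
+{endcode}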
 
-{section: Credits}
 
-Several examples were drawn from {link: https://lists.cs.wisc.edu/archive/condor-users/2011-March/msg00121.shtml Carsten Aulbert's post "RFC: Adding GPUs into Condor"} sent to the {link: http://www.cs.wisc.edu/condor/mail-lists/ condor-users} mailing list on March 25th, 2011.
+{section: Require a GPU}
+
+User jobs that require a GPU must specify that requirement.  In a job's submit file, this might be as simple as
+
+{code}
+Requirements=HAS_GPU
+{endcode}
+
+or as complex as
+
+{code}
+Requirements=HAS_GPU \
+    && (GPU_API == "CUDA") \
+    && (GPU_NUMCORES >= 16) \
+    && regexp("Tesla", GPU_NAME, "i")
+{endcode}
+
+specifying that the job requires the CUDA GPU API (as opposed to OpenCL or another), a GPU with at least 16 cores, and a GPU whose name contains "Tesla".
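+
+Requirements are hard constraints: a machine that fails them will never match.  To express a soft preference among the machines that do match, a job can use the rank expression instead.  A sketch (the choice of attribute is arbitrary):
+
+{code}
+Requirements = HAS_GPU && (GPU_API == "CUDA")
+# among matching machines, prefer those whose GPU has more cores
+Rank = GPU_NUMCORES
+{endcode}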
+
+
+
+{section: Identify the GPU}
+
+Once a job matches to a given slot, it needs to know which GPU to use, if multiple are present.  Assuming the slot advertises the information, you can access it through the job's arguments or environment using the $$() syntax.  For example, if your job takes an argument "--device=X", where X is the device to use, you might write
+
+{code}
+arguments = "--device=$$(GPU_DEV)"
+{endcode}
+
+Or your job might read the environment variable GPU_DEVICE_ID:
+
+{code}
+environment = "GPU_DEVICE_ID=$$(GPU_DEV)"
+{endcode}
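+
+Putting the pieces together, a minimal submit file might look like the following sketch (the executable name is made up, and it assumes slots advertise HAS_GPU, GPU_API, and GPU_DEV as above):
+
+{code}
+universe     = vanilla
+executable   = my_gpu_program
+# only match machines that advertise a CUDA-capable GPU
+Requirements = HAS_GPU && (GPU_API == "CUDA")
+# pass the matched slot's device number to the program
+arguments    = "--device=$$(GPU_DEV)"
+output       = gpu_job.out
+error        = gpu_job.err
+log          = gpu_job.log
+queue
+{endcode}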
+
+
+{section: The Future}
+
+The Condor team is working on various improvements in how Condor can manage GPUs.  We're interested in how you are currently using GPUs in your cluster and how you plan on using them.  If you have thoughts or questions, you can post to the public {link: http://www.cs.wisc.edu/condor/mail-lists/ condor-users mailing list}, or {link: http://www.cs.wisc.edu/condor/condor-support/contact us directly at}.