Define =GPUs= as a custom resource by adding the following definitions to the configuration of the execute node.
 
-    MACHINE_RESOURCE_GPUs = $(LIBEXEC)/condor_gpu_discovery -properties
+    MACHINE_RESOURCE_INVENTORY_GPUs = $(LIBEXEC)/condor_gpu_discovery -properties
     ENVIRONMENT_FOR_AssignedGPUs = CUDA_VISIBLE_DEVICES, GPU_DEVICE_ORDINAL
 
 
-=MACHINE_RESOURCE_GPUs= tells HTCondor to run the _condor_gpu_discovery_ tool, and use its output to define a custom resource called =GPUs=.
+=MACHINE_RESOURCE_INVENTORY_GPUs= tells HTCondor to run the _condor_gpu_discovery_ tool, and use its output to define a custom resource called =GPUs=.
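
For reference, _condor_gpu_discovery_ with the =-properties= option prints machine ClassAd attribute assignments on its standard output; HTCondor folds these into the slot ads for the =GPUs= resource. The exact attributes and values vary with the HTCondor version and the installed hardware, so the following is only an illustrative sketch of the output's shape, not output from a real machine:

    DetectedGPUs="CUDA0, CUDA1"
    CUDACapability=3.0
    CUDADeviceName="GeForce GTX 690"
    CUDAGlobalMemoryMb=2048

The =DetectedGPUs= attribute is what names the individual device instances; the remaining property attributes let jobs match against device characteristics in their requirements expressions.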
 
 =ENVIRONMENT_FOR_AssignedGPUs= tells HTCondor to publish the value of the slot's machine ClassAd attribute =AssignedGPUs= into the job's environment via the environment variables =CUDA_VISIBLE_DEVICES= and =GPU_DEVICE_ORDINAL=. If you know for certain that your devices will be CUDA, you can omit =GPU_DEVICE_ORDINAL= from the configuration above; if you know for certain that your devices are OpenCL only, you can omit =CUDA_VISIBLE_DEVICES=. In addition, =AssignedGPUs= will always be published into the job's environment as =_CONDOR_AssignedGPUs=, so the second line above is not strictly necessary, but it is recommended.
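
From inside a running job, the assigned devices can be recovered from these environment variables. The variable names =CUDA_VISIBLE_DEVICES=, =GPU_DEVICE_ORDINAL=, and =_CONDOR_AssignedGPUs= come from the configuration above; the helper function itself is just an illustrative sketch, not part of HTCondor:

```python
import os

def assigned_gpu_devices(environ=os.environ):
    """Return the list of GPU device identifiers assigned to this job.

    Illustrative helper: prefers CUDA_VISIBLE_DEVICES, then
    GPU_DEVICE_ORDINAL, then falls back to _CONDOR_AssignedGPUs,
    which HTCondor always publishes regardless of configuration.
    """
    raw = (
        environ.get("CUDA_VISIBLE_DEVICES")
        or environ.get("GPU_DEVICE_ORDINAL")
        or environ.get("_CONDOR_AssignedGPUs")
        or ""
    )
    # AssignedGPUs is a comma-separated list, e.g. "CUDA0, CUDA1"
    return [d.strip() for d in raw.split(",") if d.strip()]

# Demo with a hypothetical environment rather than a live slot:
print(assigned_gpu_devices({"_CONDOR_AssignedGPUs": "CUDA0, CUDA1"}))
# → ['CUDA0', 'CUDA1']
```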