-Condor can help manage GPUs (graphics processing units) in your pool of execute nodes, making them available to jobs that can use them using an API like {link:http://www.khronos.org/opencl/ OpenCL} or {link:http://www.nvidia.com/object/cuda_home_new.html CUDA}.
+HTCondor can help manage GPUs (graphics processing units) in your pool of execute nodes, making them available to jobs that can use them using an API like {link:http://www.khronos.org/opencl/ OpenCL} or {link:http://www.nvidia.com/object/cuda_home_new.html CUDA}.
 
-Condor matches execute nodes (described by ClassAds) to jobs (also described by ClassAds). The general technique to manage GPUs is:
+HTCondor matches execute nodes (described by ClassAds) to jobs (also described by ClassAds). The general technique to manage GPUs is:
 
-1: Advertise the GPU: Configure Condor so that execute nodes include information about available GPUs in their ClassAd.
+1: Advertise the GPU: Configure HTCondor so that execute nodes include information about available GPUs in their ClassAd.
 2: Require a GPU: Jobs modify their Requirements to require a suitable GPU.
 3: Identify the GPU: Jobs modify their arguments or environment to learn which GPU they may use.
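
As a sketch, the three steps might look like the following. (The =HAS_GPU= attribute and the =GPU_DEVICE_ID= environment variable are illustrative names, not standard HTCondor attributes; =STARTD_ATTRS=, =requirements=, and =environment= are real configuration and submit-file commands.)

{code}
# Step 1 (execute node configuration): advertise a GPU via a custom attribute
HAS_GPU = True
STARTD_ATTRS = $(STARTD_ATTRS) HAS_GPU

# Step 2 (job submit file): require a machine that advertises a GPU
requirements = (HAS_GPU =?= True)

# Step 3 (job submit file): tell the job which device it may use
environment = "GPU_DEVICE_ID=0"
{endcode}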
 
@@ -73,9 +73,9 @@
 
 {subsection:Dynamic advertising}
 
-One step beyond automatic configuration is dynamic configuration.  Instead of a static or automated configuration, Condor itself can run your program and incorporate the information.
-This is {link: http://www.cs.wisc.edu/condor/manual/v7.6/4_4Hooks.html#sec:daemon-classad-hooks Condor's "Daemon ClassAd Hooks" functionality},
-previous known as HawkEye and Condor Cron.  This is the route taken by the {link: http://sourceforge.net/projects/condorgpu/ condorgpu project} (Note that the condorgpu project has no affiliation with Condor.  We have not tested or reviewed that code and cannot promise anything about it!)
+One step beyond automatic configuration is dynamic configuration.  Instead of a static or automated configuration, HTCondor itself can periodically run your program and incorporate its output into the machine's ClassAd.
+This is {link: http://www.cs.wisc.edu/condor/manual/v7.6/4_4Hooks.html#sec:daemon-classad-hooks HTCondor's "Daemon ClassAd Hooks" functionality},
+previously known as HawkEye and HTCondor Cron.  This is the route taken by the {link: http://sourceforge.net/projects/condorgpu/ condorgpu project}.  (Note that the condorgpu project has no affiliation with HTCondor.  We have not tested or reviewed that code and cannot promise anything about it!)
 
 Such a configuration might look something like this, assuming that each machine had at most two GPUs.
 
@@ -116,7 +116,7 @@
 API="CUDA"
 {endcode}
 
-{link: https://lists.cs.wisc.edu/archive/condor-users/2011-March/msg00121.shtml Carsten Aulbert's post "RFC: Adding GPUs into Condor"} includes a program that might make a good starting point for writing output like the above.
+{link: https://lists.cs.wisc.edu/archive/condor-users/2011-March/msg00121.shtml Carsten Aulbert's post "RFC: Adding GPUs into Condor"} includes a program that might make a good starting point for writing output like the above.
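
A minimal probe script for such a hook might look like the sketch below. It assumes =nvidia-smi= is the detection tool, and the attribute names it prints (=GPU_COUNT=, =GPU_API=) are illustrative, matching the sample output above rather than any standard HTCondor convention.

```shell
#!/bin/sh
# Hypothetical GPU-probe script for a Daemon ClassAd Hook (a sketch).
# It prints attribute=value pairs on stdout, which the startd can fold
# into the machine ClassAd.

if command -v nvidia-smi >/dev/null 2>&1; then
    # nvidia-smi prints one line per installed GPU with this query.
    GPU_COUNT=$(nvidia-smi --query-gpu=name --format=csv,noheader 2>/dev/null | wc -l)
else
    # No nvidia-smi on this machine: advertise zero GPUs.
    GPU_COUNT=0
fi

echo "GPU_COUNT=${GPU_COUNT}"
if [ "${GPU_COUNT}" -gt 0 ]; then
    echo "GPU_API=\"CUDA\""
fi
```

On a machine without NVIDIA tools installed, this simply advertises =GPU_COUNT=0=, so the hook is safe to deploy pool-wide.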
 
 
 {section: Require a GPU}
@@ -157,4 +157,4 @@
 
 {section: The Future}
 
-The Condor team is working on various improvements in how Condor can manage GPUs.  We're interested in how you are currently using GPUs in your cluster and how you plan on using them.  If you have thoughts or questions, you can post to the public {link: http://www.cs.wisc.edu/condor/mail-lists/ condor-users mailing list}, or {link: http://www.cs.wisc.edu/condor/condor-support/contact us directly}.
+The HTCondor team is working on various improvements in how HTCondor can manage GPUs.  We're interested in how you are currently using GPUs in your cluster and how you plan on using them.  If you have thoughts or questions, you can post to the public {link: http://www.cs.wisc.edu/condor/mail-lists/ condor-users mailing list}, or {link: http://www.cs.wisc.edu/condor/condor-support/ contact us directly}.