{code}
 +WantParallelSchedulingGroups = True
 {endcode}
-For more info on parallel scheduling groups, see section 3.12.9.4, Condor's Dedicated Scheduling section in the Condor Manual.
+For more info on parallel scheduling groups, see section 3.12.9.4, the Dedicated Scheduling section of the HTCondor Manual.
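+
+For context, here is a minimal sketch of a parallel universe submit file that uses this attribute; the executable, arguments, and machine count are placeholders, not part of the original recipe:
+
+{code}
+## Hypothetical submit file: request 4 parallel nodes and ask the
+## dedicated scheduler to keep them within one scheduling group.
+## The leading + on the WantParallelSchedulingGroups line adds the
+## attribute to the job ClassAd.
+universe = parallel
+executable = /bin/sleep
+arguments = 60
+machine_count = 4
++WantParallelSchedulingGroups = True
+queue
+{endcode}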
 
-If you don't wish to require packing all nodes of a parallel job onto the same physical machine, you could tell Condor to attempt to use the least number of physical machines to run a parallel job as follows:
+If you don't want to require that all nodes of a parallel job be packed onto the same physical machine, you can instead tell HTCondor to try to use the fewest physical machines to run a parallel job, as follows:
 
 {code}
 ##  The NEGOTIATOR_PRE_JOB_RANK expression overrides all other ranks
@@ -35,4 +35,4 @@
 
 The above example assumes you have a globally unique ordering of your physical machines, stored as an integer in the machine ad attribute {quote: MachineId}. If you cannot easily add such a global ordering, perhaps you could generate one via a ClassAd regex on the IP address or some such. The above examples could also be improved by having non-parallel universe jobs prefer the lowest-numbered {quote: MachineId} machines.
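+
+If your machines do not already carry such an attribute, here is a hypothetical sketch of one way to wire it up; the attribute name {quote: MachineId}, the literal value, and the exact rank expression below are assumptions for illustration, not the elided example above:
+
+{code}
+##  Per-machine config: assign each physical machine a unique integer.
+##  MachineId is not a built-in attribute, so give every machine a
+##  different value (e.g. stamped out by your config management tool).
+MachineId = 1
+##  Publish the attribute in the machine ClassAd so that rank
+##  expressions can reference it.
+STARTD_ATTRS = $(STARTD_ATTRS) MachineId
+
+##  One plausible shape for the negotiator rank: parallel universe
+##  jobs (JobUniverse == 11) prefer the lowest-numbered machines, so
+##  they fill machines in MachineId order rather than spreading out.
+##  For all other jobs the expression evaluates to 0.
+NEGOTIATOR_PRE_JOB_RANK = (TARGET.JobUniverse =?= 11) * (100000 - MachineId)
+{endcode}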
 
-BTW, if you come up with a policy that combines both parallel scheduling groups AND job ranks, be aware - I don't think that using both of these mechanisms at the same time worked until Condor v7.5.6.
+BTW, if you come up with a policy that combines both parallel scheduling groups AND job ranks, be aware that, as far as I know, using both of these mechanisms at the same time did not work until HTCondor v7.5.6.