{section: How to allow some jobs to claim the whole machine instead of one slot}
 
-Known to work with Condor version: 7.4
+Known to work with HTCondor version: 7.4
 
 The simplest way to achieve this is as follows:
 
@@ -23,7 +23,7 @@
 Requirements = CAN_RUN_WHOLE_MACHINE
 {endcode}
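 
 For quick reference, a complete submit description built around this requirement might look like the sketch below.  This is illustrative only: =my_program= and the file names are placeholders, and any additional job attributes defined earlier in this recipe (for example, an attribute flagging the job as a whole-machine job) still need to be included.
 
 {code}
 # Sketch of a whole-machine submit description; names are placeholders.
 universe = vanilla
 executable = my_program
 output = my_program.out
 error = my_program.err
 log = my_program.log
 Requirements = CAN_RUN_WHOLE_MACHINE
 queue
 {endcode}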
 
-Then put the following in your Condor configuration file.  Make sure it either comes after the attributes to which this policy appends (such as START) or that you merge the definitions together.
+Then put the following in your HTCondor configuration file.  Make sure it either comes after the attributes to which this policy appends (such as START) or is merged with your existing definitions.
 
 {code}
 
@@ -52,7 +52,7 @@
 STARTD_SLOT_EXPRS = $(STARTD_SLOT_EXPRS) State
 
 # Macro for referencing the state of the whole-machine slot.
-# Relies on eval(), which was added in Condor 7.3.2.
+# Relies on eval(), which was added in HTCondor 7.3.2.
 WHOLE_MACHINE_SLOT_STATE = \
   eval(strcat("Slot",$(WHOLE_MACHINE_SLOT),"_State"))
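 # For example, if $(WHOLE_MACHINE_SLOT) were 9 (an illustrative value),
 # the macro above would reference Slot9_State, i.e. the whole-machine
 # slot's State as published in each slot's ad by the STARTD_SLOT_EXPRS
 # setting above.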
 
@@ -192,14 +192,14 @@
 
 When a whole-machine job is about to start running, it must first wait for all single-core jobs to finish.  If this "draining" of the single-core slots takes a long time, and some slots finish long before others, the machine is not efficiently utilized.  Therefore, to maximize throughput, it is desirable to avoid frequent need for draining.  One way to achieve this is to allow new whole-machine jobs to start running as soon as the previous whole-machine job finishes, without giving single-core jobs an opportunity to start in the same scheduling cycle.  The trade-off in making the whole-machine slots "sticky" is that single-core jobs may get starved for resources if whole-machine jobs keep the system in whole-machine mode.  This can be addressed by limiting the total number of slots that support whole-machine mode.
 
-Condor automatically starts new jobs from the same user as long as CLAIM_WORKLIFE has not expired for the user's claim to the slot.  Therefore, the stickiness of the whole-machine slot for a single user can be controlled most easily with CLAIM_WORKLIFE.  To achieve stickiness across multiple whole-machine users, a different approach is required.  The following policy achieves this by preventing single-core jobs from starting for 10 minutes after the whole-machine slot changes state.  Therefore, when a whole-machine job finishes, there should be ample time for new whole-machine jobs to match to the slot before any single-core jobs are allowed to start.
+HTCondor automatically starts new jobs from the same user as long as CLAIM_WORKLIFE has not expired for the user's claim to the slot.  Therefore, the stickiness of the whole-machine slot for a single user can be controlled most easily with CLAIM_WORKLIFE.  To achieve stickiness across multiple whole-machine users, a different approach is required.  The following policy achieves this by preventing single-core jobs from starting for 10 minutes after the whole-machine slot changes state.  Therefore, when a whole-machine job finishes, there should be ample time for new whole-machine jobs to match to the slot before any single-core jobs are allowed to start.
 
 {code}
 # advertise the time each slot entered its current state as SlotX_EnteredCurrentState
 STARTD_SLOT_EXPRS = $(STARTD_SLOT_EXPRS) EnteredCurrentState
 
 # Macro for referencing EnteredCurrentState of the whole-machine slot.
-# Relies on eval(), which was added in Condor 7.3.2.
+# Relies on eval(), which was added in HTCondor 7.3.2.
 WHOLE_MACHINE_SLOT_ENTERED_CURRENT_STATE = \
   eval(strcat("Slot",$(WHOLE_MACHINE_SLOT),"_EnteredCurrentState"))
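 # Illustrative sketch only (the full policy below contains the actual
 # expression): single-core slots can use this macro to refuse new jobs
 # until 10 minutes after the whole-machine slot last changed state, e.g.
 #   (CurrentTime - $(WHOLE_MACHINE_SLOT_ENTERED_CURRENT_STATE)) > 600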
 
@@ -215,7 +215,7 @@
 
 {subsection: Accounting and Monitoring}
 
-The above policies rely on job suspension.  Should the jobs be "charged" for the time they spend in a suspended state?  This affects the user's fair-share priority and the accumulated number of hours reported by condor_userprio.  As of Condor 7.4, the default behavior is to charge the jobs for time they spend in a suspended state.  There is a configuration variable, NEGOTIATOR_DISCOUNT_SUSPENDED_RESOURCES that can be used to get the opposite behavior.
+The above policies rely on job suspension.  Should the jobs be "charged" for the time they spend in a suspended state?  This affects the user's fair-share priority and the accumulated number of hours reported by condor_userprio.  As of HTCondor 7.4, the default behavior is to charge jobs for the time they spend in a suspended state.  There is a configuration variable, NEGOTIATOR_DISCOUNT_SUSPENDED_RESOURCES, that can be used to get the opposite behavior.
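 
 For example, to stop charging jobs for the time their resources spend suspended, the variable can be set as follows (a sketch; check the manual for your version for the exact semantics):
 
 {code}
 # Do not charge jobs for time spent in the suspended state.
 NEGOTIATOR_DISCOUNT_SUSPENDED_RESOURCES = True
 {endcode}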
 
 Should the whole-machine slot charge more than the single-core slots?  The policy for this is determined by =SlotWeight=.  By default, this is equal to the number of cores associated with the slot, so usage reported in condor_userprio will count the whole-machine slot on an 8-core machine as 8 times the usage reported for a single-core slot.
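 
 For example, the default weighting can be written out explicitly as follows (a sketch; the expression can be changed to alter how usage is charged):
 
 {code}
 # Default: weight each slot by its core count, so the whole-machine slot
 # on an 8-core machine accrues 8 times the usage of a single-core slot.
 SlotWeight = Cpus
 {endcode}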