USER_JOB_WRAPPER = /path/to/whole_machine_job_wrapper
 {endcode}
 
+{subsection: "Sticky" Whole-machine slots}
+
+When a whole-machine job is about to start running, it must first wait for all single-core jobs on the machine to finish.  If this "draining" of the single-core slots takes a long time, and some slots finish long before others, the machine sits partially idle.  Therefore, to maximize throughput, it is desirable to drain as infrequently as possible.  One way to achieve this is to allow new whole-machine jobs to start running as soon as the previous whole-machine job finishes, without giving single-core jobs an opportunity to start in the same scheduling cycle.  The trade-off in making the whole-machine slots "sticky" is that single-core jobs may be starved for resources if whole-machine jobs keep the system in whole-machine mode.  This can be addressed by limiting the total number of slots that support whole-machine mode, as in the sketch below.
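+
+A simple way to impose such a limit is to advertise whole-machine capability on only a subset of machines.  The following sketch assumes that, as in the whole-machine policy described earlier in this HOWTO, machines advertise a CAN_RUN_WHOLE_MACHINE attribute and whole-machine jobs require it in their requirements; whole-machine jobs will then never match a machine configured this way.
+
+{code}
+# Sketch: opt this machine out of whole-machine mode, so it serves
+# only single-core jobs.  Assumes whole-machine jobs include
+# CAN_RUN_WHOLE_MACHINE in their requirements, as described earlier
+# in this policy.
+CAN_RUN_WHOLE_MACHINE = False
+{endcode}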
+
+Condor automatically starts new jobs from the same user as long as CLAIM_WORKLIFE has not expired for the user's claim to the slot.  Therefore, the stickiness of the whole-machine slot for a single user is most easily controlled with CLAIM_WORKLIFE, as in the following sketch (the one-hour value is an arbitrary example):
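+
+{code}
+# Sketch: allow a claim to be reused for new jobs for up to one hour
+# before renegotiation is required, so consecutive whole-machine jobs
+# from the same user keep the slot.  The one-hour value is
+# illustrative only; tune it for your pool.
+CLAIM_WORKLIFE = 3600
+{endcode}
+
+To achieve stickiness across multiple whole-machine users, a different approach is required.  The following policy achieves this by preventing single-core jobs from starting for 10 minutes after the whole-machine slot changes state.  Therefore, when a whole-machine job finishes, there should be ample time for new whole-machine jobs to match to the slot before any single-core jobs are allowed to start.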
+
+{code}
+# advertise when slots make state transitions as SlotX_EnteredCurrentState
+STARTD_SLOT_EXPRS = $(STARTD_SLOT_EXPRS) EnteredCurrentState
+
+# Macro for referencing EnteredCurrentState of the whole-machine slot.
+# Relies on eval(), which was added in Condor 7.3.2.
+WHOLE_MACHINE_SLOT_ENTERED_CURRENT_STATE = \
+  eval(strcat("Slot",$(WHOLE_MACHINE_SLOT),"_EnteredCurrentState"))
+
+# The following expression uses LastHeardFrom rather than CurrentTime
+# because the former is stable throughout a matchmaking cycle, whereas
+# the latter changes from moment to moment and therefore leads to
+# unexpected behavior.
+START_SINGLE_CORE_JOB = $(START_SINGLE_CORE_JOB) && \
+  ( isUndefined($(WHOLE_MACHINE_SLOT_ENTERED_CURRENT_STATE)) || \
+    isUndefined(MY.LastHeardFrom) || \
+    MY.LastHeardFrom-$(WHOLE_MACHINE_SLOT_ENTERED_CURRENT_STATE)>600 )
+{endcode}
+
 {subsection: Accounting and Monitoring}
 
 The above policies rely on job suspension.  Should the jobs be "charged" for the time they spend in a suspended state?  This affects the user's fair-share priority and the accumulated number of hours reported by condor_userprio.  As of Condor 7.4, the default behavior is to charge jobs for the time they spend in a suspended state.  There is a configuration variable, NEGOTIATOR_DISCOUNT_SUSPENDED_RESOURCES, that can be used to get the opposite behavior, as in the following sketch:
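 
 {code}
 # Do not charge jobs for wall-clock time spent suspended when
 # accumulating usage for fair-share.  The default (False) charges
 # jobs for suspended time.
 NEGOTIATOR_DISCOUNT_SUSPENDED_RESOURCES = True
 {endcode}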