The HTCondor negotiator's matchmaking logic recognizes when a partitionable slot (p-slot) advertises a Consumption Policy (CP).  When a job matches against a p-slot with a CP, the amount of each resource dictated by the policy is deducted from that p-slot, and the p-slot then remains available to match another job.  In other words: *Consumption Policies allow multiple jobs to be matched against a single partitionable slot during a negotiation cycle*.  When the HTCondor startd allocates a claim for a new match, the same Consumption Policy expressions are evaluated to determine the resources subtracted from the partitionable slot (and added to the corresponding new dynamic slot).
 
{subsection: Motivation}
A partitionable slot presents resources that can service multiple jobs (frequently, all the resources available on a particular execute node).  However, the negotiator historically matches only one job to a slot per negotiation cycle -- a single request per p-slot per cycle, regardless of how many requests the slot *might* be able to service.
 
{subsubsection: Current Solutions}
The two existing mechanisms for assigning a p-slot's resources to multiple jobs are:
1: Assign resources over multiple negotiation cycles.
1: Allow a scheduler daemon (schedd) to obtain multiple claims against a p-slot by enabling CLAIM_PARTITIONABLE_LEFTOVERS.  This is also referred to as "scheduler splitting."
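
Scheduler splitting is controlled by a single boolean knob; a minimal configuration sketch (assuming the knob is set in the configuration read by both the schedd and the startd):
{code}
# enable "scheduler splitting": after the negotiator hands the schedd a match
# against a p-slot, the schedd may claim the leftover resources for more jobs
CLAIM_PARTITIONABLE_LEFTOVERS = True
{endcode}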

The chief disadvantage of (1) is that it limits the rate at which a pool's partitionable resources can be allocated, particularly if the interval between negotiation cycles is long.  For example, if each p-slot can support 10 jobs, it will require 10 negotiation cycles to fully allocate each slot's resources to those jobs.  If the negotiation interval is set to 5 minutes, it might take _nearly an hour_ (50 minutes) to fully utilize the pool's resources.

Using mechanism (2) -- scheduler splitting -- substantially increases the potential rate at which p-slot resources can be allocated, because the scheduler can obtain multiple claims from the slot in a faster loop that is not rate-limited by the negotiation interval.

There are limitations to scheduler splitting.  The first is that, unlike the negotiator, the scheduler does not have access to Concurrency Limit accounting, so this method cannot be used when Concurrency Limits are in play.

Additionally, jobs from _other_ schedulers will not have access to any remaining resources until the next negotiation cycle.

Lastly, when accounting group quotas and/or submitter shares are smaller than a p-slot's slot weight, the negotiator will never match that p-slot to those requests, and so the scheduler never gains access to the slot to perform scheduler splitting.  This can result in accounting group starvation.

{subsubsection: Consumption Policies}
Consumption Policies address the limitations described above in the following ways.

*Fast resource allocation:* When the HTCondor negotiator matches a job against a partitionable slot configured with a Consumption Policy, it deducts the resource assets (cpus, memory, etc.) from that p-slot and retains the p-slot in the list of candidate slots.  Therefore, a p-slot can be matched against multiple jobs in the same negotiation cycle.  This allows p-slots to be fully loaded in a single cycle, instead of matching a single job per cycle.  Because this matching happens in the negotiator, it may also be referred to as "negotiator splitting."

*Concurrency Limits:* The negotiator has access to all Concurrency Limit accounting, and so negotiator splitting via Consumption Policies works properly with all Concurrency Limits.
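
As a concrete illustration, a Concurrency Limit is declared in the negotiator's configuration and referenced from job submit files (the limit name and value below are illustrative):
{code}
# negotiator configuration: at most 25 jobs declaring the "db" limit
# may run concurrently, pool-wide
DB_LIMIT = 25
{endcode}
A job opts into a limit by declaring it in its submit description, e.g. =concurrency_limits = db=.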

*Multiple schedulers:* Because the negotiator has access to jobs from all schedulers, Consumption Policies allow a partitionable slot to service jobs from multiple schedulers in a single negotiation cycle.

*Accounting Group Quotas:* The cost of matching a job against a slot is traditionally the value of the SlotWeight expression.  In a scenario where the slot weights of available p-slots are greater than an accounting group's quota, the jobs in that accounting group will be starved.  This kind of scenario becomes increasingly likely in fine-grained accounting group configurations involving many small quotas, or when machines with large amounts of resources and correspondingly large slot weights are in play.

When a p-slot with a Consumption Policy is matched, its match cost is the _change_ in the SlotWeight value from before the match to after.  This means that a match is _only charged for the portion of the p-slot that it actually used_ (as measured by the SlotWeight expression), and so p-slots with large SlotWeight values can generally be used by accounting groups with smaller quotas (and likewise by submitters with smaller fair-share values).

{section: Consumption Policies and Accounting}
As mentioned above, the match cost for a job against a slot with a Consumption Policy is the _change_ in the SlotWeight value from before the match to after.  It is this match cost that is added to the corresponding submitter's usage in the HTCondor accountant.  A useful way to think about accounting with Consumption Policies is that the standard role of SlotWeight is replaced by the _change_ in SlotWeight.
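
A small worked example (values illustrative) makes the arithmetic concrete:
{code}
# p-slot with Cpus = 8 and SLOT_WEIGHT = Cpus
#   SlotWeight before the match:            8
#   job requests 2 cpus; SlotWeight after:  8 - 2 = 6
#   match cost charged to the submitter:    8 - 6 = 2
{endcode}
The submitter is charged 2, rather than the full slot weight of 8, so the match fits within an accounting group quota as small as 2.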

{section: Heterogeneous Resource Models}
 Consumption Policies are configurable on a per-slot basis, which makes it straightforward to support multiple resource consumption models on a single pool.
 
 For example, the following is a cpu-centric resource consumption policy:
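
(The following sketch uses the standard CONSUMPTION_* configuration knobs; the specific values are illustrative.)
{code}
# consume exactly the cpus, memory, and disk that each job requests
CONSUMPTION_POLICY = True
CONSUMPTION_CPUS = target.RequestCpus
CONSUMPTION_MEMORY = target.RequestMemory
CONSUMPTION_DISK = target.RequestDisk
# cpus are the limiting resource: remaining matches ~ remaining cpus
SLOT_WEIGHT = Cpus
{endcode}

A corresponding memory-centric policy might look like:
{code}
# memory is the limiting resource, consumed in 1024 MB chunks (illustrative)
CONSUMPTION_POLICY = True
CONSUMPTION_CPUS = target.RequestCpus
CONSUMPTION_MEMORY = quantize(target.RequestMemory, {1024})
CONSUMPTION_DISK = target.RequestDisk
# remaining matches ~ remaining memory chunks
SLOT_WEIGHT = floor(Memory / 1024)
{endcode}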
 
 Note that the slot weight expression is typically configured to correspond to the "most limiting" resource, and furthermore behaves as a _measure of the number of potential matches remaining on the partitionable slot_.
 
 
 {section: Consumption Policy Examples}
 In the preceding discussion, examples of a cpu-centric and a memory-centric Consumption Policy were provided.   A few other examples are listed here.
{subsubsection: software license tokens}
A Consumption Policy can also be defined over an extensible resource.  The following sketch (resource name and values illustrative) models a pool of software license tokens:
{code}
# declare an extensible resource representing license tokens
MACHINE_RESOURCE_Tokens = 10
# each match consumes the number of tokens the job requests
CONSUMPTION_TOKENS = target.RequestTokens
# remaining matches ~ remaining tokens
 SLOT_WEIGHT = Tokens
 {endcode}
 
{subsubsection: Handle missing request attributes}
RequestXxx attributes are not always guaranteed to be present, so Consumption Policy expressions should take this into account.  For example, a policy that involves Extensible Resources cannot assume jobs will be requesting such a resource.
{code}
# declare an extensible resource
MACHINE_RESOURCE_actuators = 8
# interpret a missing request as 'zero' -- a good idiom for extensible
# resources that many jobs are likely to not use or care about
CONSUMPTION_ACTUATORS = ifThenElse(target.RequestActuators =!= undefined, target.RequestActuators, 0)
{endcode}

{subsubsection: Emulate a static slot}
This example uses a Consumption Policy to emulate static slot behavior:
 {code}
# consume a fixed quantum per match, regardless of what the job requests
# (values illustrative): each match behaves like a 1-cpu / 256 MB static slot
CONSUMPTION_CPUS = 1
CONSUMPTION_MEMORY = 256
 # define slot weight as minimum of remaining-match estimate based on either cpus or memory:
 SLOT_WEIGHT = ifThenElse(Cpus < floor(Memory/256), Cpus, floor(Memory/256))
 {endcode}