How to fill a pool Breadth First

Some pool administrators prefer a policy where, when there are fewer jobs than total cores in the pool, those jobs are "spread out" as much as possible, so that each machine runs as few jobs as possible. If the machines are identical, such a policy may yield better performance than one in which each machine is "filled up" before jobs are assigned to the next, but it may use more power to do so.

Such a breadth-first policy is relatively easy to implement when the pool uses only static slots, but it is also possible to implement with dynamic (partitionable) slots.

In both cases, the main idea is to set the NEGOTIATOR_PRE_JOB_RANK expression in the negotiator so that it prefers to give the schedds machines that are already running the fewest jobs. We use NEGOTIATOR_PRE_JOB_RANK instead of NEGOTIATOR_POST_JOB_RANK so that the job's RANK expression doesn't come into play. If you trust your users to override this policy, you could use NEGOTIATOR_POST_JOB_RANK instead. Note that because this policy is enforced in the negotiator, if CLAIM_WORKLIFE is set to a high value the schedds are free to reuse the slots they have already been assigned for that long, which may leave the pool out of balance for the CLAIM_WORKLIFE duration.
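
If that imbalance lingers too long, lowering CLAIM_WORKLIFE on the startds shortens how long a schedd may keep reusing a claim before the slot goes back to the negotiator. The value below is only an illustrative sketch; the right number depends on your typical job length and negotiation interval.

# Illustrative value only (seconds): allow a schedd to reuse a claimed
# slot for at most 10 minutes before it must return to the negotiator.
CLAIM_WORKLIFE = 600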

Negotiator config settings for static slots

NEGOTIATOR_PRE_JOB_RANK = isUndefined(RemoteOwner) * (- SlotId)

For this change to take effect, the negotiator must be reconfigured with condor_reconfig.
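
For example, something like the following, run against the central manager, reconfigures just the negotiator (a sketch; adjust the host or daemon selection to your setup):

# Ask only the negotiator to re-read its configuration.
condor_reconfig -daemon negotiator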

For a pool with partitionable slots

NEGOTIATOR_PRE_JOB_RANK = isUndefined(RemoteOwner) * (- NumDynamicSlots)
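
To verify that jobs are being spread out rather than piled onto a few machines, something like the following (a sketch, assuming the default slot attribute names) lists how many dynamic slots each partitionable slot is currently hosting; with the policy in effect the counts should stay roughly even across machines:

# Show per-machine dynamic slot counts.
condor_status -constraint 'PartitionableSlot' -autoformat Machine NumDynamicSlots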