{section: How to manage a large HTCondor pool}

Known to work with HTCondor version: 7.0

HTCondor can handle pools of tens of thousands of execution slots and job queues of hundreds of thousands of jobs. Depending on the way you deploy it, the workloads that run on it, and the other tasks that share the system with it, you may find that HTCondor's ability to keep up is limited by memory, processing speed, disk bandwidth, or configurable limits. The following information should help you determine whether there is a problem, which component is suffering, and what you might be able to do about it.

{subsection: Basic Guidelines for Large HTCondor Pools}

1: Upgrade to HTCondor 8 if you are still using something older than that. It has many scalability improvements. It is also a good idea to upgrade your configuration based on the defaults that ship with HTCondor 8, because those defaults include updated settings that improve scalability.

1: Put the central manager (collector + negotiator) on a machine with sufficient memory and 2 cpus/cores primarily dedicated to this service.

1: If convenient, take advantage of the fact that you can use multiple submit machines. At the very least, dedicate one machine as a submit machine with few or no other duties.

1: Under UNIX, increase the number of jobs that a schedd will run simultaneously if you have enough disk bandwidth and memory (see rough estimates in the next section). As of HTCondor 7.4.0, the default setting for MAX_JOBS_RUNNING is a formula that scales with the amount of memory available; in prior versions, the default was just 200. (A combined configuration sketch for a dedicated submit machine appears after this list.)
{code}
MAX_JOBS_RUNNING = 2000
{endcode}

1: Under Windows, you _might_ be able to increase the maximum number of jobs running in the schedd, but only if you also increase desktop heap space adequately. The problem on Windows is that each running job has an instance of condor_shadow, which consumes desktop heap space. Typically, this heap space becomes exhausted with on the order of only ~100 jobs running. See {link: http://www.cs.wisc.edu/condor/manual/v7.0/7_4Condor_on.html#SECTION008413000000000000000 My submit machine cannot have more than 120 jobs running concurrently. Why?} in the FAQ.

1: Put a busy schedd's spool directory on a fast disk with little else using it. If you have an SSD, use the JOB_QUEUE_LOG config knob to put the job_queue.log file (the schedd's database) on the SSD.

1: Do not put your log files on NFS or other network storage, especially for very busy daemons like the schedd and shadows.

1: If running a lot of big standard universe jobs, set up multiple checkpoint servers rather than doing all checkpointing onto the submit node.

1: If you are not using strong security (i.e. just host IP authorization) in your HTCondor pool, then you can turn off security negotiation to reduce overhead:
{code}
SEC_DEFAULT_NEGOTIATION = OPTIONAL
{endcode}

1: If you are not using condor_history (or any other means of reading the job history, including history in Quill), turn it off to reduce overhead:
{code}
HISTORY =
{endcode}

1: If you do not allow preemption by user priority or machine rank expression in your pool (i.e. not just preventing job killing with MaxJobRetirementTime, but completely disallowing claims from being preempted), then you can reduce overhead in the negotiator:
{code}
NEGOTIATOR_CONSIDER_PREEMPTION = False
{endcode}

1: Some general Linux scalability tuning advice may be found {wiki: LinuxTuning here}.
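Putting several of the UNIX submit-machine knobs above together, a minimal configuration sketch might look like the following. The numbers and the /ssd path are illustrative assumptions, not recommendations; size them against the estimates in the next section.

{code}
# Dedicated submit machine -- illustrative values only
# Allow more simultaneously running jobs (needs RAM and disk bandwidth to match)
MAX_JOBS_RUNNING = 2000

# Keep the schedd's job queue database on a fast local SSD (path is an example)
JOB_QUEUE_LOG = /ssd/condor/spool/job_queue.log

# If the pool relies on host IP authorization only, skip security negotiation
SEC_DEFAULT_NEGOTIATION = OPTIONAL

# If condor_history is never used, disable history writing
HISTORY =
{endcode}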
{subsection: Rough Estimations of System Requirements}

*: The collector requires a minimum of ~10KB of RAM per batch slot. Ditto for the negotiator.

*: The schedd requires a minimum of ~10KB of RAM per job in the job queue. For jobs with huge environment values or other big ClassAd attributes, the requirements are larger.

*: Each running job has a condor_shadow process, which requires an additional ~500KB of RAM. (Disclaimer: we have some reports that in different environments/configurations, this requirement can be inflated by a factor of 2.) 32-bit Linux may run out of kernel memory even if there is free "high" memory available. In our experience, with HTCondor 7.3.0, a 32-bit dedicated submit machine cannot run more than 10,000 jobs simultaneously, because of kernel memory constraints.

*: Each shadow process consumes on the order of 50 file descriptors, mainly because of shared libraries. You may need to dramatically increase the number of file descriptors available to the system or to a user in order to run a lot of jobs. It is not unusual to configure a large-memory system with a million file descriptors.

*: Each vanilla universe job requires two, occasionally three, network ports on the submit machine. Standard universe jobs require 5. On 2.6 Linux kernels, the ephemeral port range is typically 32768 through 61000, so from a single submit machine this limits you to roughly 14,000 simultaneously running vanilla jobs. In Linux, you can increase the ephemeral port range via /proc/sys/net/ipv4/ip_local_port_range. Note that short-running jobs may require more ports, because a non-negligible number of ports will be consumed in the temporary TIME_WAIT state. For this reason, the HTCondor manual conservatively recommends 5 ports per running job. Fortunately, as of HTCondor 7.5.3, the TIME_WAIT issue with short-running jobs is largely gone, due to SHADOW_WORKLIFE. Also, as of HTCondor 7.5.0, condor_shared_port can be used to reduce port usage even further. Port usage per running job is negligible if CCB is used to access the execute nodes; otherwise it is 1 (outgoing) port per job.

Example calculations:

Absolute minimum RAM requirements for a schedd with up to 10,000 jobs in the queue and up to 2,000 jobs running: {code}10000*0.01MB + 2000*0.5MB = 1.1GB{endcode}

Absolute minimum RAM requirements for a central manager machine (collector+negotiator) with 5000 batch slots: {code}2*5000*0.01MB = 100MB{endcode}

Realistically, you will want to add at least another 500MB or so to the above numbers. And if you do have other processes running on your submit or central manager machines, you will need extra resources for those. _Also remember to provision a fast dedicated disk for the spool directory of very busy schedds._

{subsection: Monitoring Health of the Submit Machine}

To see whether you are suffering from timeout tuning problems, search for "timeout reading" or "timeout writing" in your ShadowLog.
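For example, assuming the ShadowLog is in the schedd's LOG directory, a quick count of such timeout messages can be obtained with a command along these lines:

{code}
$ grep -c -e "timeout reading" -e "timeout writing" ShadowLog
{endcode}

A steadily growing count is a sign that shadows are hitting the timeout problems described above.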
As a general indicator of health on a submit node, you can summarize the condor_shadow exit codes with a command like this:

{code}
$ grep 'EXITING WITH STATUS' ShadowLog | cut -d " " -f 8- | sort | uniq -c
  12099 EXITING WITH STATUS 100
     81 EXITING WITH STATUS 102
   2965 EXITING WITH STATUS 107
    239 EXITING WITH STATUS 108
    332 EXITING WITH STATUS 112
{endcode}

Meaning of common exit codes:

*: *100* - success (you want to see a majority of these and/or 115)
*: *101* - evicted (checkpointed)
*: *102* - job killed
*: *103* - job dumped core
*: *104* - job exited with an exception
*: *105* - shadow out of memory
*: *107* - job evicted without checkpointing OR communication with the starter failed, so requeue/reconnect the job
*: *108* - failed to activate claim to startd (e.g. failed to connect)
*: *111* - failed to find checkpoint file
*: *112* - job should be put on hold; see HoldReason, HoldReasonCode, and HoldReasonSubCode to understand why
*: *113* - job should be removed (e.g. periodic_remove)
*: *114* - job missed its deferral time (e.g. cron job)
*: *115* - same as 100, but the claim to the machine is closing

{subsection: Monitoring Health of the Negotiator}

Check the duration of the negotiation cycle:

{code}
$ grep "Negotiation Cycle" NegotiatorLog
5/3 07:37:35 ---------- Started Negotiation Cycle ----------
5/3 07:39:41 ---------- Finished Negotiation Cycle ----------
5/3 07:44:41 ---------- Started Negotiation Cycle ----------
5/3 07:46:59 ---------- Finished Negotiation Cycle ----------
{endcode}

If the cycle is taking a long time (e.g. longer than 5 minutes), then see if it is spending a lot of time on a particular user:

{code}
$ grep "Negotiating with" NegotiatorLog
5/3 07:53:12 Negotiating with osg_samgrid@hep.wisc.edu at ...
5/3 07:53:13 Negotiating with jherschleb@lmcg.wisc.edu at ...
5/3 07:53:13 Negotiating with camiller@che.wisc.edu at ...
5/3 07:53:14 Negotiating with malshe@cae.wisc.edu at ...
{endcode}

If a particular user is consuming a lot of time in the negotiator (e.g. job after job being rejected), then look at how well that user's jobs are getting "auto clustered". This auto clustering happens, for the most part, behind the scenes and improves the efficiency of negotiation by grouping equivalent jobs together. You can see how the jobs are getting grouped together by looking at the job attribute AutoClusterId. Example:

{code}
$ condor_q -constraint 'AutoClusterId =!= undefined' -f "%s" AutoClusterId -f " %s" ClusterId -f ".%s\n" ProcId
1 649884.0
1 649885.0
50 650082.0
{endcode}

Jobs with the same AutoClusterId are in the same group for negotiation purposes. If you see that many small groups are being created, take a look at the attribute AutoClusterAttrs. This will tell you which attributes are being used to group jobs together. All jobs in a group have identical values for these attributes. In some cases, it may be necessary to tweak the way a particular attribute is rounded; see {link: http://www.cs.wisc.edu/condor/manual/v7.0/3_3Configuration.html#14142 SCHEDD_ROUND_ATTR} in the manual for more information, and the illustrative example at the end of this subsection.

To protect the negotiator against one user consuming large amounts of time, you can also configure NEGOTIATOR_MAX_TIME_PER_SUBMITTER. Example:

{code}
NEGOTIATOR_MAX_TIME_PER_SUBMITTER = 360
{endcode}
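As an illustration of the rounding mentioned above (the attribute and the granularity here are just an example), rounding ImageSize up means that jobs with slightly different memory footprints land in the same autocluster instead of fragmenting into many small ones:

{code}
# Round ImageSize up to the next 25% increment for matchmaking purposes
SCHEDD_ROUND_ATTR_ImageSize = 25%
{endcode}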
{subsection: Monitoring Health of the Collector}

If the collector can't keep up with the ClassAd updates that it is receiving from the HTCondor daemons in the pool, and you are using UDP updates (the default), then it will "drop" updates. The consequence of dropped updates is stale information about the state of the pool, and possibly machines appearing to be missing from the pool (depending on how many successive updates are lost). If you are using TCP updates and the collector cannot keep up, then HTCondor daemons (e.g. startds) may block or time out when trying to send updates.

A simple way to see if you have a serious problem with dropped updates is to observe the total number of machines in the pool, from the point of view of the collector ({code}condor_status -total{endcode}). If this number drops below what it should be, and the missing machines are running HTCondor and otherwise working fine, then the problem may be dropped updates.

A more direct way to see if your collector is dropping ClassAd updates is to use the tool {link: http://www.cs.wisc.edu/condor/manual/v7.0/condor_updates_stats.html condor_updates_stats}. Example:

{code}
$ condor_status -l | condor_updates_stats
*** Name/Machine = 'vm4@...' MyType = 'Machine' ***
Type: Main
Stats: Total=713, Seq=712, Lost=3 (0.42%)
0: Ok
...
127: Ok
{endcode}

If your problem is simply that UDP updates are coming in uneven bursts, then the solution is to provide enough UDP buffer space. You can see whether this is the problem by watching the receive queue on the collector's UDP port (visible through netstat -l under UNIX). If it fills up now and then but is otherwise empty, then increasing the buffer size should help. However, the default in current versions of HTCondor is 10MB, which is adequate for most large pools that we have seen. Example:

{code}
# 20MB
COLLECTOR_SOCKET_BUFSIZE = 20480000
{endcode}

See the HTCondor Manual entry for {link: http://www.cs.wisc.edu/condor/manual/v7.0/3_3Configuration.html#14586 COLLECTOR_SOCKET_BUFSIZE} for additional information on how to make sure the OS is cooperating with the requested buffer size.

If you are using strong authentication in the updates to the collector, this may add a lot of overhead and prevent the collector from scaling to very large pools. One way to deal with that is to have multiple collectors that each serve a portion of your execute nodes. These collectors receive updates via strong authentication and then forward the updates to another, main collector. An example of how to set this up is described in {wiki: HowToConfigMulti-TierCollectors How to configure multi-tier collectors}. This forwarding can be configured by using the CONDOR_VIEW_HOST configuration setting.

If all else fails, you can decrease the frequency of ClassAd updates by tuning UPDATE_INTERVAL and MASTER_UPDATE_INTERVAL.

One more tunable parameter in the collector (UNIX only) is the maximum number of queries that the collector will try to respond to simultaneously (by creating child processes to handle each one). This is controlled by COLLECTOR_QUERY_WORKERS, which defaults to 16.
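As a rough sketch of how the forwarding and tuning knobs above fit together (the hostname and the values are placeholders, not recommendations):

{code}
# On each secondary collector: forward received ads on to the main collector
CONDOR_VIEW_HOST = cm-main.example.com

# On a heavily loaded main collector: less frequent updates, more query workers
UPDATE_INTERVAL = 900
MASTER_UPDATE_INTERVAL = 900
COLLECTOR_QUERY_WORKERS = 32
{endcode}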
{subsection: High Availability}

You can set up redundant collector+negotiator instances, so that if the central manager machine goes down, the pool can continue to function. All of the HAD collectors run all the time, but only one negotiator may run at a time, so the condor_had component ensures that a new instance of the negotiator is started when the existing one dies. The main restriction is that the HAD negotiator won't help users who are flocking to the HTCondor pool. More information about HAD can be found in the {link: http://www.cs.wisc.edu/condor/manual/v7.0/3_10High_Availability.html HTCondor manual}.

Tip: if you do frequent condor_status queries for monitoring, you can direct these to one of your secondary collectors in order to offload work from your primary collector.
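For example, if one of your secondary collectors runs on a host called cm2.example.com (a placeholder name), monitoring scripts could query it directly instead of the primary:

{code}
$ condor_status -pool cm2.example.com -total
{endcode}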