*: Each running job has a condor_shadow process on the submit machine, which requires an additional ~500 KB of RAM.  (Disclaimer: we have some reports that in different environments/configurations, this requirement can be inflated by a factor of 2.)  32-bit Linux may run out of kernel memory even if free "high" memory is still available.  In our experience, with Condor 7.3.0, a 32-bit dedicated submit machine cannot run more than 10,000 jobs simultaneously because of kernel memory constraints.  (A back-of-the-envelope memory estimate follows this list.)
 
*: Each vanilla universe job requires two, occasionally three, network ports on the submit machine; standard universe jobs require five.  In 2.6 Linux, the ephemeral port range is typically 32768 through 61000, so from a single submit machine this limits you to about 14000 simultaneously running vanilla jobs.  In Linux, you can increase the ephemeral port range via /proc/sys/net/ipv4/ip_local_port_range.  Note that short-running jobs may require more ports, because a non-negligible number of ports will be consumed in the temporary TIME_WAIT state; for this reason, the Condor manual conservatively recommends budgeting 5 ports per running job.  Fortunately, as of Condor 7.5.3, the TIME_WAIT issue with short-running jobs is largely gone, thanks to SHADOW_WORKLIFE.  Also, as of Condor 7.5.0, condor_shared_port can be used to reduce port usage even further: port usage per running job is then negligible if CCB is used to reach the execute nodes, and otherwise is one (outgoing) port per job.  (The port arithmetic is worked out under the example calculations below.)
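
To make the memory arithmetic concrete, here is a back-of-the-envelope sketch in Python (an illustration only, not part of Condor; the function name is hypothetical).  The ~500 KB/shadow figure and the factor-of-2 inflation are the estimates quoted in the first item above.

  # Rough estimate of total condor_shadow RAM on a submit machine.
  # kb_per_shadow=500 is the ~500 KB/shadow figure quoted above;
  # inflation=2.0 models the factor-of-2 blowup reported in some
  # environments/configurations.
  def shadow_ram_mb(running_jobs, kb_per_shadow=500, inflation=1.0):
      return running_jobs * kb_per_shadow * inflation / 1024.0

  print(shadow_ram_mb(10000))                 # ~4883 MB nominal
  print(shadow_ram_mb(10000, inflation=2.0))  # ~9766 MB pessimistic

Note that this counts only shadow RAM; on 32-bit Linux the kernel-memory limit described above can bind well before physical RAM is exhausted.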
 
Example calculations:
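
The 14000-job figure above can be reproduced with a similar Python sketch (again illustrative; it assumes a Linux machine where /proc/sys/net/ipv4/ip_local_port_range is readable, and the function name is hypothetical):

  # Estimate how many simultaneously running jobs the ephemeral port
  # range can support, given a per-job port budget (2-3 ports for
  # vanilla universe, 5 for standard universe or for the Condor
  # manual's conservative recommendation).
  def max_running_jobs(ports_per_job):
      with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
          low, high = map(int, f.read().split())
      return (high - low + 1) // ports_per_job

  print(max_running_jobs(2))  # ~14116 with the default 32768-61000 range
  print(max_running_jobs(5))  # ~5646 with the conservative 5-port budget

With condor_shared_port and CCB, the per-job port budget drops toward zero, so the ephemeral port range effectively stops being the limiting factor.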