-This document describes an HTCondor configuration known to scale to 5000 EC2 nodes.  It may also assist in constructing other pools across a wide-area (or otherwise high-latency) network.  We present the configuration as a series scalability problems and solutions, in order of severity.
+This document describes an HTCondor configuration known to scale to 5000 EC2 nodes.  (It may also assist in constructing other pools across a wide-area, or otherwise high-latency, network.)  We present the configuration as a series of scalability problems and solutions, in order of severity.  This document generally refers to the 7.8 and 7.9 release series.
 
 Before we begin, however, a few recommendations (especially for those readers constructing new pools):
 
@@ -12,7 +12,7 @@
 
 However, an HTCondor pool can use more than a single collector.  Instead, a number of secondary collectors communicate with the startds and forward the results to the primary collector, which all the other HTCondor tools and daemons use (as normal).  This allows HTCondor to perform multiple simultaneous I/O operations (and overlap them with computation) by using the operating system's whole-process scheduler.
 
-To implement this tiered collector set up, follow {wiki: HowToConfigCollectors this recipe}.  If you're already using the shared port service, you must disable it for the collector.  The easiest way to do this is to turn it off entirely on the central manager (which should not be the same machine as the submit node); it may also explicitly disabled for the collector by setting *COLLECTOR.USE_SHARED_PORT* to *FALSE*.
+To implement this tiered collector setup, follow {wiki: HowToConfigCollectors this recipe}.  If you're already using the shared port service, you should disable it for the collector; it can cause multiple problems.  The easiest way to do this is to turn it off entirely on the central manager (which should not be the same machine as the submit node); it may also be explicitly disabled for the collector alone by setting *COLLECTOR.USE_SHARED_PORT* to *FALSE*.
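As a minimal sketch of the tiered arrangement (the collector names, ports, and log file names below are illustrative choices, not requirements; the wiki recipe above is authoritative), the central manager runs extra collector daemons on their own ports:

```
# Central manager: run two secondary collectors alongside the primary one.
# COLLECTOR2/COLLECTOR3 and ports 10002/10003 are arbitrary example names.
COLLECTOR2 = $(COLLECTOR)
COLLECTOR3 = $(COLLECTOR)
COLLECTOR2_ARGS = -f -p 10002
COLLECTOR3_ARGS = -f -p 10003
COLLECTOR2_ENVIRONMENT = "_CONDOR_COLLECTOR_LOG=$(LOG)/Collector2Log"
COLLECTOR3_ENVIRONMENT = "_CONDOR_COLLECTOR_LOG=$(LOG)/Collector3Log"
DAEMON_LIST = $(DAEMON_LIST), COLLECTOR2, COLLECTOR3

# As noted above, keep the shared port service off for the collector:
COLLECTOR.USE_SHARED_PORT = FALSE
```

The startds then report to one of the secondary collectors, which forward the ads to the primary collector on the default port.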
 
 To reduce confusion, you may also want to configure each execute node so that all of its HTCondor daemons connect to the same secondary collector.  (This has the added benefit that reconfiguring an execute node won't change which collector it reports to.)  One way of doing this is to choose the *COLLECTOR_HOST* randomly during the boot process.  If you don't set it in your other configuration files, you can simply create a file in the config.d directory (specified by *LOCAL_CONFIG_DIR*, usually /etc/condor/config.d) which sets it.
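A hypothetical boot-time snippet along these lines (the host name cm.example.com and the port list stand in for your central manager and secondary collectors) picks a collector once, at boot, so a later reconfig won't move the node:

```shell
# Choose one secondary collector at random and pin this node to it.
# Ports 10002/10003 are the example secondary-collector ports.
ports="10002 10003"
port=$(shuf -n1 -e $ports)
line="COLLECTOR_HOST = cm.example.com:${port}"
# In a real boot script, write this into LOCAL_CONFIG_DIR instead of stdout:
#   echo "$line" > /etc/condor/config.d/99-collector.conf
echo "$line"
```

Because the file is written once rather than evaluated per-reconfig, the node keeps reporting to the same secondary collector for its lifetime.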
 
@@ -26,7 +26,7 @@
 
 Using the shared port service may mean that the scaling limit of this configuration is RAM on the submit node: each running job requires a shadow, which, in the latest version of HTCondor, uses roughly 1200KB.  (Or about 950KB for a 32-bit installation of HTCondor.)
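As a back-of-the-envelope sizing at the full 5000-node scale (assuming one running job, and hence one shadow, per node):

```
5000 shadows x 1200 KB/shadow = 6,000,000 KB ≈ 6 GB of RAM   (64-bit)
5000 shadows x  950 KB/shadow = 4,750,000 KB ≈ 4.75 GB of RAM (32-bit)
```

So a submit node with 8GB of RAM leaves reasonable headroom for the schedd and the operating system at this scale.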
 
-To use the shared port service, set *USE_SHARED_PORT* to *TRUE* on the submit and execute nodes.  Do not use the shared port service on the central manager; this can cause multiple problems.  You must also add the shared port service to the *DAEMON_LIST*, for instance by setting it to *$(DAEMON_LIST), SHARED_PORT*.
+To use the shared port service, set *USE_SHARED_PORT* to *TRUE* on the submit and execute nodes.  You should not use the shared port service on the central manager; this can cause multiple problems.  You must also add the shared port service to the *DAEMON_LIST*, for instance by setting it to *$(DAEMON_LIST), SHARED_PORT*.
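Concretely, the submit- and execute-node settings described above amount to the following fragment (a sketch; place it anywhere in those nodes' configuration, but not on the central manager):

```
# Submit and execute nodes only -- not the central manager.
USE_SHARED_PORT = TRUE
DAEMON_LIST = $(DAEMON_LIST), SHARED_PORT
```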
 
 To enable CCB, set *CCB_ADDRESS* to *$(COLLECTOR_HOST)* on the execute nodes.  Do not enable CCB on the central manager (for the collector); this can cause multiple problems, especially in conjunction with the shared port service.
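Putting the execute-node pieces from this document together, a node's local configuration might look like the sketch below (cm.example.com:10002 stands in for whichever secondary collector the node was assigned at boot):

```
# Execute-node configuration sketch combining the settings above.
COLLECTOR_HOST = cm.example.com:10002
CCB_ADDRESS = $(COLLECTOR_HOST)
USE_SHARED_PORT = TRUE
DAEMON_LIST = $(DAEMON_LIST), SHARED_PORT
```

The central manager itself keeps both CCB and the shared port service disabled, as described above.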