described below:
 *: Packaging the external so it can be built by the build system
 *:: Setting up the source tarball
+*:: Putting the source tarball on the web server
+*:: Setting up the URLS file
 *:: Making the build script
 *::: Build script environment
 *::: Build script syntax
@@ -14,16 +16,13 @@
 *: Telling the build system about the new package
 *:: Changing autoconf-related stuff
 *:: Changing imake-related stuff
-*: Telling CVS about the new external
-*:: Checking it into the externals tree (to the trunk)
-*:: Checking in Condor build system changes (to a specific branch)
-*:: Adding it to the appropriate CVS modules
-*: Pre-Building the new external in AFS(optional)
+*: Dealing with externals caches
 
 Before getting into the specifics, here's the 10,000 foot view of
 how the Condor build system deals with externals:
 
 
+*:The actual source tarball for each external is hosted on a public web server.
 *:autoconf-generated =configure= script decides what versions
     of what externals we're trying to use for a given version of
     Condor
@@ -41,7 +40,8 @@
 *:Each external package that's being built is managed via
     =externals/build_external= which is the main script that
     handles common tasks for all externals.  This script is
-    responsible for setting up some directories, unpacking source,
+    responsible for setting up some directories, downloading and
+    unpacking the source tarball,
     maintaining a build log, and a few other things.
 *:The =externals/build_external= script invokes a
     build script that is specific to a given version of a given
@@ -101,7 +101,7 @@
 *However, if you want to add new patches to an existing external,
 you should change the version number and add a new external!  We do
 NOT want to have multiple things which are different code to be using
-the same version!*
+the same version! The new version number should include a =-p<number>= on the end.*
 
 Again, if you have any questions or are uncertain, just ask.
 
@@ -141,6 +141,8 @@
 =src/Imakefile= will ensure that a top-level "make" will
 rebuild zlib...
 
+The =build=, =install=, and =triggers= directories can optionally live in a separate location, where they can be shared by multiple build workspaces. On Unix, this location is given by the =--with-externals= option to =configure=. On Windows, it is given by the =EXTERN_DIR= environment variable. At UW-CS, =/p/condor/workspaces/externals= will be used if it exists.
+
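+A minimal sketch of pointing a build at a shared cache location (the path here is just a made-up example):
+
+{code}
+# Unix: tell configure where the shared externals cache lives
+./configure --with-externals=/scratch/condor-externals-cache
+
+# On Windows, the same location is read from the EXTERN_DIR environment variable
+{endcode}
+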
 The =bundles= directory contains subdirectories for each
 kind of package, and each package directory has subdirectories for
 each version of that package.  For example, at the time of this
@@ -160,7 +162,8 @@
 
 Inside each version-specific subdirectory, are 2 main things:
 
-*:Original source tarball (ideally, exactly what you'd download
+*:=URLS= file, which contains URLs to the source tarball(s)
+(ideally, each tarball is exactly what you'd download
 from the author's distribution, unmodified)
 *:Build script
 
@@ -176,19 +179,53 @@
 tarball of the external package.  We want the original, unmodified
 source whenever possible.  However, the name of the tarball is
 important, since the Condor build system itself makes assumptions
-about the name so that the =build_externals= script can untar
+about the name so that the =build_external= script can download and untar
 the tarball for you (one less thing for your build-script to worry
 about for yourself).  So, the source tarball must be
 named "=[name]-[version].tar.gz=" (and, needless to say, it must
 be a real gzip'ed tar file).  For example, "=krb5-1.2.7.tar.gz=".
+An important exception is that the =-p<number>= at the end of the version
+is optional in the tarball name. For example, the tarball for external =krb5-1.2.7-p1= can be named =krb5-1.2.7.tar.gz=.
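+
+If the upstream download doesn't already follow this naming, a quick repack is enough (hypothetical file and directory names):
+
+{code}
+# repack the upstream source so build_external can find and untar it
+tar xzf krb5-1.2.7-upstream.tar.gz
+tar czf krb5-1.2.7.tar.gz krb5-1.2.7/
+{endcode}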
+
+We'll discuss what to do with the tarball later on.
+
+{subsection: Putting the source tarball on the web server}
+
+The source tarballs live on the web server parrot.cs.wisc.edu. They are synced periodically from the following directories on AFS at UW-CS:
+
+{code}
+/p/condor/repository/externals
+/p/condor/repository/externals-private
+{endcode}
+
+The latter is for files that can't be publicly distributed. Currently, the only thing there is a LIGO application used for testing the standard universe. Once synced, the files can be fetched from the following URLs:
+
+{code}
+http://parrot.cs.wisc.edu/externals
+http://parrot.cs.wisc.edu/externals-private
+{endcode}
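+
+For a new tarball, the step here is simply to copy it into the appropriate AFS directory and wait for the next sync (a sketch, assuming you have AFS access at UW-CS; the =chmod= is just a precaution):
+
+{code}
+cp krb5-1.4.3.tar.gz /p/condor/repository/externals/
+chmod a+r /p/condor/repository/externals/krb5-1.4.3.tar.gz
+{endcode}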
+
+{subsection: Setting up the URLS file}
 
+The =URLS= file is a simple text file containing the URLs of the external's source tarball(s). Normally there's only one tarball, but a couple of externals require several. Each URL should appear on its own line.
+All of the externals are hosted on parrot.cs.wisc.edu, and the URLs should look like this:
+
+{code}
+http://parrot.cs.wisc.edu/externals/krb5-1.4.3.tar.gz
+{endcode}
+
+If the tarball contains files that aren't publicly releasable, there's a restricted directory:
+
+{code}
+http://parrot.cs.wisc.edu/externals-private/krb5-1.4.3.tar.gz
+{endcode}
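+
+For the rare external that needs more than one tarball, the =URLS= file simply lists each one on its own line (made-up names):
+
+{code}
+http://parrot.cs.wisc.edu/externals/example-base-1.0.tar.gz
+http://parrot.cs.wisc.edu/externals/example-contrib-1.0.tar.gz
+{endcode}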
 
 {subsection: Making the build script}
 
 
 When the Condor build is trying to build your external, first it
 will create a temporary sandbox build directory. The
-[name]-[version].tar.gz will be untarred into the sandbox.  Then, the
+[name]-[version].tar.gz will be downloaded and untarred into the sandbox.  Then, the
 =build_[name]-[version]= script will be invoked with the
 current working directory set to the sandbox build directory.  This
 script is responsible for building the external package in whatever
@@ -206,6 +243,7 @@
 
 {code}
   $PACKAGE_NAME           # the name of this package
+  $PACKAGE_SRC_NAME       # package name from source tarball
   $PACKAGE_DEBUG          # empty if release, '-g' if debug build
   $PACKAGE_BUILD_DIR      # the sandbox build directory
   $PACKAGE_INSTALL_DIR    # where to put the results
@@ -216,6 +254,11 @@
 =$PACKAGE_NAME= is the =[name]-[version]= identifying
 string for your package.
 
+=$PACKAGE_SRC_NAME= is the same as =$PACKAGE_NAME=, except it
+will not have any =-p<number>= on the end if the source tarball
+doesn't have it. This makes it easy for multiple external versions to
+share the same source tarball. For example, for external
+=krb5-1.2.7-p1= built from the tarball =krb5-1.2.7.tar.gz=,
+=$PACKAGE_NAME= is =krb5-1.2.7-p1= and =$PACKAGE_SRC_NAME= is =krb5-1.2.7=.
+
 =$PACKAGE_BUILD_DIR= is a subdirectory of
 =externals/build=, named with the =package-name=.
 This is just a temporary sandbox directory, and
@@ -322,7 +365,7 @@
 #!/bin/sh
 ############# build_generic_package
 
-cd $PACKAGE_NAME/src
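+# the unpacked source tree is typically named after the tarball, hence $PACKAGE_SRC_NAME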
+cd $PACKAGE_SRC_NAME/src
 ./configure --prefix=$PACKAGE_INSTALL_DIR --with-ccopts=$PACKAGE_DEBUG
 
 make
@@ -363,7 +406,7 @@
 
 Again, if you want to add additional patches to an existing
 external, you *MUST* make an entirely new external package with a
-different version number (e.g. something like =krb5-1.2.5.pl1=)
+different version number (e.g. =krb5-1.2.5-p1=)
 so that we can tell the difference between the two versions.  This is
 a little wasteful of space, unfortunately, but there's no way around
 that at this time.
@@ -375,7 +418,7 @@
 
 
 Once your package is setup in the externals tree and the build
-script ready, you've got to tell the Condor build system about the new
+script is ready, you've got to tell the Condor build system about the new
 package.  There are a few separate places that this needs to happen.
 
 
@@ -434,7 +477,7 @@
 
 If you're just changing the version of an existing external, that's
 probably all you'll have to do to the =autoconf= stuff, and you
-can skip right to the discussion of =CVS= changes.
+can skip right to the discussion of =git= changes.
 However, if you're adding a whole new external package, there are a
 few more steps (both for =autoconf= and =imake=, so read
 on... In either case, before using your new external you should run
@@ -514,7 +557,7 @@
 necessary to build an external.  In general, this is only needed when
 a given external depends on a different external.  For example, the
 gahp external needs to know where the globus external was
-installed, and what "globus flavor" was built.  The gahp external also
+installed, and what "globus flavor" was built.  The blahp external also
 needs to know if the platform we're on is trying to build a
 statically-linked version of Condor or not.   So,
 =config/config.sh= defines the following variables:
@@ -615,139 +658,51 @@
 =make= inside =src= should be all you need to see your
 external package built.  Once your external build is working and the
 Condor source sees and uses the new external, you're ready to commit
-your changes to CVS...
+your changes to git.
 
 
 ----
 
-{section: Telling CVS about the new external}
-
-
-This is mostly obvious, boring stuff, and I assume you know how to
-use CVS.  I'm just including this section so that you don't forget any
-of these final steps...
-
-{subsection: Checking it into the externals tree}
-
+{section: Dealing with externals caches}
 
+The Condor build system allows built externals to be stored outside of your immediate build tree. These external caches can be shared across multiple build locations and users, greatly reducing the time to do a fresh build of Condor. This is why changing anything in how an external is built requires you to create a new version of the external.
 
-The =externals= tree lives on the trunk of the Condor CVS
-repository.  It is never branched, merged, etc.  So, all externals are
-in theory visible from all Condor CVS branches.  When you add your new
-directory into =externals/bundles/[name]/[version]= you should
-ensure that you're committing your new files to the trunk.  You should
-add the directory, then do a =cvs add= to the source tarball
-(cvs already knows files that end in .gz are binary, so you don't have
-to worry about that), the build script, and any patches you've made.
-Once that's done, you can =cvs commit= as normal.
-
-{subsection: Checking in Condor build system changes}
-
-
-
-The changes you made to =src/configure.ac=,
-=src/Imakefile= (if any), and any changes to files in
-=config= must be committed to a _specific Condor CVS branch_.
-Fundamentally, it's the fact that
-=src/configure.ac= is branched and merged with the rest of the
-Condor source that enables us to know exactly what versions of each
-external were used for a given version of Condor.  So, when you're
-committing all those changes to the build system (and to the rest of
-the Condor source to take advantage of the new external), you must do
-so to a real branch.  In fact, most of the time, you'll want to
-create a new branch off the main development branch at the time to
-deal with adding your new external.  That way, we can test building
-your new external and all related changes on all our platforms,
-without breaking the build on the main release branch at the time.
-
-
-{subsection: Adding it to the appropriate CVS modules}
-
-
-Finally (and this applies to both a new version of an existing
-external and adding a whole new kind of external), you should add your
-new external to the appropriate CVS module(s).  Even though your new
-external lives on the trunk and is therefore visible by _all_ Condor
-branches, it doesn't mean we actually _want_ to see your external
-everywhere.
-
-To solve this issue, we rely on a number of CVS modules to select
-the versions of the externals we care about on each main Condor CVS
-branch.  For example, =V6_6_EXT= holds all the externals we
-need for building the =V6_6-branch= of Condor.  So, if you
-added a new external to the =V6_7-branch=, you'd want to add
-another line to the =V6_7_EXT= CVS module.
-
-To modify a CVS module, all you have to do is this:
-
+The cache consists of the following directories:
 {code}
-  % cd /tmp
-  % cvs co CVSROOT/modules
-  % <edit> CVSROOT/modules
-  % cvs commit CVSROOT/modules
-{endcode}}
-
-As always, if you open up the file and look, the syntax should be
-pretty obvious.  The main thing	is that you remember to do this step
-at all, and that you add your external to the right module(s).
-
-Things might get a little tricky if you're replacing an old version
-of a given external with a new one.  In that case, be sure you don't
-break any of the version-specific historical modules
-(e.g. =V6_6_2_EXT=) when you want to remove the old version.
-In this case, =V6_6_2_EXT= is defined relative to
-=V6_6_COMMON_EXT=.  So, if you want to remove
-=externals/bundles/globus/2.2.4= from the V6_6_3_EXT module,
-you'll probably have to remove =globus/2.2.4= from
-
-=V6_6_COMMON_EXT= and manually add it back to all the
-version-specific modules that used to include it.  An example of this
-is that we changed the version of the gahp external between
-=V6_6_0= and =V6_6_1=, so we had to remove the gahp from
-=V6_6_COMMON_EXT= since the same version of it was no longer
-common to all the 6.6.x modules.
-
-----
-
-{section: Pre-Building the new external in AFS (optional)}
+externals/build
+externals/install
+externals/triggers
+{endcode}
+If you don't use an external cache, these directories will be created in your build directory.
 
+There are three situations where you'll see an external cache:
+*:Your own cache
+*:The cache in AFS at UW-CS
+*:The cache in NMI
 
-As a final (optional) step, you should probably make sure the
-pre-built externals tree in AFS
-(=/p/condor/workspaces/externals=) is up to date and that the
-new external has been pre-built on all the platforms we care about.
-All you've got to do is:
+{subsection: Your own cache}
+You can specify your own externals cache directory using the =--with-externals= command-line option to =configure=, like so:
 
-*:Update =/p/condor/workspaces/externals=
-*:Start a build on each platform we care about
+{code}
+./configure --with-externals=/scratch/externals-cache
+{endcode}
 
-To update the externals tree, just do this:
+{subsection: The cache in AFS at UW-CS}
+The Condor team has a shared externals cache on AFS in =/p/condor/workspaces/externals=. The tree is set up with =@sys= links to separate files by platform. If you don't use the =--with-externals= option to =configure= and this directory exists, =configure= will use this cache automatically. If you don't want to use this cache, you can explicitly disable it like this:
 
 {code}
-  % cd /p/condor/workspaces
-  % cvs co externals/bundles
+./configure --with-externals=`pwd`/../externals
 {endcode}
 
-Once that's been updated, all you have to do is start a build of
-the Condor source (from the branch where you checked in your build
-system changes for the new external) on each kind of machine we care
-about.  Basically, any platform with an =@sys= directory in
-=/p/condor/workspaces/externals/sys= is what you'd need to
-worry about.
+{subsection: The cache in NMI}
+We keep an externals cache on all of the machines in the NMI Build and Test facility. By default, the Condor glue scripts for NMI don't use the cache. You can enable use of the cache with the =--use-externals-cache= option to =condor_nmi_submit=. The automated builds all do this.
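+
+For example (a sketch; the other =condor_nmi_submit= arguments you'd normally pass are elided):
+
+{code}
+condor_nmi_submit --use-externals-cache <your usual options>
+{endcode}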
 
-Just make sure the build on each platform uses
-=/p/condor/workspaces/externals= for the externals.  If you do
-not check out a local copy of externals into either your source or
-build workspaces, our =configure= script will use the tree in
-AFS by default.  Otherwise, you can always use this:
+{subsection: New externals and caching}
 
-{code}
-  % ./configure --with-externals=/p/condor/workspaces/externals
-{endcode}}
+One of the fundamental assumptions of the externals cache is that a particular external version will never change. While you're preparing a new external or external version, this will not be true. Thus, you need to be careful not to pollute any shared caches with old revisions of your new external.
+
+The easiest way to do this is to not use any shared external caches. If you're using a machine at UW-CS, you can explicitly disable use of the cache in AFS. The downside to this is that you have to build all of the externals in your local build space. You can play games with symlinks to minimize this rebuilding.
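+
+One hypothetical way to play that game is to seed a private cache from the shared AFS one and then drop just the entry for the external you're changing, so only it gets rebuilt locally. The exact layout of the entries under the cache directories is an assumption here:
+
+{code}
+mkdir -p /scratch/my-externals/build /scratch/my-externals/install /scratch/my-externals/triggers
+ln -s /p/condor/workspaces/externals/install/* /scratch/my-externals/install/
+ln -s /p/condor/workspaces/externals/triggers/* /scratch/my-externals/triggers/
+# remove the links for the external you're working on so it builds fresh
+rm /scratch/my-externals/install/krb5-1.4.3-p1 /scratch/my-externals/triggers/krb5-1.4.3-p1
+./configure --with-externals=/scratch/my-externals
+{endcode}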
+
+If you do decide to use the AFS cache, you must make sure it has the final revision of your external once you're ready to check it in. You can do so by simply removing the appropriate trigger file in =/p/condor/workspaces/externals/triggers=.
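+
+For example, assuming the trigger file is simply named after the external:
+
+{code}
+rm /p/condor/workspaces/externals/triggers/krb5-1.4.3-p1
+{endcode}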
 
-Once you start the build, the externals will be built first.
-Assuming everyone else is following these directions, the only
-external that will need to be built is the one you just added.
-This will ensure that all the developers using this pre-built tree
-won't have any problems as a result of your new external.
+The externals caches in NMI are trickier. Each machine has a local cache and some platforms have multiple machines. Manually clearing the caches on all of the machines is cumbersome and error-prone. Better to never use the externals cache when developing a new external.