{section: Goal}
 
-Miron is interested in what it would take to make reproducible builds of Condor in the build and test environment. These builds should be "on-demand"--perhaps eventually via a web interface--and ideally should not rely on the Condor Flightworthy infrastructure. That is, we should be able to stash what is necessary for a build, and therefore be able to do the builds independently.
+Miron is interested in what it would take to make reproducible builds of HTCondor in the build and test environment. These builds should be "on-demand"--perhaps eventually via a web interface--and ideally should not rely on the HTCondor Flightworthy infrastructure. That is, we should be able to stash everything necessary for a build, and therefore be able to do builds independently.
 
 {section: Summary of steps we know we need to take}
 
@@ -8,11 +8,11 @@
 
 1: Provide a repository that associates the source code and externals with a given version.
 
-1: Update the Condor build script so that it (optionally) does not fetch externals from where they are now--either they are provided to the build, or it might fetch them from the repository. (This step is technically optional, but allows for the builds to happen without relying on an external service.)
+1: Update the HTCondor build script so that it (optionally) does not fetch externals from their current locations--either they are provided to the build, or it fetches them from the repository. (This step is technically optional, but allows the builds to happen without relying on an external service.)
 
-1: Provide a command-line interface to request a build. Input includes:
+1: Provide a command-line interface to request a build (see the sketch after this list). Input includes:
 
-*:: Condor version
+*:: HTCondor version
 *:: Platform (OS & architecture, such as x86_rhap_5)
 *:: Desired packaging (tarball, RPM, Deb, Windows installer. Not all of them are available on all platforms.)
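+
+A minimal sketch of what such a command-line interface might look like, assuming a Python implementation; the option names and packaging values are placeholders, not a settled design:
+
+{code}
+#!/usr/bin/env python
+# Hypothetical build-request CLI; flags and values are illustrative only.
+import argparse
+
+def parse_request(argv=None):
+    parser = argparse.ArgumentParser(description="Request a reproducible HTCondor build")
+    parser.add_argument("--version", required=True, help="HTCondor version, e.g. 7.0.1")
+    parser.add_argument("--platform", required=True, help="OS & architecture, e.g. x86_rhap_5")
+    parser.add_argument("--package", choices=["tarball", "rpm", "deb", "msi"],
+                        default="tarball",
+                        help="desired packaging; not every type exists on every platform")
+    return parser.parse_args(argv)
+
+if __name__ == "__main__":
+    args = parse_request()
+    print("would submit build: %s on %s as %s" % (args.version, args.platform, args.package))
+{endcode}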
 
@@ -22,11 +22,11 @@
 
 {section: Current Situation}
 
-Today, builds get input files from a variety of locations.  For the Condor builds these include the git source code repository for the Condor source code and the Condor web site for the external packages Condor depends on.  Reproducing any given build requires that both git and the web site be available and complete. If an old external was deleted from the web site because Condor perceived it as no longer necessary, builds needing that external would not work.
+Today, builds get input files from a variety of locations.  For HTCondor builds these include the git repository for the HTCondor source code and the HTCondor web site for the external packages HTCondor depends on.  Reproducing any given build requires that both git and the web site be available and complete.  If an old external was deleted from the web site because it was deemed no longer necessary, builds needing that external would not work.
 
-The Condor source and documentation are pulled together on an NMI submit node.  *The externals are pulled on each individual build machine on an as needed basis, completely outside of NMI management.*  If the externals cache is being used, the input file will never be loaded.
+The HTCondor source and documentation are pulled together on an NMI submit node.  *The externals are pulled on each individual build machine on an as-needed basis, completely outside of NMI management.*  If the externals cache is in use, the input file is never actually fetched.
 
-Because each machine pulls together externals on its own, it's hard to say how much disk space a given build requires.  The Condor source is about 20MB while the documentation is about 1MB.  The externals, excluding glibc, are about 107MB; a given build may not use all of the externals, but different builds typically share externals.  glibc is complex, only platforms supporting the standard universe include it, and different platforms may use different glibc.  A given glibc source is about 163MB, and we might use 2 or 3 across all of our builds.  This gives a total size of about 20+1+107+(163*3)= 617MB.  These are all compressed tarballs, further compression is unlikely to provide significant savings.  Assuming we archive two builds per night (one stable, one developer), we're looking at about 440GB per year of storage.  Assuming a single backup, we're close to needing 1TB per year.  These numbers ignore developer builds.
+Because each machine pulls together externals on its own, it's hard to say how much disk space a given build requires.  The HTCondor source is about 20MB while the documentation is about 1MB.  The externals, excluding glibc, are about 107MB; a given build may not use all of the externals, but different builds typically share externals.  glibc is complex: only platforms supporting the standard universe include it, and different platforms may use different glibc versions.  A given glibc source is about 163MB, and we might use 2 or 3 across all of our builds.  This gives a total size of about 20+1+107+(163*3) = 617MB.  These are all compressed tarballs; further compression is unlikely to provide significant savings.  Assuming we archive two builds per night (one stable, one developer), we're looking at about 440GB per year of storage.  Assuming a single backup, we're close to needing 1TB per year.  These numbers ignore one-off builds by individual developers.
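+
+As a quick sanity check, the arithmetic behind these figures (sizes in MB, taken from the estimates above):
+
+{code}
+# Back-of-the-envelope storage estimate; all sizes in MB.
+source, docs, externals = 20, 1, 107
+glibc = 163 * 3                                  # assume the worst case of 3 glibc tarballs
+per_build = source + docs + externals + glibc    # 617 MB per archived build
+per_year_gb = per_build * 2 * 365 / 1024.0       # two archived builds per night
+print("%d MB per build" % per_build)             # 617
+print("%.0f GB per year" % per_year_gb)          # ~440; roughly double with a single backup
+{endcode}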
 
 {section: Open Questions}
 
@@ -38,12 +38,12 @@
 
 *: Do we maintain backups?
 
-*: Condor's full(-ish) revision history is available via Git.  Should an archive contain the entire repository?  Chews up more space, but likely more useful in event of upstream disappearing.  Something clever like a shared Git repository would theoretically work, but ignores risk that upstream repo would be corrupted, damaging our past copies.
+*: HTCondor's full(-ish) revision history is available via Git.  Should an archive contain the entire repository?  It chews up more space, but would likely be more useful in the event of the upstream repository disappearing.  Something clever like a shared Git repository would theoretically work, but ignores the risk that the upstream repository could be corrupted, damaging our past copies.
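+
+If we do archive the entire repository, one self-contained way to snapshot the full history is a git bundle, which can be cloned from later even if the upstream repository disappears.  A sketch, with hypothetical paths:
+
+{code}
+# Snapshot the full git history into a single self-contained file.
+# Repository and archive paths are hypothetical.
+import subprocess
+
+def snapshot_repo(repo_dir, bundle_path):
+    # "git bundle" packs all refs and objects into one file;
+    # "git clone <file>.bundle" recovers a working repository from it.
+    subprocess.check_call(["git", "bundle", "create", bundle_path, "--all"], cwd=repo_dir)
+
+snapshot_repo("/path/to/CONDOR_SRC.git", "/archive/condor-src-snapshot.bundle")
+{endcode}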
 
 *: What about Metronome?
-*:: As Metronome evolves, do how to we insure that the archived Condor "glue" will work with the update Metronome?
+*:: As Metronome evolves, how do we ensure that the archived HTCondor "glue" will work with the updated Metronome?
 *:: Do we need to keep old Metronome versions around?
-*::: What about the Condor binaries on which Metronome runs?
+*::: What about the HTCondor binaries on which Metronome runs?
 *:: As Metronome evolves to use virtual execute machines, it becomes easier to archive off an entire build environment.
 *::: Obviously, the size of the archive grows substantially if one chooses to do this
 *::: Adds a dependency on virtual machines
@@ -56,7 +56,7 @@
 
 {section: Complications}
 
-*:Condor pulls externals directly onto build machines, completely outside of NMI's knowledge.  Logically this should be moved onto the NMI submit machine, where it can be archived easily and where duplicates can be merged.  However, this increases disk and network utilization of the submit nodes, nodes that historically have been overloaded.
+*:HTCondor pulls externals directly onto build machines, completely outside of NMI's knowledge.  Logically this should be moved onto the NMI submit machine, where it can be archived easily and where duplicates can be merged.  However, this increases disk and network utilization of the submit nodes, nodes that historically have been overloaded.
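+
+If the externals were staged on the submit machine (or elsewhere under NMI's control), duplicates could be merged by storing each tarball under its checksum.  A sketch of that idea, with hypothetical paths:
+
+{code}
+# Store each external tarball under its SHA-1 so identical externals
+# shared by different builds are kept only once.  Paths are hypothetical.
+import hashlib, os, shutil
+
+def archive_external(tarball, store="/archive/externals"):
+    h = hashlib.sha1()
+    with open(tarball, "rb") as f:
+        for chunk in iter(lambda: f.read(1 << 20), b""):
+            h.update(chunk)
+    dest = os.path.join(store, h.hexdigest() + "-" + os.path.basename(tarball))
+    if not os.path.exists(dest):
+        shutil.copy2(tarball, dest)
+    return dest
+{endcode}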
 
 {section: Plans}
 
@@ -64,9 +64,9 @@
 
-*:If we are archiving a build, we should do the build out of the archive, identically to if we were trying to rebuild from the archive in the future.  This ensures that the archive works and simplifies the code as we always follow the same code path.  This will increase disk I/O on whatever machine it is done on, likely the already heavily loaded NMI submit node.
+*:If we are archiving a build, we should do the build out of the archive, exactly as if we were rebuilding from the archive in the future.  This ensures that the archive works and simplifies the code, as we always follow the same code path.  This will increase disk I/O on whatever machine it is done on, likely the already heavily loaded NMI submit node.
 
-*:Proposal: Add a new machine: "Packager" which makes read-only disk space available.  A submitted job is done on the packager.  The packager collects all of the input into the archive location, then submits the "real" jobs the NMI submit point.  Initially the packager would expose the archive location as NFS or similar to the submit point, otherwise looking like a "normal" build.  This needlessly routes data through the submit node, but should work without Metronome changes. As Condor's capability to draw files from web or other inputs grows, individual execute nodes could pull files directly from the packager.
+*:Proposal: Add a new machine, the "packager", which makes read-only disk space available.  A submitted build request runs first on the packager.  The packager collects all of the input into the archive location, then submits the "real" jobs to the NMI submit point.  Initially the packager would expose the archive location over NFS or similar to the submit point, otherwise looking like a "normal" build.  This needlessly routes data through the submit node, but should work without Metronome changes.  As HTCondor's capability to draw files from the web or other inputs grows, individual execute nodes could pull files directly from the packager.
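+
+An outline of the proposed packager flow; every function below is a hypothetical stub standing in for real archive and NMI machinery, not an existing interface:
+
+{code}
+# Outline of the packager's job flow under the proposal above.
+def create_archive_dir(version):
+    return "/archive/builds/%s" % version        # made read-only once populated
+
+def fetch_source(version, archive):
+    print("stage git source and docs for %s into %s" % (version, archive))
+
+def fetch_externals(version, archive):
+    print("stage externals for %s into %s" % (version, archive))
+
+def submit_nmi_build(archive, platform, packaging):
+    # Initially the archive is exposed to the submit point over NFS,
+    # so from Metronome's point of view this is a normal build.
+    print("submit %s %s build reading from %s" % (platform, packaging, archive))
+
+def handle_build_request(version, platform, packaging):
+    archive = create_archive_dir(version)
+    fetch_source(version, archive)
+    fetch_externals(version, archive)
+    submit_nmi_build(archive, platform, packaging)
+
+handle_build_request("7.0.1", "x86_rhap_5", "tarball")
+{endcode}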
 
-*:Todd M notes, "If having Metronome track what's fetched on the execute hosts would actually be helpful (note that we wouldn't want to manage the cache, since that's really Condor's job (Condor's, not Flightworthy), I could probably add a 'remote_fetch' step..."
+*:Todd M notes, "If having Metronome track what's fetched on the execute hosts would actually be helpful (note that we wouldn't want to manage the cache, since that's really HTCondor's job (HTCondor's, not Flightworthy)), I could probably add a 'remote_fetch' step..."
 
 *: Another note from Todd M:
 *:: - Talked to Nick about long-term reproducibility.  Because of compiler
@@ -90,7 +90,7 @@
    you can keep the record of what broke them, and have a good place to
    start, later.)
 
-*: Other than matching up, fetching and building externals, I think the main issue is that Condor is currently stored in a git repository.  We have 2 alternatives here:
+*: Other than matching up, fetching, and building externals, I think the main issue is that HTCondor is currently stored in a git repository.  We have two alternatives here (see the sketch after this list):
 
 1:: Keep a copy of the current git source code and a snapshot of the git repo
 1:: Copy the source code of the build out, and modify the glue logic to handle the source tarball.  This should be similar to a workspace build, I think.
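+
+A sketch of what each alternative might look like in practice: alternative 1 is the bundle snapshot shown earlier, while alternative 2 can use "git archive" to export the exact tagged source, with no .git data, as a tarball the glue logic could treat like a workspace build.  Tag and paths are hypothetical:
+
+{code}
+# Alternative 2: export the source of a tagged build as a plain tarball.
+import subprocess
+
+def export_source_tarball(repo_dir, tag, out_path):
+    # "git archive" writes the tree at the given tag without repository data.
+    with open(out_path, "wb") as out:
+        subprocess.check_call(
+            ["git", "archive", "--format=tar", "--prefix=condor-src/", tag],
+            cwd=repo_dir, stdout=out)
+
+export_source_tarball("/path/to/CONDOR_SRC.git", "V7_0_1", "/archive/condor-7.0.1-src.tar")
+{endcode}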