{verbatim}
 condor_off -collector
-# become the user that runs condor
+# become the user that runs HTCondor
 sudo su root
 env _CONDOR_USE_CLONE_TO_CREATE_PROCESSES=False valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --leak-resolution=high >& /tmp/valgrind.log < /dev/null /path/to/condor_collector -f -p 9621 &
 {endverbatim}
 
-Note that valgrind (as of 2.4.0) crashes due to the way we use clone(), so the above example disables the clone() optimization. As of 7.1.2, condor should auto-detect that it is running under valgrind and automatically disable the clone optimization.
+Note that valgrind (as of 2.4.0) crashes due to the way we use clone(), so the above example disables the clone() optimization. As of 7.1.2, HTCondor should auto-detect that it is running under valgrind and automatically disable the clone optimization.
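+
+If you'd rather not pass the environment override on every invocation, the same setting can go in the configuration the daemon reads (the =_CONDOR_= environment prefix is just another way of setting a config knob).  A minimal sketch, assuming a local config file the daemon under test actually reads:
+
+{verbatim}
+# disable the clone() optimization for daemons started with this config
+USE_CLONE_TO_CREATE_PROCESSES = False
+{endverbatim}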
 
 To check for leaks, run for a while and then do a graceful shutdown (kill -TERM). To check for bloat, kill with SIGINT instead. This will prevent it from doing a normal exit, freeing up memory, etc. That way, we can see memory that it has referenced (but which may be unexpectedly bloated). The valgrind output will contain information on the blocks of memory that were allocated at exit.
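+
+For example, assuming the backgrounded command above is still the most recent job in this shell (so =$!= holds its PID), the two checks look like this:
+
+{verbatim}
+# PID of the backgrounded valgrind/collector command (assumes the same shell)
+COLLECTOR_PID=$!
+# leak check: graceful shutdown, so anything still allocated afterwards is reported as leaked
+kill -TERM $COLLECTOR_PID
+# ...or, in a separate run, bloat check: skip the normal exit cleanup so reachable memory is reported
+# kill -INT $COLLECTOR_PID
+# then read the report
+less /tmp/valgrind.log
+{endverbatim}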
 
@@ -37,7 +37,7 @@
 {section: Running the test suite under valgrind}
 
 =batch_test.pl= is the means by which the test suite is run.  It can
-be told to start up its own person condor and use that for testing.
+be told to start up its own personal HTCondor and use that for testing.
 The tests run by =batch_test.pl= may also start their own personal
 condors.
 
@@ -65,7 +65,7 @@
 {endverbatim}
 
 Then, in the same directory as the pile of =tests.*= files, run this script,
-which will collate the =tests.*= files associated with Condor daemons and
+which will collate the =tests.*= files associated with HTCondor daemons and
 tools into directories.
 
 {code}
@@ -107,10 +107,10 @@
         # make sure I have something to work with.
         next if (!defined($pieces[2]));
 
-        # get rid of an easy set of commands that aren't condor processes.
+        # get rid of an easy set of commands that aren't HTCondor processes.
         next if ($pieces[2] !~ m:condor_\w+$:);
 
-        # get rid of any condor process that appears to be a test suite
+        # get rid of any HTCondor process that appears to be a test suite
         # executable
         next if ($pieces[2] =~ m:condor_exec:);
 
@@ -149,7 +149,7 @@
 # from the central manager
 condor_off -startd
 # back on the startd node
-# become the user who runs condor
+# become the user who runs HTCondor
 ksu
 env LD_PRELOAD=/p/condor/workspaces/danb/google-perftools-0.98-rh5/lib/libtcmalloc.so HEAPPROFILE=/tmp/startd.hprof HEAP_PROFILE_ALLOCATION_INTERVAL=5000000  /unsup/condor/sbin/condor_startd -f >& /tmp/startd.hprof.log < /dev/null &
 {endverbatim}
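+
+The heap profiles tcmalloc writes (=/tmp/startd.hprof.0001.heap= and so on) can be read with the =pprof= script that ships with google-perftools.  A sketch, with the profile file names assumed for illustration:
+
+{verbatim}
+# text summary of where the heap went (the binary must be the one that was profiled)
+pprof --text /unsup/condor/sbin/condor_startd /tmp/startd.hprof.0123.heap
+# diff two dumps to see what grew between them
+pprof --base=/tmp/startd.hprof.0001.heap --text /unsup/condor/sbin/condor_startd /tmp/startd.hprof.0123.heap
+{endverbatim}
+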
@@ -171,7 +171,7 @@
 
 http://koji.hep.caltech.edu/koji/buildinfo?buildID=557.
 
-Condor isn't really performance critical, but let's use it for tracking leaks.  Igprof normally tracks a process from beginning to end, and dumps a profile at process exit.  Instead, we'll use it to monitor the Condor tree and dump a periodic heap dump.  A useful invocation of condor_master under igprof follows:
+HTCondor isn't really performance critical, but let's use it for tracking leaks.  Igprof normally tracks a process from beginning to end, and dumps a profile at process exit.  Instead, we'll use it to monitor the HTCondor tree and take periodic heap dumps.  A useful invocation of condor_master under igprof follows:
 
 {verbatim}
 igprof -D /var/log/condor/dump_profile -mp condor_master
@@ -189,7 +189,7 @@
 igprof -D /var/log/condor/dump_profile -t condor_schedd -mp condor_master
 {endverbatim}
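+
+To turn a dump into something readable, igprof ships an =igprof-analyse= tool.  Something along these lines should work; the dump file name, counter name (=MEM_LIVE=), and output path are assumptions, so check the igprof documentation for your version:
+
+{verbatim}
+# summarize live heap allocations from a memory-profiler dump
+igprof-analyse -d -v -g -r MEM_LIVE igprof.condor_schedd.1234.gz > /tmp/schedd-heap-report.txt
+{endverbatim}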
 
-Be careful about the dump_profile file: the condor process will attempt to remove it after dumping the profile; if it is owned by root and the process runs as user condor, the removal will fail and the heap dump will occur again 1/3 second later.
+Be careful about the dump_profile file: the HTCondor process will attempt to remove it after dumping the profile; if it is owned by root and the process runs as user condor, the removal will fail and the heap dump will occur again 1/3 second later.
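+
+Given that caveat, create the trigger file as the same user the daemon runs as (assumed here to be user condor), for example:
+
+{verbatim}
+# trigger a single heap dump; owning the file as user condor lets the daemon delete it afterwards
+sudo -u condor touch /var/log/condor/dump_profile
+{endverbatim}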
 
 If I'm tracking a slow leak, I set up a cron job to do a periodic dump: