-{section: How to shut down Condor without killing jobs}
+{section: How to shut down HTCondor without killing jobs}
 
-Known to work with Condor version: 7.0
+Known to work with HTCondor version: 7.0
 
 {subsection: How to shut down a single execute node without killing jobs}
 
@@ -32,7 +32,7 @@
 MAX_JOBS_SUBMITTED=0
 {endcode}
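+
+Assuming {quote: MAX_JOBS_SUBMITTED} is set in the schedd's configuration file, one way to make the change take effect without restarting any daemons is to reconfigure the host, for example:
+
+{code}
+condor_reconfig <hostname>
+{endcode}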
 
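+Before shutting the schedd down, one way to confirm that its queue has drained is to run {quote: condor_q} on the machine in question, or to point it at that schedd remotely, for example:
+
+{code}
+condor_q -name <hostname>
+{endcode}
+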
-Once all jobs have completed, turn off condor by issuing the following command from the central manager, or, depending on your security policy, from wherever and as whomever you need to be to issue administrative commands.
+Once all jobs have completed, turn off HTCondor by issuing the following command from the central manager, or, depending on your security policy, from wherever and as whomever you need to be to issue administrative commands.
 
 {code}
 condor_off -schedd -peaceful <hostname>
@@ -40,13 +40,13 @@
 
 {subsection: How to shut down a submit node after finishing all running jobs in the queue}
 
-As of Condor 7.1.1, you can do this by issuing the following command from the central manager, or, depending on your seucrity policy, from wherever and as whomever you need to be to issue administrative commands.
+As of HTCondor 7.1.1, you can do this by issuing the following command from the central manager, or, depending on your security policy, from wherever and as whomever you need to be to issue administrative commands.
 
 {code}
 condor_off -schedd -peaceful <hostname>
 {endcode}
 
-In versions prior to Condor 7.1.1, you can put all idle jobs on hold and then wait for the running jobs to finish.  Run the following command as a user with administrative privileges in the queue (e.g. root).
+In versions prior to HTCondor 7.1.1, you can put all idle jobs on hold and then wait for the running jobs to finish.  Run the following command as a user with administrative privileges in the queue (e.g. root).
 
 {code}
 condor_hold -constraint 'JobStatus == 1'
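+
+# One way to watch the remaining running jobs (JobStatus == 2) drain out:
+condor_q -constraint 'JobStatus == 2'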
@@ -66,7 +66,7 @@
 condor_off -schedd <hostname>
 {endcode}
 
-During graceful shutdown of the schedd, all running standard universe jobs are stopped and checkpointed.  All other jobs are left running (if they have a non-zero {quote: JobLeaseDuration}, which is 20 minutes by default).  The schedd gracefully disconnects from them in the hope of being able to later reconnect to the running jobs when it starts back up.  If the lease runs out before the schedd reconnects to the jobs, then they are killed.  Therefore, if you need a longer down time, you should increase the lease.  You can increase the default by adding the following to your Condor configuration:
+During graceful shutdown of the schedd, all running standard universe jobs are stopped and checkpointed.  All other jobs are left running (if they have a non-zero {quote: JobLeaseDuration}, which is 20 minutes by default).  The schedd gracefully disconnects from them in the hope of being able to later reconnect to the running jobs when it starts back up.  If the lease runs out before the schedd reconnects to the jobs, then they are killed.  Therefore, if you need a longer down time, you should increase the lease.  You can increase the default by adding the following to your HTCondor configuration:
 
 {code}
 JobLeaseDuration = 5400
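+# JobLeaseDuration is given in seconds, so 5400 corresponds to a 90-minute lease.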