For example, if your job has ID 635.0 and is logging to the file 'job.log', you can copy the checkpoint's files into a subdirectory of the current directory as follows:
 {code}
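+# Evict the running job; because the job uses when_to_transfer_output = ON_EXIT_OR_EVICT,
+# its files (including the checkpoint) are transferred back to the spool on the submit node.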
 condor_vacate_job 635.0
+
 # Wait for the job to finish being evicted; hit CTRL-C when you see 'Job was evicted.'
 tail --follow job.log
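+# Hold the job so it doesn't match and start running again while you inspect the checkpoint.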
 condor_hold 635.0
+
 # Copy the checkpoint files from the spool.
 # Note that _condor_stderr and _condor_stdout are the files corresponding to the job's
 # output and error submit commands; they aren't named correctly until the job finishes.
 cp -a `condor_config_val SPOOL`/635/0/cluster635.proc0.subproc0 .
 # Then examine the checkpoint files to see if they look right.
-# ...
+
 # When you're done, release the job to see if it actually works right.
 condor_release 635.0
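+# Then ssh into the restarted job to verify that it resumed from the checkpoint.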
 condor_ssh_to_job 635.0
@@ -116,11 +118,11 @@
 
 Future versions of HTCondor may remove the requirement for a job to set =when_to_transfer_output= to =ON_EXIT_OR_EVICT=.  Doing so would relax this requirement; the job would only have to ensure that its checkpoint was complete and consistent (if stored in multiple files) when it exited.  (HTCondor does not partially update the sandbox stored in spool: either every file successfully transfers back, or none of them do.)
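+
+For reference, a job following this HOWTO currently opts in to eviction-time transfers with a submit file line like the following (shown in isolation; the rest of the submit file is as described earlier in this HOWTO):
+{code}
+when_to_transfer_output = ON_EXIT_OR_EVICT
+{endcode}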
 
-Future versions of HTCondor may provide for explicit coordination between the job and HTCondor.  Modifying a job to explicitly coordinate with HTCondor would substantially alter the expectations.
+Future versions of HTCondor may provide for explicit coordination between the job and HTCondor.  Modifying a job to explicitly coordinate with HTCondor would substantially alter this HOWTO's assumptions.
 
 {subsection: Other Options}
 
-The other sections of this HOWTO explain how a job meeting this HOWTO's assumptions can take checkpoints at arbitrary intervals and transfer them back to the submit node.  Although this is the method of operation most likely to result in an interrupted job continuing from a valid checkpoint, other, less reliable options exist.
+The preceding sections of this HOWTO explain how a job meeting this HOWTO's assumptions can take checkpoints at arbitrary intervals and transfer them back to the submit node.  Although this is the method of operation most likely to result in an interrupted job continuing from a valid checkpoint, other, less reliable options exist.
 
 {subsubsection: Delayed and Manual Transfers}
 
@@ -134,7 +136,7 @@
 
 {subsubsection: Reactive Checkpoints}
 
-Instead of taking a checkpoint at some interval, it is possible, for some specific interruptions, to instead take a checkpoint when interrupted.  Specifically, if your execution resources are generally reliable, and your job's checkpoints both quick to take and small, your job may be able to generate a checkpoint, and transfer it back to the submit node, at the time your job is preempted.  This works like the previous section, except that you set =when_to_transfer_output= to =ON_EXIT_OR_EVICT= and =KillSig= to the particular signal, and the signal is only sent when your job is preempted.  The administrator of the execute machine determines the maximum amount of time is allowed to run after receiving its =KillSig=; a job may request a longer delay than the machine's default by setting =JobMaxVacateTime= (but this will be capped by the administrator's setting).
+Instead of taking a checkpoint at some interval, it is possible, for some types of interruption, to instead take a checkpoint when interrupted.  Specifically, if your execution resources are generally reliable, and your job's checkpoints are both quick to take and small, your job may be able to generate a checkpoint, and transfer it back to the submit node, at the time your job is evicted.  This works like the previous section, except that you set =when_to_transfer_output= to =ON_EXIT_OR_EVICT= and =KillSig= to the signal that causes your job to checkpoint, and that signal is only sent when your job is preempted.  The administrator of the execute machine determines the maximum amount of time a job is allowed to run after receiving its =KillSig=; a job may request a longer delay than the machine's default by setting =JobMaxVacateTime= (but this will be capped by the administrator's setting).
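+
+For example, a submit file using this approach might include lines like the following, where =kill_sig= and =job_max_vacate_time= are the submit commands corresponding to =KillSig= and =JobMaxVacateTime=; the particular signal and the 300-second vacate time are only illustrations, so substitute whichever signal your code treats as a checkpoint request and a delay long enough to write and transfer your checkpoint:
+{code}
+when_to_transfer_output = ON_EXIT_OR_EVICT
+kill_sig                = SIGUSR1
+job_max_vacate_time     = 300
+{endcode}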
 
 You should probably only use this method of operation if your job runs on an HTCondor pool too old to support =+WantFTOnCheckpoint=, or if the pool administrator has disallowed use of that feature (because it can be resource-intensive).