+{section: Version 8.9.3 and later}
+
+Consult http://htcondor.readthedocs.io/en/latest instead.
+
+{section: Version 8.8}
+
+Use the rest of this page as written.
+
+{section: Version 8.6}
+
+Use =+CheckpointExitBySignal=, =+CheckpointExitSignal=, and =+CheckpointExitCode=, respectively, instead of =+SuccessCheckpointExitBySignal=, =+SuccessCheckpointExitSignal=, and =+SuccessCheckpointExitCode=.
+
+----
+
 This HOWTO is about writing jobs for an executable which periodically saves checkpoint information, and how to make HTCondor store that information safely, in case it's needed to continue the job on another machine (or at another time).
 
 This HOWTO is _not_ about how to checkpoint a given executable; that's up to you or your software provider.
 
+
 {section: How To Run Self-Checkpointing Jobs}
 
-The best way to run self-checkpointing code is to set =CheckpointExitCode= in your submit file.  If the =executable= exits with =CheckpointExitCode=, HTCondor will transfer the checkpoint to the submit node, and then immediately restart the =executable= in the same sandbox on the same machine, with same the =arguments=.  This immediate transfer makes the checkpoint available for continuing the job even if the job is interrupted in a way that doesn't allow for files to be transferred (e.g., power failure), or if the file transfer doesn't complete in the time allowed.
+The best way to run self-checkpointing code is to set =+WantFTOnCheckpoint= to =TRUE= and =+SuccessCheckpointExitCode= in your submit file.  If the =executable= exits with =+SuccessCheckpointExitCode=, HTCondor will transfer the checkpoint to the submit node, and then immediately restart the =executable= in the same sandbox on the same machine, with the same =arguments=.  This immediate transfer makes the checkpoint available for continuing the job even if the job is interrupted in a way that doesn't allow for files to be transferred (e.g., power failure), or if the file transfer doesn't complete in the time allowed.
 
-Before a job can use =CheckpointExitCode=, its =executable= must meet a number of requirements.
+Before a job can use =+SuccessCheckpointExitCode=, its =executable= must meet a number of requirements.
 
 {subsection: Requirements}
 
@@ -19,7 +34,7 @@
 3:  Starting your executable up from a checkpoint is relatively quick.
 *::  If starting your executable up from a checkpoint is relatively slow, your job may not run efficiently enough to be useful, depending on the frequency of checkpoints and interruptions.
 
-{subsection: Using CheckpointExitCode}
+{subsection: Using +SuccessCheckpointExitCode}
 
 The following Python script is a toy example of code that checkpoints itself.  It counts from 0 to 10 (exclusive), sleeping for 10 seconds at each step.  It writes a checkpoint file (containing the next number) after each nap, and exits with code 77 at counts 3, 6, and 9.  It exits with code 0 when complete.
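A minimal sketch consistent with that description (the checkpoint file name comes from the submit file below; the actual script on this page may differ in details):

```python
import os
import sys
import time

CKPT = '77.ckpt'  # checkpoint file name, matching the submit file below

def read_checkpoint():
    """Return the count to resume from, or 0 if no checkpoint exists."""
    try:
        with open(CKPT) as f:
            return int(f.read())
    except (IOError, ValueError):
        return 0

def main():
    for i in range(read_checkpoint(), 10):
        print(i)
        time.sleep(10)               # pretend to do ten seconds of work
        with open(CKPT, 'w') as f:   # record the next step to run
            f.write(str(i + 1))
        if i + 1 in (3, 6, 9):       # exit 77 so HTCondor transfers
            return 77                # the checkpoint file
    return 0                         # exit 0: the job is complete

if __name__ == '__main__':
    sys.exit(main())
```

Because the script records the *next* step, restarting it after a code-77 exit picks up exactly where it left off.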
 
@@ -53,7 +68,8 @@
 The following submit file instructs HTCondor to transfer the file '77.ckpt' to the submit node whenever the script exits with code 77.  If interrupted, the job will resume from the most recent of those checkpoints.  Note that you _must_ include your checkpoint file(s) in =transfer_output_files=; otherwise, HTCondor will not transfer them.
 
 {file: 77.submit}
-CheckpointExitCode          = 77
++WantFTOnCheckpoint         = TRUE
++SuccessCheckpointExitCode  = 77
 transfer_output_files       = 77.ckpt
 should_transfer_files       = yes
 
@@ -108,7 +124,7 @@
 The basic technique here is to write a wrapper script (or modify an existing one), so that the =executable= has the necessary behavior, even if the code does not.
 
 1: _Your executable exits after taking a checkpoint with an exit code it does not otherwise use._
-*:: If your code exits when it takes a checkpoint, but not with a unique code, your wrapper script will have to determine, when the executable exits, if it did so because it took a checkpoint.  If so, the wrapper script will have to exit with a unique code.  If the code could usefully exit with any code, and the wrapper script therefore can not exit with a unique code, you can instead instruct HTCondor to consider being kill by a particular signal as a sign of successful checkpoint; set =+SuccessCheckpointExitBySignal= to =TRUE= and =+SuccessCheckpointExitSignal= to the particular signal.  (If you do not set =CheckpointExitCode=, you must set =+WantFTOnCheckpoint=.)
+*:: If your code exits when it takes a checkpoint, but not with a unique code, your wrapper script will have to determine, when the executable exits, whether it did so because it took a checkpoint.  If so, the wrapper script will have to exit with a unique code.  If the code could usefully exit with any code, and the wrapper script therefore cannot exit with a unique code, you can instead instruct HTCondor to consider being killed by a particular signal as a sign of a successful checkpoint; set =+SuccessCheckpointExitBySignal= to =TRUE= and =+SuccessCheckpointExitSignal= to the particular signal.
 *:: If your code does not exit when it takes a checkpoint, the wrapper script will have to determine when a checkpoint has been made, kill the program, and then exit with a unique code.
 2: _When restarted, your executable determines on its own if a checkpoint is available, and if so, uses it._
 *:: If your code requires different arguments to start from a checkpoint, the wrapper script must check for the presence of a checkpoint and start the executable with correspondingly modified arguments.
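A wrapper handling both requirements might look like the following sketch.  Everything here is illustrative: =mycode=, its flags, the =state.ckpt= file, and the =checkpoint-taken= marker file are hypothetical names, not part of any real tool, and exit code 77 is an arbitrary choice of unique code.

```python
import os
import subprocess
import sys

CKPT = 'state.ckpt'          # hypothetical checkpoint file name
MARKER = 'checkpoint-taken'  # hypothetical marker the code leaves behind

def run_once():
    # Requirement 2: if a checkpoint exists, start the (hypothetical)
    # code with different arguments so it resumes from the checkpoint.
    if os.path.exists(CKPT):
        args = ['./mycode', '--resume-from', CKPT]
    else:
        args = ['./mycode', '--start-fresh']
    rc = subprocess.call(args)

    # Requirement 1: here we pretend the code exits 0 both when it
    # checkpoints and when it finishes, so we translate a checkpoint
    # exit into the unique code 77 by checking for a marker file.
    if rc == 0 and os.path.exists(MARKER):
        os.remove(MARKER)
        return 77
    return rc

if __name__ == '__main__':
    sys.exit(run_once())
```

How the wrapper detects "the last exit was a checkpoint" depends entirely on your code; a marker file is just one possibility.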
@@ -125,7 +141,7 @@
 
 The basic idea is to take checkpoints as the job runs, but not transfer them back to the submit node until the job is evicted.  This implies that your =executable= doesn't exit until the job is complete (which is the normal case).  If your code has long start-up delays, you'll naturally not want it to exit after it writes a checkpoint; if start-up is cheap, a wrapper script could instead restart the code as necessary.
 
-To use this method, set =when_to_transfer_output= to =ON_EXIT_OR_EVICT= instead of setting =CheckpointExitCode=.  This will cause HTCondor to transfer your checkpoint file(s) (which you listed in =transfer_output_files=, as noted above) when the job is evicted.  Of course, since this is the only time your checkpoint file(s) will be transferred, if the transfer fails, your job has to start over from the beginning.  One reason file transfer on eviction fails is if it takes too long, so this method may not work if your =transfer_output_files= contain too much data.
+To use this method, set =when_to_transfer_output= to =ON_EXIT_OR_EVICT= instead of setting =+SuccessCheckpointExitCode=.  This will cause HTCondor to transfer your checkpoint file(s) (which you listed in =transfer_output_files=, as noted above) when the job is evicted.  Of course, since this is the only time your checkpoint file(s) will be transferred, if the transfer fails, your job has to start over from the beginning.  One reason file transfer on eviction fails is if it takes too long, so this method may not work if your =transfer_output_files= contain too much data.
 
 Furthermore, eviction can happen at any time, including while the code is updating its checkpoint file(s).  If the code does not update its checkpoint file(s) atomically, HTCondor will transfer the partially-updated checkpoint file(s), potentially overwriting the previous, complete one(s); this will probably prevent the code from picking up where it left off.
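If you control the code, the usual fix is to write the checkpoint to a temporary file and then rename it into place; on POSIX filesystems the rename is atomic, so an eviction mid-update transfers either the old checkpoint or the new one, never a partial file.  A sketch:

```python
import os

def write_checkpoint_atomically(path, data):
    # Write to a temporary file in the same directory, flush it to
    # disk, then rename it over the old checkpoint.  Readers (and file
    # transfer) see either the old file or the new one, never a
    # partially-written update.
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, path)
```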
 
@@ -133,23 +149,19 @@
 
 {subsubsection: Manual Transfers}
 
-If you're comfortable with programming, instead of running a job with =CheckpointExitCode=, you could use =condor_chirp=, or other tools, to manage your checkpoint file(s).  Your =executable= would be responsible for downloading the checkpoint file(s) on start-up, and periodically uploading the checkpoint file(s) during execution.  We don't recommend you do this for the same reasons we recommend against managing your own input and output file transfers.
+If you're comfortable with programming, instead of running a job with =+SuccessCheckpointExitCode=, you could use =condor_chirp=, or other tools, to manage your checkpoint file(s).  Your =executable= would be responsible for downloading the checkpoint file(s) on start-up, and periodically uploading the checkpoint file(s) during execution.  We don't recommend you do this for the same reasons we recommend against managing your own input and output file transfers.
 
 {subsubsection: Early Checkpoint Exits}
 
-
-
 If your executable's natural checkpoint interval is half or more of your pool's max job runtime, it may make sense to checkpoint and then immediately ask to be rescheduled, rather than lower your user priority by doing work you know will be thrown away.  In this case, you can use the =OnExitRemove= job attribute to determine if your job should be rescheduled after exiting.  Don't set =ON_EXIT_OR_EVICT=, and don't set =+WantFTOnCheckpoint=; just have the job exit with a unique code after its checkpoint.
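For example, a submit file fragment along these lines would put the job back in the queue after such an exit.  The exit code 85 here is an arbitrary stand-in for whatever unique code your job uses:

```
# When OnExitRemove evaluates to FALSE, the job is rescheduled instead
# of leaving the queue.  85 stands in for the job's unique
# "checkpointed early" exit code.
on_exit_remove = (ExitBySignal == false) && (ExitCode != 85)
```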
 
-
-
 {subsection: Signals}
 
 Signals offer additional options for running self-checkpointing jobs.  If you're not familiar with signals, this section may not make sense to you.
 
 {subsubsection: Periodic Signals}
 
-HTCondor supports transferring checkpoint file(s) for =executable=s which take a checkpoint when sent a particular signal, if the =executable= then exits in a unique way.  Set =+WantCheckpointSignal= to =TRUE= to periodically receive checkpoint signals, and =+CheckpointSig= to specify which one.  (The interval is specified by the administrator of the execute machine.)  The unique way be a specific exit code, for which you would set =CheckpointExitCode=, or a signal, for which you would set =+SuccessCheckpointExitBySignal= to =TRUE= and =+SuccessCheckpointExitSignal= to the particular signal.  (If you do not set =CheckpointExitCode=, you must set =+WantFTOnCheckpoint=.)
+HTCondor supports transferring checkpoint file(s) for =executable=s which take a checkpoint when sent a particular signal, if the =executable= then exits in a unique way.  Set =+WantCheckpointSignal= to =TRUE= to periodically receive checkpoint signals, and =+CheckpointSig= to specify which one.  (The interval is specified by the administrator of the execute machine.)  The unique way may be a specific exit code, for which you would set =+SuccessCheckpointExitCode=, or a signal, for which you would set =+SuccessCheckpointExitBySignal= to =TRUE= and =+SuccessCheckpointExitSignal= to the particular signal.
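Putting those attributes together, a submit file fragment might look like the following.  The signal name and exit code here are illustrative choices, and the quoted-string form of the =+CheckpointSig= value is an assumption:

```
# Sketch: ask to be sent SIGUSR1 periodically, and treat a subsequent
# exit with code 77 as a successful checkpoint.
+WantFTOnCheckpoint         = TRUE
+WantCheckpointSignal       = TRUE
+CheckpointSig              = "SIGUSR1"
+SuccessCheckpointExitCode  = 77
```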
 
 {subsubsection: Delayed Transfer with Signals}