How To Run Self-Checkpointing Jobs

This HOWTO describes how to use the features of HTCondor to run self-checkpointing jobs. This HOWTO is not about how to checkpoint a given job; that's up to you or your software provider.

This HOWTO focuses on using +WantFTOnCheckpoint, a submit command which causes HTCondor to do file transfer when the job checkpoints. This feature makes a number of assumptions about the job, detailed in the 'Assumptions' section. These assumptions may not be true for your job, but in many cases, you will be able to modify your job (by adding a wrapper script or altering an existing one) to make them true. If you can not, consult the 'Working Around the Assumptions' and/or 'Other Options' sections.

General Idea

The general idea of checkpointing is, of course, for your job to save its progress in such a way that it can resume forward progress after being interrupted. Interruptions in the HTCondor system generally take three forms: temporary (eviction and preemption), permanent (condor_rm), and recoverable failures (network outages, software faults, hardware failures, being held). Of course, no checkpoint system can recover from a permanent interruption, but +WantFTOnCheckpoint -- unlike most of the approaches in the 'Other Options' section -- readily allows recovery from all the others.

The scenario runs as follows:

  1. Your job takes a checkpoint and then exits with a unique exit code.
  2. HTCondor recognizes the unique exit code and does file transfer; this file transfer presently behaves as it would if the job were being evicted or preempted. In particular, this requires when_to_transfer_output to be set to ON_EXIT_OR_EVICT.
  3. After the job's transfer_output_files are successfully sent to the submit node (and are stored in the schedd's spool directory, as normal for file transfer on eviction/preempt), HTCondor restarts the job exactly as it started it the first time.
  4. After something interrupts the job, HTCondor reschedules it as normal. As usual, HTCondor will start the job exactly as it started it the first time, but instead of a fresh copy of your transfer_input_files, the sandbox will be populated from the transfer_output_files stored on the submit node in the previous step. (A sketch of the corresponding submit-file settings follows this list.)
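
To make the scenario concrete, here is a rough sketch of the corresponding submit-file settings. The executable, file names, and the exit code 85 are placeholders; the attribute naming the checkpoint exit code is written here as +SuccessCheckpointExitCode, which you should confirm against the documentation for your version of HTCondor.

    # Minimal sketch of a self-checkpointing submit file; names and values
    # below are illustrative, not prescriptive.
    executable              = my_job.sh
    arguments               = --checkpoint-file my_job.ckpt

    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT_OR_EVICT
    transfer_output_files   = my_job.ckpt

    # Transfer files and restart the job each time it exits with the
    # checkpoint code; the attribute name may vary by HTCondor version.
    +WantFTOnCheckpoint        = TRUE
    +SuccessCheckpointExitCode = 85

    queue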

Assumptions

  1. Your job exits after taking a checkpoint with an exit code it does not otherwise use. *: If your job does not exit when it takes a checkpoint, HTCondor can not (currently) transfer its checkpoint. If your job does not exit with a unique code when it takes a checkpoint, HTCondor will transfer files and restart the job whenever the job exits with that code; if the checkpoint code and the terminal exit code are the same, your job will never finish.
  2. When restarted, your job determines on its own if a checkpoint is available, and if so, uses it. *: If your job does not look for a checkpoint each time it starts up, it will start from scratch each time; HTCondor does not run a different command line when restarting a job which has transferred a checkpoint.
  3. Starting your job up from a checkpoint is relatively quick. *: If starting your job up from a checkpoint is relatively slow, your job may not run efficiently enough to be useful, depending on the frequency of checkpoints and interruptions.
  4. Your job atomically updates its checkpoint(s). *: Because interruptions may occur at any time, if your job does not update its checkpoints atomically, HTCondor may transfer a partially-updated checkpoint. (A sketch of a job that satisfies these assumptions follows this list.)
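
To make the assumptions concrete, the following shell sketch outlines one way a self-checkpointing job might be structured. The file name, the exit code 85, and the counting loop are illustrative assumptions, not anything required by HTCondor.

    #!/bin/bash
    # Illustrative self-checkpointing job: counts to 10000 in chunks of
    # 1000, checkpointing and exiting with code 85 after each chunk.

    CKPT=my_job.ckpt

    # Assumption 2: look for a checkpoint on every start-up and use it if present.
    count=0
    if [ -f "${CKPT}" ]; then
        count=$(cat "${CKPT}")
    fi

    while [ "${count}" -lt 10000 ]; do
        count=$((count + 1))
        sleep 1    # stand-in for a unit of real work

        if [ $((count % 1000)) -eq 0 ]; then
            # Assumption 4: update the checkpoint atomically (write, then rename).
            echo "${count}" > "${CKPT}.tmp"
            mv "${CKPT}.tmp" "${CKPT}"
            # Assumption 1: exit with a code the job does not otherwise use.
            exit 85
        fi
    done

    # The real work is done; exit normally.
    exit 0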

Using +WantFTOnCheckpoint

[extended example]

How Frequently to Checkpoint

FIXME

Debugging Checkpoints

FIXME (Use condor_vacate_job, condor_hold, and condor_transfer_data.)
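
One plausible way to exercise the checkpoint path with those tools is sketched below; 123.0 is a placeholder job ID, and whether condor_transfer_data retrieves the spooled checkpoint may depend on your HTCondor version and configuration.

    # Force an eviction and watch (e.g., in the job's user log) whether the
    # job resumes from its checkpoint rather than starting over.
    condor_vacate_job 123.0

    # Alternatively, hold the job, pull its spooled files back to the submit
    # node, and inspect the checkpoint file(s); release the job when done.
    condor_hold 123.0
    condor_transfer_data 123.0
    condor_release 123.0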

Working Around the Assumptions

  1. If your job can be made to exit after taking a checkpoint, but does not return a unique exit code when doing so, a wrapper script may be able to inspect the sandbox after the job exits and determine whether a checkpoint was successfully created; if so, the wrapper script can return a unique value. If the job may exit with literally any code, HTCondor can instead treat being killed by a particular (unique) signal as the sign of a successful checkpoint; set +SuccessCheckpointExitBySignal to TRUE and +SuccessCheckpointExitSignal to the particular signal. If your job can not be made to exit after taking a checkpoint, a wrapper script may be able to determine when a successful checkpoint has been taken and kill the job itself.
  2. If your job requires different arguments to start from a checkpoint, you can wrap your job in a script which checks for the presence of a checkpoint and runs the job with the corresponding arguments.
  3. If it takes too long to start your job up from a checkpoint, you will need to take checkpoints less often to make quicker progress. This, of course, increases the risk of losing a substantial amount of work when your job is interrupted. See also the 'Delayed and Manual Transfers' section.
  4. If your job does not update its checkpoints atomically, you can use a wrapper script to atomically update some other file(s) and treat those as the checkpoints. More specifically, if your job writes a checkpoint incrementally to a file called 'checkpoint', but exits with code 17 when it has finished doing so, your wrapper script can check for exit code 17 and then rename 'checkpoint' to 'checkpoint.atomic'. Because rename is an atomic operation, this prevents HTCondor from transferring a partial checkpoint file, even if the job is interrupted in the middle of taking a checkpoint. Your wrapper script will also have to copy 'checkpoint.atomic' to 'checkpoint' before starting the job, so that the job (a) uses the safe checkpoint file and (b) doesn't corrupt that checkpoint file if interrupted at a bad time. (A sketch of such a wrapper follows this list.) Future versions of HTCondor may remove the requirement for the job to set when_to_transfer_output to ON_EXIT_OR_EVICT. Doing so would relax this requirement; the job would only have to ensure that its checkpoint was complete and consistent (if stored in multiple files) when it exited. (HTCondor does not partially update the sandbox stored in spool: either every file successfully transfers back, or none of them do.)
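
As an illustration of the wrapper approach in item 4, here is a rough sketch; the program name my_job, the file names, and the exit code 17 are the hypothetical ones from the item above.

    #!/bin/bash
    # Hypothetical wrapper implementing the atomic-rename scheme from item 4.

    # Before starting the job, restore the last safe checkpoint, if any.
    if [ -f checkpoint.atomic ]; then
        cp checkpoint.atomic checkpoint
    fi

    ./my_job "$@"
    rv=$?

    if [ ${rv} -eq 17 ]; then
        # The job finished writing 'checkpoint'; publish it atomically.
        mv checkpoint checkpoint.atomic
    fi

    # Propagate the job's exit code so HTCondor sees the checkpoint code.
    exit ${rv}

With a wrapper like this, transfer_output_files would list 'checkpoint.atomic' rather than 'checkpoint'.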

Future versions of HTCondor may provide for explicit coordination between the job and HTCondor. Modifying a job to explicitly coordinate with HTCondor would substantially alter the assumptions described above.

Other Options

The other sections of this HOWTO explain how a job meeting this HOWTO's assumptions can take checkpoints at arbitrary intervals and transfer them back to the submit node. Although this is the method of operation most likely to result in an interrupted job continuing from a valid checkpoint, other, less reliable options exist.

Delayed and Manual Transfers

If your job takes checkpoints but can not exit with a unique code when it does, you have two options. The first is much simpler, but only preserves progress when your job is evicted (e.g., not when the machine shuts down or the network fails). To ensure that the checkpoint(s) a job has already taken are transferred when evicted, set when_to_transfer_output to ON_EXIT_OR_EVICT, and include the checkpoint file(s) in transfer_output_files. All the other assumptions still apply, except that quick restarts may be less important if eviction is infrequent in your pool.
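
For this first option, the submit-file additions are roughly as follows (the checkpoint file name is a placeholder):

    # Transfer the job's checkpoint file(s) back to the submit node on eviction.
    when_to_transfer_output = ON_EXIT_OR_EVICT
    transfer_output_files   = my_job.ckpt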

If your job can determine when it has successfully taken a checkpoint, but it can not stop when it does, or doing so is too expensive, it could instead transfer its checkpoints without HTCondor's help. In most cases, this will involve using condor_chirp (by setting +WantIOProxy to TRUE and calling the condor_chirp command-line tool). Your job would be responsible for fetching its own checkpoint file(s) on start-up. (You could also create an empty checkpoint file and list it as part of transfer_input_files.)
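
A rough sketch of this manual approach, assuming a checkpoint file named my_job.ckpt and a wrapper script around the job:

    # In the submit file: enable the I/O proxy so condor_chirp works.
    +WantIOProxy = TRUE

    # In the job or its wrapper, after a checkpoint is complete:
    # condor_chirp put copies a file from the execute machine back to the
    # job's directory on the submit machine.
    condor_chirp put my_job.ckpt my_job.ckpt

    # On start-up, try to fetch a previously stored checkpoint; ignore the
    # failure if none exists yet.
    condor_chirp fetch my_job.ckpt my_job.ckpt || true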

Signals

If your job can not spontaneously exit with a unique exit code after taking a checkpoint, but can take a checkpoint when sent a particular signal and then exit in a unique way, you may set +WantCheckpointSignal to TRUE, and +CheckpointSig to the particular signal. HTCondor will send this signal to the job at an interval set by the administrator of the execute machine; if the job exits as specified by the SuccessCheckpoint job attributes, its files will be transferred and the job restarted, as usual. This method should be about as reliable as having the job spontaneously exit after taking a checkpoint.
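
The submit-file additions for this method look roughly like the following; SIGUSR2 is a placeholder, and the exact value format for +CheckpointSig (signal name versus number, quoted or not) may depend on your HTCondor version.

    # Ask HTCondor to periodically signal the job to checkpoint.
    +WantCheckpointSignal = TRUE
    +CheckpointSig        = "SIGUSR2"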

Reactive Checkpoints

Instead of taking a checkpoint at some interval, it is possible, for some specific interruptions, to take a checkpoint only when interrupted. Specifically, if your execution resources are generally reliable, and your job's checkpoints are both quick to take and small, your job may be able to generate a checkpoint, and transfer it back to the submit node, at the time your job is preempted. This works like the previous section, except that you set when_to_transfer_output to ON_EXIT_OR_EVICT and KillSig to the particular signal, and the signal is only sent when your job is preempted. The administrator of the execute machine determines the maximum amount of time a job is allowed to run after receiving its KillSig; a job may request a longer delay than the machine's default by setting JobMaxVacateTime (but this will be capped by the administrator's setting).
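
In submit-file terms, this might look like the following; SIGUSR2 and the 300-second request are placeholders, and kill_sig and job_max_vacate_time are the submit commands that set the KillSig and JobMaxVacateTime attributes mentioned above.

    # Checkpoint reactively: HTCondor sends kill_sig when preempting the job.
    when_to_transfer_output = ON_EXIT_OR_EVICT
    kill_sig                = SIGUSR2
    # Ask for extra time to write and transfer the checkpoint; the execute
    # machine's administrator caps this.
    job_max_vacate_time     = 300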

You should probably only use this method of operation if your job runs on an HTCondor pool too old to support +WantFTOnCheckpoint, or the pool administrator has disallowed use of the feature (because it can be resource-intensive).