The optional contribution modules (also known as "contribs") are tools or plugins that operate on top of HTCondor. They are not part of the HTCondor development cycle, and the HTCondor team does not provide support for most of them (exceptions are noted in the respective module's description). Many of the modules are hosted in the condor_contrib folder of HTCondor's git repository and are distributed with the HTCondor source release; however, they are not part of HTCondor's release binaries.

If you want to provide your own contrib module, please follow the instructions on the wiki page ProvideContribModules.

List of current optional contribution modules

Makeflow

Makeflow is a workflow manager that allows you to express complex DAGs (Directed Acyclic Graphs) in a compact Make-like syntax and run them easily on your HTCondor pool. See: http://ccl.cse.nd.edu/software/makeflow/
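
To illustrate the Make-like syntax, a Makeflow rule lists a target, its sources, and the command that produces it; the file and program names below are made up:

    # produce output.dat from input.dat with ./simulate (hypothetical names)
    output.dat: input.dat simulate
        ./simulate input.dat > output.dat

Invoking makeflow with its HTCondor batch-system option (makeflow -T condor, per the Makeflow documentation) then dispatches each rule as an HTCondor job once its inputs exist.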

PyDagman

PyDagman is a Python package that simplifies the programmatic creation of DAG files for condor_dagman. The package was born from the frustration of writing one-off scripts that build DAG workflows to users' specifications, typically by reading parameters from a parameter file and then constructing the DAG file with loops and conditionals. PyDagman takes care of some of the more tedious aspects, such as string formatting and circular-dependency checking. See: https://github.com/brandentimm/pydagman
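
As a rough sketch of the intended usage (the class and method names here are assumptions based on typical usage; consult the project's README for the actual API):

    # hypothetical PyDagman usage: two jobs with a dependency
    from pydagman.dag import Dag
    from pydagman.job import Job

    dag = Dag('pipeline.dag')             # DAG file to generate
    first = Job('first.submit', 'first')
    second = Job('second.submit', 'second')
    second.add_parent(first)              # second runs only after first succeeds
    dag.add_job(first)
    dag.add_job(second)
    dag.save()                            # write pipeline.dag to disk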

htcondor_dag.py

htcondor_dag.py turns Python functions into HTCondor jobs. It writes out a DAG (Directed Acyclic Graph) defining the individual jobs and their dependencies, ready for submission to condor_dagman, which schedules their execution across a cluster of compute nodes. See: https://github.com/candlerb/htcondor_dag.py
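
Conceptually, usage looks something like the following sketch (names are assumptions for illustration; the module's real API is documented in its README):

    # conceptual sketch: turning a Python function call into a DAG node
    from htcondor_dag import Dag

    def add(a, b):
        return a + b

    dag = Dag('adder')       # hypothetical DAG name
    dag.defer(add, 1, 2)     # deferred call becomes one node in the DAG
    dag.write()              # emit the DAG file for condor_dagman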

HTCondor Quill

Quill stores job history data persistently in a database and allows HTCondor tools to query the database. See: HtcondorQuill

HTCondor DBQ

HTCondor DBQ provides a relational database management system interface to HTCondor. See: HtcondorDbq

Drop And Compute

DropAndCompute, from the University of Manchester, is an approach to using network (or grid or cloud based) computational resources without having to know the operating system of the resource's gateway or any command-line tools. It provides a Dropbox-style user interface for job submission and management.

HTCondor Pigeon

Pigeon allows queuing and forwarding of user log messages via AMQP. It consists of a broker and client tools. See: HtcondorPigeon
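
By way of illustration only, a minimal AMQP consumer for such forwarded messages could be written as follows (the broker host and queue name are hypothetical; Pigeon's own client tools are the intended interface):

    # consume forwarded HTCondor user log messages from an AMQP broker
    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='broker.example.com'))  # hypothetical host
    channel = connection.channel()

    def on_message(ch, method, properties, body):
        print(body)  # each message carries one forwarded user log event

    channel.basic_consume(queue='condor.userlog',  # hypothetical queue name
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()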

HTCondor Aviary

An alternative SOAP API to Birdbath, built using WSO2 and Axis2/C. See: HtcondorAviary

CondorAgent

An alternative API to the HTCondor scheduler based on a REST interface. CondorAgent is a program that runs alongside an HTCondor scheduler and provides enhanced access to scheduler data and scheduler actions via an HTTP-based REST interface. CondorAgent is deployed either as a shell-script-wrapped Python program (which requires Python 2.4 or greater) or as a Windows binary (which does not require a local Python installation). See: https://github.com/cyclecomputing/condor-agent
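
A client-side sketch, assuming a hypothetical endpoint (the actual REST routes and port are documented in the project's README):

    # fetch data from a CondorAgent REST endpoint over HTTP
    import urllib.request

    # host, port, and path below are placeholders for illustration
    url = 'http://schedd.example.com:8008/condor/jobs'
    with urllib.request.urlopen(url) as response:
        print(response.read().decode())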

HTCondor Plumage

A NoSQL operational data store framework that uses MongoDB. See: CondorPlumage
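
For illustration, browsing such a store with pymongo might look like this (the database and collection names are hypothetical; see the module's page for the actual schema):

    # inspect ClassAd-derived records in a Plumage-style MongoDB store
    from pymongo import MongoClient

    client = MongoClient('mongodb://localhost:27017')
    db = client['condor']                          # hypothetical database name
    for record in db['samples'].find().limit(5):   # hypothetical collection
        print(record)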

HTCondor Log Analyzer

This web site allows you to upload log files generated by HTCondor and get back graphs and an explanation of what happened in the system. This can aid in understanding a workload of hundreds or thousands of jobs.

HTCondor Log Viewer

Real-time visualization of events in the job event log via a Java Swing application. See: HtcondorLogViewer

HTCondor View

HTCondor View automatically generates web pages displaying usage statistics for your HTCondor pool. The module includes a shell script that invokes the condor_stats command to retrieve pool usage statistics from the HTCondor View server and generate HTML pages from the results. See: HtcondorViewClient

DMTCP/HTCondor Integration

DMTCP is a third-party, user-space checkpointing library which, through a shim script and extra information in one's submit description file, can checkpoint vanilla universe jobs. See: DmtcpCondor
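
The general shape of such a submit description file, with hypothetical file names (the actual shim script and required attributes are described on the module's page):

    # vanilla universe job checkpointed via a DMTCP shim (illustrative only)
    universe   = vanilla
    executable = dmtcp_shim.sh          # hypothetical shim wrapper script
    arguments  = ./my_long_job          # the real program, run under DMTCP
    transfer_input_files = my_long_job
    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT_OR_EVICT
    queue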

Stork

Stork is a batch scheduler specialized in data placement and data movement, based on the idea of making data placement a first-class entity in a distributed computing environment. See: http://www.storkproject.org
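
A data placement request in Stork is itself a small ClassAd-like description; the following sketch shows the general shape with made-up URLs (see the Stork documentation for the exact attribute set):

    [
      dap_type = "transfer";
      src_url  = "file:/tmp/example.dat";
      dest_url = "gsiftp://remote.example.com/data/example.dat";
    ]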

Remote HTCondor

Remote HTCondor allows a user to submit and monitor batch jobs through a remote instance of HTCondor without having to install HTCondor locally. See: RemoteCondor

CL-MW: A Master/Slave Distributed Computing Library in Common Lisp

See: ClMw

HDFS

The Hadoop Distributed File System (HDFS) is a user-space distributed file system maintained by the Apache project. The condor_hdfs daemon manages the running of the Java-based HDFS daemon. See: HadoopDistributedFileSystemModule
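
Enabling it follows the usual pattern for adding a daemon to an HTCondor configuration; the parameter names below are illustrative, so consult the module's page for the exact ones:

    # run the condor_hdfs daemon alongside the other HTCondor daemons
    DAEMON_LIST = $(DAEMON_LIST), HDFS
    HDFS_HOME   = /usr/lib/hadoop       # hypothetical Hadoop install location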