The optional contribution modules (aka "contribs") are tools or plugins that operate on top of HTCondor. None of them is part of the HTCondor development cycle, and the HTCondor team does not provide support for most of them (exceptions are noted in the respective module). Many of the modules are hosted in the condor_contrib folder of HTCondor's git repository and are distributed with the HTCondor source release; however, they are not part of HTCondor's release binaries. If you want to provide your own contrib module, please follow the instructions on the wiki page ProvideContribModules.

{section: List of current optional contribution modules}

{subsection: Makeflow}

Makeflow is a workflow manager that allows you to express complex DAGs (Directed Acyclic Graphs) in a compact Make-like syntax and run them easily on your HTCondor pool (a one-rule example is sketched below, after the PyDagman entry).

See: http://ccl.cse.nd.edu/software/makeflow/

{subsection: PyDagman}

{quote: PyDagman} is a Python package that simplifies the programmatic creation of DAG files for condor_dagman. The package was born out of the frustration of writing one-off scripts to create DAG files for the users I support. We regularly assist users in creating DAG workflows to their specifications, usually by reading parameters from a parameter file and then programmatically building the DAG file using loops and conditionals. This package takes care of some of the more annoying aspects, such as string formatting and circular dependency checking. A sketch of the kind of DAG file such tools generate also appears below.

See: https://github.com/brandentimm/pydagman
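To give a flavor of the Make-like syntax, here is a minimal, hypothetical Makeflow file with a single rule; the file names and the command are placeholders, not part of any real workflow.

{code}
# example.makeflow -- one rule: out.txt is built from in.txt.
# As in Make, a rule reads "targets: sources", and the command
# line beneath it is tab-indented.
out.txt: in.txt
	tr a-z A-Z < in.txt > out.txt
{endcode}

Running {quote: makeflow -T condor example.makeflow} then dispatches each rule's command as an HTCondor job once its sources are available.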
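Likewise, here is a minimal condor_dagman input file of the kind that PyDagman (and htcondor_dag.py, below) writes out; the node names and submit description file names are placeholders.

{code}
# diamond.dag -- a four-node DAG. Each JOB line binds a node name to
# an HTCondor submit description file; PARENT/CHILD lines give the edges.
JOB A a.sub
JOB B b.sub
JOB C c.sub
JOB D d.sub
# B and C run after A completes; D runs after both B and C complete.
PARENT A CHILD B C
PARENT B C CHILD D
{endcode}

Such a file is submitted with {quote: condor_submit_dag diamond.dag}, and DAGMan releases each node only after all of its parents have completed successfully.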
{subsection: htcondor_dag.py}

htcondor_dag.py turns Python functions into HTCondor jobs. It writes out a DAG (Directed Acyclic Graph) defining the individual jobs and their dependencies, ready for submission to condor_dagman, which schedules their execution across a cluster of compute nodes.

See: https://github.com/candlerb/htcondor_dag.py

{subsection: HTCondor Quill}

Quill stores job history data persistently in a database and allows HTCondor tools to query the database.

See: HtcondorQuill

{subsection: HTCondor DBQ}

HTCondor DBQ provides a relational database management system interface to HTCondor.

See: HtcondorDbq

{subsection: Drop And Compute}

{link: http://www.walkingrandomly.com/?p=3339 DropAndCompute}, from the University of Manchester, is an approach to using network (or grid or cloud based) computational resources without having to know the operating system of the resource's gateway or any command-line tools. It provides a {quote: DropBox}-style user interface for job submission and management.

{subsection: HTCondor Pigeon}

Pigeon allows queuing and forwarding of user log messages via AMQP. It consists of a broker and client tools.

See: HtcondorPigeon

{subsection: HTCondor Aviary}

An alternative SOAP API to Birdbath that uses WSO2 and Axis2/C.

See: HtcondorAviary

{subsection: CondorAgent}

An alternative API to the HTCondor scheduler, based on a REST interface. {quote: CondorAgent} is a program that runs beside an HTCondor scheduler. It provides enhanced access to scheduler-based data and scheduler actions via an HTTP-based REST interface. {quote: CondorAgent} is deployed either as a shell-script-wrapped Python program (which requires Python 2.4 or greater) or as a Windows binary (which does not require a local Python installation).

See: https://github.com/cyclecomputing/condor-agent

{subsection: HTCondor Plumage}

A NoSQL operational data store framework that uses MongoDB.

See: CondorPlumage

{subsection: HTCondor Log Analyzer}

{link: http://condorlog.cse.nd.edu This web site} allows you to upload log files generated by the HTCondor system and get back graphics and an explanation of what happened in the system. This can aid in understanding a workload of hundreds or thousands of jobs.

{subsection: HTCondor Log Viewer}

Real-time visualization of events in the job event log via a Java Swing application.

See: HtcondorLogViewer

{subsection: HTCondor View}

HTCondor View is used to automatically generate World Wide Web (WWW) pages displaying usage statistics of your HTCondor pool. Included in the module is a shell script that invokes the condor_stats command to retrieve pool usage statistics from the HTCondor View server and generate HTML pages from the results.

See: HtcondorViewClient

{subsection: DMTCP/HTCondor Integration}

DMTCP is a third-party, user-space checkpointing library which, through a shim script and extra information in one's submit description file, can checkpoint vanilla universe jobs.

See: DmtcpCondor

{subsection: QMF management suite for HTCondor}

QMF is a set of pluggable modules that assemble into a suite for managing HTCondor jobs. It uses Apache Qpid for message transport.

See: QmfSuite

{subsection: QMF trigger daemon}

This daemon raises QMF events based upon user-defined ClassAd queries.

See: QmfTriggerd

{subsection: Stork}

Stork is a batch scheduler specialized in data placement and data movement, based on the concept and ideal of making data placement a first-class entity in a distributed computing environment.

See: http://www.storkproject.org

{subsection: Remote HTCondor}

Remote HTCondor allows a user to submit and monitor batch jobs through a remote instance of HTCondor from his or her computer, without having to install HTCondor locally.

See: RemoteCondor

{subsection: CL-MW: A Master/Slave Distributed Computing Library in Common Lisp}

See: ClMw

{subsection: HDFS}

The Hadoop Distributed File System (HDFS) is a user-space, distributed file system maintained by the Apache project. The condor_hdfs daemon manages the running of the Java-based HDFS daemon (a configuration sketch follows below).

See: HadoopDistributedFileSystemModule
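As a rough illustration of how the module plugs into a pool, the sketch below enables the condor_hdfs daemon through the ordinary HTCondor configuration mechanism. The knob names (HDFS_HOME, HDFS_NAMENODE, HDFS_NODETYPE) are taken from older HTCondor manuals that documented this module and should be treated as assumptions to verify against the module's own documentation; the host name and paths are placeholders.

{code}
# Hypothetical sketch: enabling the condor_hdfs daemon in an HTCondor
# configuration file. Knob names follow older HTCondor manuals and may
# differ in your version -- check HadoopDistributedFileSystemModule.

# Run the condor_hdfs daemon under the condor_master.
DAEMON_LIST = $(DAEMON_LIST), HDFS

# Where the Hadoop installation lives on this machine (placeholder path).
HDFS_HOME = /usr/lib/hadoop

# Host and port of the HDFS name node (placeholder host).
HDFS_NAMENODE = hdfs://namenode.example.com:9000

# Role of this machine within HDFS: name node or data node.
HDFS_NODETYPE = HDFS_DATANODE
{endcode}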