The naive way to do this is to know ahead of time exactly which machines have the software installed, and then constrain the job's =Requirements= expression so that it matches only those machines. Knowing this ahead of time is error-prone and difficult to maintain as the pool changes. A better solution uses a two-step process. First, each machine that has the special software installed advertises its availability in its machine ClassAd. Second, the job states in its submit description file that it requires a machine with the special software.

A machine advertises the presence of the special software in its local configuration file, as in this example:

{code}
HAS_MY_SOFTWARE = True
{endcode}

Also, if =STARTD_ATTRS= is already defined in that file, add =HAS_MY_SOFTWARE= to the list. If =STARTD_ATTRS= is not already in that local configuration file, add the line:

{code}
STARTD_ATTRS = HAS_MY_SOFTWARE, $(STARTD_ATTRS)
{endcode}

For this configuration change to take effect, the condor_startd on that machine needs to be reconfigured. Use =condor_reconfig -startd=. Each machine with the configuration change must be reconfigured. Double check that the change has taken effect by running the condor_status command:

{code}
condor_status -constraint HAS_MY_SOFTWARE
{endcode}

Jobs that need to run on the machines with the special software installed add a =Requirements= command to their submit description file:

{code}
Requirements = (HAS_MY_SOFTWARE =?= True)
{endcode}

Be sure to use *=?=* instead of *==*, so that if a machine does not have the =HAS_MY_SOFTWARE= attribute defined, the job's =Requirements= expression evaluates to =False= instead of =Undefined=, and the job simply does not match that machine.
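As a usage sketch, here is a minimal submit description file that puts the pieces together; the executable name =my_analysis= and its log, output, and error file names are hypothetical and chosen only for illustration, while the =Requirements= line is the one described above:

{code}
# Minimal submit description file for a job that needs the special software.
# The executable and file names below are hypothetical examples.
universe     = vanilla
executable   = my_analysis
log          = my_analysis.log
output       = my_analysis.out
error        = my_analysis.err

# Match only machines that advertise HAS_MY_SOFTWARE = True.
Requirements = (HAS_MY_SOFTWARE =?= True)

queue
{endcode}

Submitting this file with =condor_submit= queues a job that will only be matched against machines advertising =HAS_MY_SOFTWARE=.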