{section: Flow-based grid submissions, aka Streaming CE}
 
-Currently, grid CEs based around technologies like Globus GRAM, CREAM, or Condor-C essentially are services that submit a job into a local site scheduler.
+Currently, grid CEs based on technologies like Globus GRAM, CREAM, or HTCondor-C are essentially services that submit a job into a local site scheduler.
 
 This page contains thoughts on a "Streaming CE" (also known as "Pilot Factory" or "flow submission").
 What we mean is a service that would submit flows of jobs;
@@ -71,7 +71,7 @@
    It is desirable that the user can specify the desired lease time, although the service always has the final word.
 
 i) The user must be able to ask whether a handle is still valid and, if it is, how many resources have been requested, how many are actually provisioned, and how many are waiting.
-   The number of waiting is the number that, from the point of view of the service, is likely to be started in the near future. As an example, in a batch system, (provisioned+waiting) cannot exceed the total number of available resources. This number must also not include "problematic" jobs (e.g. Held in Condor).
+   The number of waiting is the number that, from the point of view of the service, is likely to be started in the near future. As an example, in a batch system, (provisioned+waiting) cannot exceed the total number of available resources. This number must also not include "problematic" jobs (e.g. Held in HTCondor).
    Please notice that the number of provisioned resources can be higher than the number of requested ones, due to a recent reduction request.
    It is desirable, but not required, to get even more information on the provisioning activity, like number recently started, number recently terminated, number internal submission errors, etc.
 
@@ -100,49 +100,49 @@
 {subsubsection: Client side Requirements}
 
 Above was about what the server would need to provide to satisfy my needs.
-The question is now: What do I need to do on the client. And in particular, a specific client: Condor-G. So, how could Condor-G support this?
+The question is now: what do I need to do on the client side? And in particular, for a specific client: HTCondor-G. So, how could HTCondor-G support this?
 
 Here is my proposal:
 
 I)   From the user point of view, nothing changes.
-     User still submits a job (or a bunch of jobs using queue X) at a time to Condor-G, as we have always done.
+     User still submits a job (or a bunch of jobs using queue X) at a time to HTCondor-G, as we have always done.
      Of course, we use a different Grid type.
 
-II)  Condor-G internally clusters the jobs based on the kind of resources requested (see (a)).
+II)  HTCondor-G internally clusters the jobs based on the kind of resources requested (see (a)).
 
 III) If there is at least one uniform cluster (as in (II)) without a valid "stream handle" (see (a) + (l)),
-     Condor-G will contact the Provisioning Service and obtain one.
+     HTCondor-G will contact the Provisioning Service and obtain one.
      The initial request is for len(cluster) resources.
 
-IV)  Condor-G now continuously monitors the progress of the request (see (i)).
-     Condor-G is free to label any local job as running, as long as the sum of them is the same as the number obtained from the Service, and it is deterministic.
-     Similarly, Condor-G should label the right number of local jobs as "Waiting"... i.e. using an appropriate GridStatus attribute.
+IV)  HTCondor-G now continuously monitors the progress of the request (see (i)).
+     HTCondor-G is free to label any local jobs as running, as long as their total matches the number obtained from the Service and the labeling is deterministic.
+     Similarly, HTCondor-G should label the right number of local jobs as "Waiting"... i.e. using an appropriate GridStatus attribute.
     All the other jobs should either be labeled as "Unsubmitted" or be held (this may get a bit hairy, but see (j1))
 
-V)   If a user submits another job with the same kind of resources, Condor-G will increase the number requested on the existing handle (see (e)).
+V)   If a user submits another job with the same kind of resources, HTCondor-G will increase the number requested on the existing handle (see (e)).
 
-VI)  If a user removes a job from the queue, Condor-G will decrease the number requested on the existing handle (see (f)).
+VI)  If a user removes a job from the queue, HTCondor-G will decrease the number requested on the existing handle (see (f)).
      If the job was idle at the time, the victim type is "NotRunning".
      If it is running, I guess the right type should be set in the config, together with the aggressiveness.
     While it might be interesting to make this user-specified, it may be too complicated... but I am not ruling it out, if you guys want to do it.
 
-VII) Condor-G will continuously probe the Service for newly finished "handle ids" (see (j2)).
+VII) HTCondor-G will continuously probe the Service for newly finished "handle ids" (see (j2)).
      For any new such id, it must retrieve the outputs, if at all possible (see (k)).
      These outputs must be associated with one of the jobs.
-     How Condor-G picks which jobs gets it, I don't really care... whatever works for you.
+     How HTCondor-G picks which job gets them, I don't really care... whatever works for you.
      We would also likely need to agree on some sort of convention on the file naming, and what to do with things like exit codes.
 
 IX)  Once the last local job associated with a valid handle has been removed, and all the output files have been retrieved,
-     Condor-G destroys the handle.
+     HTCondor-G destroys the handle.
 
-PS: I have not explicitly listed the renewal of the lease, but I expect Condor-G to take care of that, too.
+PS: I have not explicitly listed the renewal of the lease, but I expect HTCondor-G to take care of that, too.
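The steps above could be sketched roughly as follows. This is an illustrative toy only: StubService, cluster_jobs and reconcile are hypothetical names, not HTCondor-G internals.

```python
# Illustrative-only sketch of the client-side steps: cluster local jobs
# by the kind of resources requested, keep one stream handle per
# cluster, and resize or destroy handles as jobs come and go.
from collections import defaultdict

class StubService:
    """Minimal in-memory stand-in for the provisioning service."""
    def __init__(self):
        self.sizes = {}        # handle id -> currently requested count
        self._next_id = 0
    def request(self, n):
        self._next_id += 1
        self.sizes[self._next_id] = n
        return self._next_id
    def resize(self, handle_id, n):
        self.sizes[handle_id] = n
    def destroy(self, handle_id):
        del self.sizes[handle_id]

def cluster_jobs(jobs):
    """Step (II): group jobs by the kind of resources they request."""
    clusters = defaultdict(list)
    for job in jobs:
        clusters[job["resources"]].append(job)
    return clusters

def reconcile(clusters, handles, service):
    """Steps (III), (V), (VI): one handle per uniform cluster, sized to
    the cluster; destroy a handle once its last job is gone."""
    for kind, jobs in clusters.items():
        if kind not in handles:
            handles[kind] = service.request(len(jobs))
        else:
            service.resize(handles[kind], len(jobs))
    for kind in list(handles):
        if kind not in clusters:
            service.destroy(handles.pop(kind))
```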
 
 {subsection: Notes re Miron and Igors Streaming CE Concerns}
 [from Miron's visit to San Diego March 2012]
 
 Before going in streaming CE details, let me just point out that
 1: The vanilla universe is already doing "streaming"!
-1: Jobs can restart multiple times, due to preemption. Condor is just not making restarts a first-class citizen.
+1: Jobs can restart multiple times, due to preemption. HTCondor is just not making restarts a first-class citizen.
 1: Most of what we describe below would actually make sense in the vanilla universe as well.
 
 Miron's main concern is how we handle edge cases; everything else is easy
@@ -168,7 +168,7 @@
 *::     - One job out of 1k restarting once every 10 mins is OK (e.g. broken WN).
 *::     - 1k jobs all restarting every 10 mins is not OK (over 1Hz)
 *::   - We need to correlate the various requests as much as possible, and have aggregate limits
-*::     - This goes against the idea of "one Condor-G job per request"
+*::     - This goes against the idea of "one HTCondor-G job per request"
 *::     - *I have no obvious solution right now*
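One conceivable mechanism for the aggregate limit itself (not for the harder correlation problem above) is a shared token bucket that all handles draw from; a per-handle limit alone cannot catch "1k jobs each restarting every 10 mins". The rates below are illustrative only:

```python
# Sketch of a shared (aggregate) restart budget as a token bucket;
# RestartBudget, rate_hz and burst are hypothetical names/values.
import time

class RestartBudget:
    def __init__(self, rate_hz=1.0, burst=10, clock=time.monotonic):
        self.rate = rate_hz            # sustained aggregate restarts/second
        self.burst = burst             # short-term allowance
        self.tokens = float(burst)
        self.clock = clock             # injectable for testing
        self.last = clock()

    def allow_restart(self):
        """True if a restart may proceed now, False if it must wait."""
        now = self.clock()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With one bucket shared across all stream handles, a single flaky worker node stays under the budget while a mass-restart event gets throttled below the configured aggregate rate.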
 
 
@@ -186,7 +186,7 @@
 *:::    + Is exit code the right thing?
 *:::    + Just a first approximation (e.g. No problem/problem)? And we have a different mechanism for details?
 *:::    + Should we go even more fancy, and have actual complex policies?
-      (I am not advocating it for "Condor-G streamin", but it came up in the passing and may make sense in the generic vanilla universe)
+      (I am not advocating it for "HTCondor-G streaming", but it came up in passing and may make sense in the generic vanilla universe)
 *::  - Once we know the above, we should probably throttle restarts of anything but "No problems"
 1: Restart limits (related to above)
 *::  - Even for "No problems", we want the client to provide a max restart rate (e.g. no more than 1 per hour)
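A first-approximation sketch of the "No problem/problem" classification and the client-supplied max restart rate discussed above; treating exit code 0 as "no problem" and all thresholds are illustrative assumptions, not a spec:

```python
# Toy model of the restart policy: exit code 0 = "no-problem" and may
# restart up to a client-supplied rate (e.g. no more than 1 per hour);
# anything else gets no automatic restart in this sketch.
def classify(exit_code):
    """First-approximation classification of a terminated job."""
    return "no-problem" if exit_code == 0 else "problem"

def may_restart(exit_code, restarts_last_hour, max_per_hour=1):
    """Apply the max-restart-rate limit; 'problem' exits are throttled
    to zero automatic restarts here."""
    if classify(exit_code) != "no-problem":
        return False
    return restarts_last_hour < max_per_hour
```

Whether the exit code is the right signal at all is exactly the open question above; a richer mechanism could replace classify() without changing the rate-limit side.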
@@ -214,6 +214,6 @@
 
 Finally, Miron wanted to see all of the above discussed/digested in "client land", i.e. how we express all of the above, before even attempting to go into how we express this in an RPC-like protocol.
 
-{subsection: A Streaming CE architecture based on Condor-C, JobRouter}
+{subsection: A Streaming CE architecture based on HTCondor-C, JobRouter}
 
 Ideas from Brian: https://docs.google.com/document/d/16HVDBLjAF5li42kue2us1SDvU1fNZfQicmgISn7VMls/edit