Altova FlowForce Server 2024 

Queue settings enable you to use server resources more efficiently. For example, through queue configuration, you can limit the number of job instances that run in parallel at any given moment.


An execution queue is a processor of jobs: it controls how job instances run. To run, every job instance must be assigned to a target execution queue. The queue controls how many job instances (across all the jobs assigned to it) can be running at any one time and the delay between runs. By default, queue settings are local to the job, but you can also define queues as standalone objects shared by multiple jobs. When multiple jobs are assigned to the same execution queue, they share that queue's execution slots.
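The relationship between jobs, queues, and execution slots can be pictured with a minimal sketch. The class and field names below are illustrative only and are not part of FlowForce Server:

```python
from collections import deque

class ExecutionQueue:
    """Toy model of an execution queue shared by multiple jobs."""

    def __init__(self, name, max_parallel_runs=1):
        self.name = name
        self.max_parallel_runs = max_parallel_runs  # number of slots
        self.pending = deque()   # job instances waiting for a free slot
        self.running = []        # job instances currently occupying slots

    def submit(self, job_instance):
        # Every instance of every job assigned to this queue lands here.
        self.pending.append(job_instance)
        self.dispatch()

    def dispatch(self):
        # A slot picks up the next instance only when one is free.
        while self.pending and len(self.running) < self.max_parallel_runs:
            self.running.append(self.pending.popleft())

    def finish(self, job_instance):
        self.running.remove(job_instance)
        self.dispatch()

q = ExecutionQueue("shared", max_parallel_runs=2)
for inst in ["job-A#1", "job-A#2", "job-B#1"]:
    q.submit(inst)
print(q.running)        # ['job-A#1', 'job-A#2'] -- both slots occupied
print(list(q.pending))  # ['job-B#1'] -- waits until a slot frees up
```

Note how instances of different jobs (job-A and job-B) compete for the same two slots because both jobs are assigned to the same queue.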


Queues benefit from the same security access mechanism as other FlowForce Server configuration objects. Namely, a user must have the Define execution queues privilege in order to create queues (see also Define Users and Roles). In addition, users can view queues and assign jobs to queues if they have the appropriate container permissions (see also How Permissions Work). By default, any authenticated user gets the Queue - Use permission, which means they can assign jobs to queues. To restrict access to queues, navigate to the container where the queue is defined and change the container's permission to Queue - No access for the authenticated role. Then assign the Queue - Use permission to any roles or users that need it. For more information, see Restricting Access to the /public Container.


Global vs local queues

You can create a queue either as a standalone object (global) or within a particular job (local). Local queues do not support distributed processing (clusters); to benefit from distributed processing, the queue must be created as a standalone object, external to the job. Distributed processing is supported only in the Advanced Edition. For information about creating standalone and local queues, see the subsections below.


Create global queues

To create a queue as a standalone object, take the steps below:


1. Open the Configuration page and navigate to the container where you want to create the queue.

2. Click Create and select Create Queue (screenshot below).


3. Enter a queue name and, optionally, a description.

4. Configure the relevant settings. For details, see Queue settings below.

5. Click Save.


Queue settings

The queue-configuration settings are listed below.


Queue name

A name that identifies the queue. This is a mandatory field. It may contain only letters, digits, single spaces, and the underscore (_), dash (-), and full stop (.) characters. It may not start or end with spaces. This field is applicable only if the queue is defined as a standalone (not local) queue.
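The naming rules above can be expressed as a regular expression. The check below is an illustration of the stated rules, not FlowForce Server's actual validator:

```python
import re

# Tokens of letters, digits, '_', '-', '.', separated by single spaces;
# no leading or trailing spaces (hypothetical validator based on the
# rules stated above; \w covers letters, digits, and underscore).
QUEUE_NAME_RE = re.compile(r"^[\w.\-]+( [\w.\-]+)*$")

def is_valid_queue_name(name):
    return bool(QUEUE_NAME_RE.match(name))

print(is_valid_queue_name("Nightly ETL-queue.1"))  # True
print(is_valid_queue_name(" leading-space"))       # False
print(is_valid_queue_name("double  space"))        # False: not a single space
```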


Queue description

Optional description of the queue object. This field is applicable only if the queue is defined as a standalone (not local) queue.


Run on (Advanced Edition)

Specifies how all job instances from this queue are to be run:


Master or any worker: Job instances that are part of this queue will run on the master or worker machines, depending on available server cores.

Master only: Job instances will run only on the master machine.

Any worker only: Job instances will run on any available worker but never on the master.


Minimum time between runs

An execution queue provides execution slots. Each slot will execute job instances sequentially.


The Minimum time between runs setting keeps a slot marked as occupied for a short duration after a job instance has finished, so it will not pick up the next job instance right away. This reduces maximum throughput for this execution queue, but provides CPU time for other execution queues and other processes on the same machine.
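The effect of this setting can be sketched as a cooldown on each execution slot. The function name and timings below are illustrative, not FlowForce Server internals:

```python
import time

def run_sequentially(job_instances, min_time_between_runs, execute):
    """Run instances one after another on a single slot, keeping the
    slot occupied for min_time_between_runs seconds after each run."""
    for instance in job_instances:
        execute(instance)
        # The slot stays marked as occupied here, so other queues and
        # other processes on the machine get CPU time between runs.
        time.sleep(min_time_between_runs)

started = []
run_sequentially(["a", "b", "c"], 0.01, started.append)
print(started)  # ['a', 'b', 'c'] -- strictly sequential, with pauses
```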


Maximum parallel runs

This option defines the number of execution slots available on the queue. Each slot executes job instances sequentially, so the setting determines how many job instances may be executed in parallel in the current queue. Note, however, that the instances you allow to run in parallel compete for available machine resources. Increasing this value can be acceptable for queues that process lightweight jobs that do not perform intensive I/O operations or need significant CPU time. The default value (1 instance) is suitable for queues that process resource-intensive jobs: it ensures that only one such heavyweight job instance is processed at a time.
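One way to picture the setting is a semaphore that bounds concurrent instances. This is a generic concurrency sketch, not FlowForce Server code:

```python
import threading
import time

MAX_PARALLEL_RUNS = 2          # the queue's 'Maximum parallel runs' setting
slots = threading.BoundedSemaphore(MAX_PARALLEL_RUNS)
lock = threading.Lock()
current = 0                    # instances running right now
peak = 0                       # highest concurrency observed

def run_instance(name):
    global current, peak
    with slots:                # block until an execution slot is free
        with lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.01)       # stand-in for the job instance's actual work
        with lock:
            current -= 1

threads = [threading.Thread(target=run_instance, args=(f"job#{i}",))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_PARALLEL_RUNS, however many were submitted
```

Six instances are submitted, but the semaphore guarantees that no more than two ever run at once; the rest wait for a slot.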


This option does not affect the maximum number of parallel HTTP requests accepted by FlowForce Server (such as those from clients that invoke jobs exposed as Web services). For details, see Reconfiguring FlowForce Server pool threads.



Multiple sets of queue settings (Advanced Edition)

You can define multiple sets of queue settings, each with different processing requirements, by clicking the add button. To change the priority of a specific set of settings, click the Move up or Move down buttons. For example, you can define one rule for the case in which only the master is available and another for the case in which both the master and its workers are available. This lets you create a fallback mechanism for the queue that depends on the state of the cluster at a given time.

When processing queues, FlowForce Server constantly monitors the state of the cluster and knows whether any worker is unavailable. If you have defined multiple queue settings rules, FlowForce Server evaluates them in the defined order, from top to bottom, and picks the first rule that has at least one cluster member available according to its Run on setting.
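The top-to-bottom rule evaluation can be sketched as follows. The rule representation and field names are hypothetical; FlowForce Server's internal format is not documented here:

```python
def pick_rule(rules, master_up, workers_up):
    """Return the first rule that has at least one cluster member
    available according to its 'Run on' setting, else None."""
    for rule in rules:
        run_on = rule["run_on"]
        if run_on == "master-or-worker" and (master_up or workers_up > 0):
            return rule
        if run_on == "master-only" and master_up:
            return rule
        if run_on == "worker-only" and workers_up > 0:
            return rule
    return None  # no rule applies; the queue cannot run right now

rules = [
    {"run_on": "worker-only", "min_time_between_runs": 0},
    {"run_on": "master-only", "min_time_between_runs": 5},
]
print(pick_rule(rules, master_up=True, workers_up=4)["run_on"])  # worker-only
print(pick_rule(rules, master_up=True, workers_up=0)["run_on"])  # master-only
```

While any worker is up, the first rule wins; when all workers are down, evaluation falls through to the master-only rule.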


Example (Advanced Edition)

As an example, let's consider a setup where the cluster includes one master and four worker machines. The queue settings are defined as shown below:


With the configuration illustrated above, FlowForce would process the queue as follows, depending on the current state of the cluster:


If all workers are available, the top rule will apply. Namely, up to 16 job instances are allowed to run simultaneously (4 instances for each worker). The minimum time between runs is 0 seconds.

If only three workers are available, the top rule will still apply. Namely, up to 12 job instances are allowed to run simultaneously, and the minimum time between runs is 0 seconds.

If no workers are available, the second rule will apply. Namely, only 1 instance may run at a given time, and the minimum time between runs is 5 seconds.


This kind of configuration keeps execution possible even when no workers are available. Notice that the master-only rule is stricter (only 1 instance, with a 5-second delay between runs) so that it does not take away too much processing power from the master machine when all the workers fail.
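The three scenarios above can be checked with a small calculation. The per-worker multiplier of 4 and the master-only limits are taken from the example; the rule format itself is illustrative:

```python
def effective_capacity(rules, master_up, workers_up):
    """Pick the first applicable rule (top to bottom) and compute how
    many job instances may run at once under it, plus the minimum
    time between runs."""
    for rule in rules:
        if rule["run_on"] == "worker-only" and workers_up > 0:
            # 'Maximum parallel runs' applies per available worker.
            return rule["max_parallel_runs"] * workers_up, rule["min_time"]
        if rule["run_on"] == "master-only" and master_up:
            return rule["max_parallel_runs"], rule["min_time"]
    return 0, None

rules = [
    {"run_on": "worker-only", "max_parallel_runs": 4, "min_time": 0},
    {"run_on": "master-only", "max_parallel_runs": 1, "min_time": 5},
]
print(effective_capacity(rules, True, 4))  # (16, 0): all four workers up
print(effective_capacity(rules, True, 3))  # (12, 0): three workers up
print(effective_capacity(rules, True, 0))  # (1, 5): master-only fallback
```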


Assign jobs to queues

Once you have configured the queue, you need to assign a job to it on the job configuration page. To do this, take the steps below:


1. Open the configuration of the job that you wish to assign to the queue.

2. Navigate to the queue settings at the bottom of the page.

3. Select the Select existing queue option and provide the path to the desired queue object (screenshot below).



Define local queues

As an alternative to creating standalone queues, you can define queue settings locally inside a job. To do this, select the Define local queue option on the job configuration page and specify your queue preferences. The image below illustrates the default queue settings. With the Define local queue option selected, FlowForce Server assigns the instances of this job, at job runtime, to a default queue with the local settings you specify.



For details about the Minimum time between runs and Maximum parallel runs properties, see Queue settings above.


© 2018-2024 Altova GmbH