Altova FlowForce Server 2024 Advanced Edition

Queue settings enable you to make more efficient use of server resources. For example, through queue configuration, you can limit the number of job instances running in parallel at any given moment.

 

An execution queue is a "processor" of jobs: it controls how job instances run. In order to run, every job instance is assigned to a target execution queue. The queue controls how many job instances (of all the jobs assigned to it) can be running at any one time and the delay between runs. By default, the queue settings are local to the job, but you can also define queues as standalone objects shared by multiple jobs. When multiple jobs are assigned to the same execution queue, they share that queue for execution.

 

Queues benefit from the same security access mechanism as other FlowForce Server configuration objects. Namely, a user must have the "Define execution queues" privilege in order to create queues (see also How Privileges Work). In addition, users can view queues, or assign jobs to them, only if they have the appropriate container permissions, which are not the same as privileges (see also How Permissions Work). By default, any authenticated user gets the "Queue - Use" permission, which means they can assign jobs to queues. To restrict access to queues, navigate to the container where the queue is defined and change the container's permission to "Queue - No access" for the role authenticated. Then assign the "Queue - Use" permission to the specific roles or users that need it. For more information, see Restricting Access to the /public Container.

 

Creating standalone queues

To create a queue as a standalone object:

 

1. Click Configuration, and then navigate to the container where you want to create the queue.

2. Click Create, and then Create Queue.

ff_create_queue

3. Enter a queue name and, optionally, a description. For reference to all settings, see "Queue settings" below.

4. Click Save.

 

Defining local queues

As an alternative to creating standalone queues, you can define the queue settings locally inside the job. To do this, select the Define local queue option from the job configuration page and then specify your queue preferences. The image below illustrates the default queue settings.

ffadv_queue

 

If you choose the Select existing queue option instead, you must specify a previously defined standalone (external) queue. For reference to the Minimum time between runs and Maximum parallel runs settings, see the "Queue settings" section below.

 

Queue settings

The settings available for configuration in a queue are listed below.

 

Queue name

Enter a name that identifies the queue. This field is mandatory. The name must not start or end with spaces, and it may contain only letters, digits, single spaces, and the underscore ("_"), dash ("-"), and full stop (".") characters.

 

This field is applicable only if the queue is defined as a standalone (not local) queue.
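
For illustration, the naming rules above can be expressed as a regular expression. The sketch below is derived from the stated rules only; it is not an official Altova validation routine, and it assumes ASCII letters.

    import re

    # Sketch of the stated rules: letters, digits, underscore, dash, full stop,
    # single spaces only, and no leading or trailing spaces (ASCII letters assumed).
    QUEUE_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.\-]+( [A-Za-z0-9_.\-]+)*$")

    def is_valid_queue_name(name: str) -> bool:
        """Return True if the name satisfies the rules described above."""
        return bool(QUEUE_NAME_PATTERN.match(name))

    print(is_valid_queue_name("Nightly ETL queue"))   # True
    print(is_valid_queue_name(" leading space"))      # False (starts with a space)
    print(is_valid_queue_name("double  space"))       # False (consecutive spaces)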

Queue description

Optionally, enter a description for the queue object.

 

This field is applicable only if the queue is defined as a standalone (not local) queue.

Run on

Specifies how all job instances from this queue are to be run:

 

master or any worker - Job instances that are part of this queue will run on either the master or any worker machine, depending on available server cores.

master only - Job instances will run only on the master machine.

any worker only - Job instances will run on any available worker but never on the master.

Minimum time between runs

An execution queue provides execution slots, where the number of available slots is governed by the "maximum parallel runs" setting multiplied by the number of workers assigned according to the currently active rule. Each slot will execute job instances sequentially.

 

The "Minimum time between runs" setting keeps a slot marked as occupied for a short duration after a job instance has finished, so it will not pick up the next job instance right away. This reduces maximum throughput for this execution queue, but provides CPU time for other execution queues and other processes on the same machine.

Maximum parallel runs

This option defines the number of execution slots available on the queue. Each slot executes job instances sequentially, so the setting determines how many instances of the same job may run in parallel in the current queue. Note, however, that the instances you allow to run in parallel will compete for the available machine resources. Increasing this value may be acceptable for queues that process "lightweight" jobs that do not perform intensive I/O operations or need significant CPU time. The default setting of 1 is the most conservative and is suitable for queues that process resource-intensive jobs, as it ensures that only one such "heavyweight" job instance is processed at a time.

 

This option does not affect the maximum number of parallel HTTP requests accepted by FlowForce Server (such as those from clients that invoke jobs exposed as Web services). For details, see Reconfiguring FlowForce Server pool threads.
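
To see how the two settings interact, the short sketch below models a queue's execution slots and a rough upper bound on its throughput. It is a conceptual illustration only, not FlowForce internals, and all values (two parallel runs, three workers, a five-second pause, ten-second job durations) are hypothetical.

    # Conceptual model of queue capacity and throughput; not FlowForce internals.
    # All values below are hypothetical.
    max_parallel_runs = 2       # "Maximum parallel runs" setting
    assigned_workers = 3        # workers matched by the currently active rule
    min_time_between_runs = 5   # "Minimum time between runs", in seconds
    avg_job_duration = 10       # assumed average runtime of one job instance, in seconds

    # The queue provides "Maximum parallel runs" slots per assigned worker.
    slots = max_parallel_runs * assigned_workers          # 2 x 3 = 6 slots

    # A slot runs instances one after another and stays occupied for the
    # "minimum time between runs" after each instance finishes, so a rough
    # upper bound on throughput is:
    instances_per_minute = slots * 60 / (avg_job_duration + min_time_between_runs)

    print(slots)                 # 6
    print(instances_per_minute)  # 24.0

Raising Maximum parallel runs or lowering Minimum time between runs increases this bound, at the cost of more competition for CPU and I/O on the affected machines.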

 

You can define multiple sets of queue settings, each with different processing requirements, by clicking the add button. For more information about such setups, see Setting up Distributed Execution.

 

© 2017-2023 Altova GmbH