Altova FlowForce Server 2024 Advanced Edition

To improve data throughput and provide basic fault tolerance, you can configure multiple FlowForce Server instances to run as a cluster. This provides the following benefits:

 

Load balancing

Leaner resource management

Scheduled maintenance

Reduced risk of service interruption

 

Note: Cross-system clusters are not supported, which means that a worker-master connection cannot be established between different OS platforms (e.g., between Linux and Windows).

 

Load balancing

When hardware limits cause FlowForce Server to be overwhelmed by multiple job instances running simultaneously, it is possible to redistribute the workload to another running instance of FlowForce Server (a so-called "worker"). You can set up a cluster consisting of a master machine and multiple worker machines and thus take advantage of all the licensed cores in the cluster.

 

Leaner resource management

One of the machines, designated as the master, continuously monitors job triggers and allocates queued instances to workers or even to itself, depending on the configuration. You can configure the queue settings and assign a job to a particular queue. For example, you can configure the master machine not to process any job instances at all. This frees up the master's resources and dedicates them exclusively to the continuous provision of the FlowForce service, as opposed to data processing.
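Conceptually, the master's role can be pictured as a dispatch loop that hands queued job instances to whichever worker has a free licensed slot. The following is a purely illustrative sketch of that idea, not FlowForce Server code; the names `Worker` and `dispatch` are hypothetical:

```python
from collections import deque

# Hypothetical sketch of master-side dispatching. All names are illustrative
# and do not correspond to any FlowForce Server API.
class Worker:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # e.g., licensed cores on this worker
        self.running = 0

    def has_free_slot(self):
        return self.running < self.capacity

def dispatch(queue, workers):
    """Assign queued job instances to the first worker with a free slot.

    A master configured not to process jobs itself would simply never
    appear in the workers list, keeping its resources free for providing
    the service interface.
    """
    assignments = []
    while queue:
        target = next((w for w in workers if w.has_free_slot()), None)
        if target is None:
            break  # all workers busy; remaining instances stay queued
        job = queue.popleft()
        target.running += 1
        assignments.append((job, target.name))
    return assignments

queue = deque(["job-1", "job-2", "job-3"])
workers = [Worker("worker-a", 2), Worker("worker-b", 1)]
print(dispatch(queue, workers))
# [('job-1', 'worker-a'), ('job-2', 'worker-a'), ('job-3', 'worker-b')]
```

Jobs that cannot be placed remain in the queue until a slot frees up, which mirrors how queued instances wait when all cluster members are busy.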

 

Scheduled maintenance of workers

You can restart or temporarily shut down any running FlowForce Server instance that is not the master without interrupting the provision of service. Note that the master is expected to be available at all times; restarting or shutting it down interrupts the provision of service.

 

Reduced risk of service interruption

In the case of hardware failures, power outages, unplugged network cables, etc., the impact depends on whether the affected machine is a worker or a master:

 

If the machine is a worker, any running FlowForce job instances on that worker will be lost. However, the general provision of FlowForce service will not be lost, because new instances of the same job will be taken over by a different worker (or by the master, if configured). The execution status of the job, including failure, is reported to the master and is visible in the job log so that an administrator can take appropriate action manually.

If the machine is a master, the provision of service will be lost. In this case, new job instances cannot start as long as the master is unavailable.

 

Terminology

The following terminology is used in conjunction with distributed execution and load balancing.

 

Server instance

A server instance is a running and licensed installation of FlowForce Server. Both services (FlowForce Web Server and FlowForce Server) are assumed to be up and running on the machine.

 

Cluster

A cluster is a group of FlowForce Server instances that communicate for the purpose of executing jobs in parallel or redistributing jobs if an instance becomes unavailable. A cluster consists of one master FlowForce Server and one or several workers.

 

Master

A master is a FlowForce Server instance that continuously evaluates job-triggering conditions and provides the FlowForce service interface. The master is aware of worker machines in the same cluster and may be configured to assign job instances to them, in addition to or instead of processing job instances itself.

 

Worker

A worker is a FlowForce Server instance that is configured to communicate with a master instance instead of executing any local jobs. A worker can execute only jobs that a master FlowForce Server has assigned to it.

 

Execution queue

An execution queue is a processor of jobs: it controls how job instances run. In order to run, every job instance is assigned to a target execution queue. The queue controls how many job instances (of all the jobs assigned to it) can be running at any one time, as well as the delay between runs. By default, the queue settings are local to the job, but you can also define queues as standalone objects shared by multiple jobs. When multiple jobs are assigned to the same execution queue, they share that queue for execution.
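To make the two queue controls concrete — a cap on concurrent instances and a delay between runs — here is a minimal, self-contained sketch of the concept. It is an assumption-laden illustration in plain Python, not FlowForce Server's implementation; the class name `ExecutionQueue` is hypothetical:

```python
import threading
import time

# Hypothetical illustration of an execution queue that limits how many job
# instances run at once and enforces a delay between starts. Not FlowForce
# Server code.
class ExecutionQueue:
    def __init__(self, max_parallel, delay_between_runs=0.0):
        self._slots = threading.Semaphore(max_parallel)  # concurrency cap
        self._delay = delay_between_runs                 # seconds between starts
        self._lock = threading.Lock()
        self._last_start = 0.0

    def run(self, job):
        with self._slots:  # block until a slot is free
            with self._lock:
                wait = self._last_start + self._delay - time.monotonic()
                if wait > 0:
                    time.sleep(wait)  # honor the delay between runs
                self._last_start = time.monotonic()
            return job()

# Three job instances sharing one queue: at most one runs at a time.
q = ExecutionQueue(max_parallel=1, delay_between_runs=0.1)
results = []
threads = [threading.Thread(target=lambda i=i: results.append(q.run(lambda: i)))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2]
```

Sharing one queue object across several jobs, as in the sketch, is what makes a standalone queue a single throttling point for all of them.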

 

Queues are subject to the same security access mechanism as other FlowForce Server configuration objects. Namely, a user must have the Define execution queues privilege in order to create queues (see also Define Users and Roles). In addition, users can view queues and assign jobs to them if they have the appropriate container permissions (see also How Permissions Work). By default, any authenticated user gets the Queue - Use permission, which means they can assign jobs to queues. To restrict access to a queue, navigate to the container where the queue is defined and change the container's permission to Queue - No access for the role authenticated. Next, assign the Queue - Use permission to any roles or users that need it. For more information, see Restricting Access to the /public Container.

 

 

© 2018-2024 Altova GmbH