Distributed Execution Terminology

The following terminology is used in conjunction with distributed execution and load balancing.

 

Server Instance

A server instance is a running and licensed installation of FlowForce Server. Both services (FlowForce Web Server and FlowForce Server) are assumed to be up and running on the machine.

 

Job Instance

A job instance is not the same as a job. When you configure a FlowForce job on the job configuration page, you are in fact creating a job configuration. Each time the trigger criteria defined for the job are met, an instance of that job starts running. Job instances are distributed within the cluster as defined by the execution queue associated with the job. A job instance always runs in its entirety on a single cluster member.
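
To make the distinction concrete, the Python sketch below models a job configuration that produces a new job instance each time its trigger fires. This is a conceptual illustration only, not FlowForce Server code; all class, field, and method names are hypothetical.

# Conceptual sketch (not FlowForce code): a job configuration is the stored
# definition, while a job instance is one run created when the trigger fires.
import uuid
from dataclasses import dataclass

@dataclass
class JobConfiguration:
    name: str
    trigger: str          # e.g. a timer or file-system trigger
    queue: str            # the execution queue this job is assigned to

    def on_trigger(self) -> "JobInstance":
        # Each firing of the trigger produces a new, independent instance.
        return JobInstance(config=self, instance_id=uuid.uuid4().hex)

@dataclass
class JobInstance:
    config: JobConfiguration
    instance_id: str

    def run_on(self, cluster_member: str) -> None:
        # An instance always runs in its entirety on a single cluster member.
        print(f"{self.config.name}#{self.instance_id[:8]} runs on {cluster_member}")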

 

Cluster

A cluster is a group of FlowForce Server instances that communicate with one another in order to execute jobs in parallel, or to redistribute jobs if an instance becomes unavailable. A cluster consists of one "master" FlowForce Server and one or several "workers".

 

Master

A "master" is a FlowForce Server instance that continuously evaluates job-triggering conditions and provides the FlowForce service interface. A master is aware of worker machines in the same cluster and may be configured to assign job instances to them, in addition to (or instead of) processing job instances itself.

 

Worker

A "worker" is a FlowForce Server instance that is configured to communicate with a master instance instead of executing any local jobs on its own. A worker can execute only jobs that a master FlowForce Server has assigned to it.

 

Execution Queue

An execution queue is a "processor" of jobs: it controls how job instances run. Every job must be assigned to a target execution queue in order to run. You assign a job to an execution queue while configuring the job, and its instances are submitted to that queue at runtime. The queue controls how many job instances (of all the jobs assigned to the queue) can be running at any one time, the delay between runs, and other settings. Queues can be local to a single job or shared by multiple jobs; when multiple jobs are assigned to the same execution queue, their instances share that queue at execution time.
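
As a conceptual illustration of these settings, the sketch below models a queue that caps the number of concurrently running instances and enforces a minimum delay between starts. It is not FlowForce's implementation; the class name and parameters are hypothetical.

# Conceptual sketch (not FlowForce's implementation): an execution queue limits
# how many job instances, across all jobs assigned to it, run at the same time
# and can enforce a minimum delay between starts.
import threading
import time

class ExecutionQueue:
    def __init__(self, max_parallel: int, delay_between_starts: float = 0.0):
        self._slots = threading.Semaphore(max_parallel)   # concurrent instances
        self._start_lock = threading.Lock()                # spaces out starts
        self._delay = delay_between_starts

    def submit(self, job_instance) -> threading.Thread:
        # Run the instance as soon as a slot is free; extra submissions wait.
        def runner():
            with self._slots:
                with self._start_lock:
                    time.sleep(self._delay)
                job_instance()
        thread = threading.Thread(target=runner)
        thread.start()
        return thread

# Two jobs sharing the same queue: at most two of their instances run at once.
shared_queue = ExecutionQueue(max_parallel=2, delay_between_starts=0.1)
for name in ("job-A", "job-A", "job-B"):
    shared_queue.submit(lambda n=name: print(f"{n} instance running"))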

