Elasticity provider

Implement elasticity for the Load Balancer.

This can be kept as a separate process because different hosting IaaS solutions might need different approaches. For instance, on EC2, one might want to use the AutoScaling / CloudWatch approach or EC2 Elastic Load Balancing.

See: Amazon autoscaling through the Boto library

Some assumptions made in the current module:

  • There can only be one consumer per machine (IP address)

Also, see this article (in French) for good insights on the generalities of elasticity.

class ServiceGateway.rubber.Rubber(vm_store_fn)[source]

Class implementing elasticity for the Load Balancer setup.

Keeps track of VMs and if they are occupied. Keeps track of quantity of requests on the queue. Establishes a metric to determine how many new VMs should be started.

This class should be aware of the resource limits of a given cloud space: the maximum number of VM instances the environment can handle (a limit that may also be dictated by economic constraints).

There is also a concept of profiles: for a given cloud space, a number of VM classes can share the total resources in an egalitarian manner.
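A minimal sketch of such an egalitarian split, assuming the cloud's total VM budget is simply divided evenly among the known profiles (the function name and signature below are hypothetical, not part of the module):

```python
def per_profile_quota(total_vm_limit, profiles):
    """Egalitarian split of a cloud space's total VM budget.

    total_vm_limit: maximum number of VM instances the environment
                    can handle (possibly an economic limit).
    profiles: iterable of profile names sharing the space.
    """
    profiles = list(profiles)
    # Each profile gets an equal share of the budget (integer division;
    # any remainder is simply left unallocated in this sketch).
    return {p: total_vm_limit // len(profiles) for p in profiles}
```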

The class reads the information produced by the Celery distributed task queue system from the configured backend.

check_slackers()[source]

Check for machines registered on the cloud which have not registered on the worker queue.

Terminate any machine which has been slacking for more than a given time threshold defined by SLACKER_TIME_THRESHOLD.
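The slacker check can be sketched as follows. `SLACKER_TIME_THRESHOLD` is named in the source, but its value here is illustrative, and the shape of the cloud/worker data (VM id to launch timestamp, set of registered worker ids) is an assumption:

```python
import time

SLACKER_TIME_THRESHOLD = 300  # seconds; illustrative value only

def find_slackers(cloud_vms, registered_workers, now=None):
    """Return VMs known to the cloud that have not registered on the
    worker queue within SLACKER_TIME_THRESHOLD seconds of launch.

    cloud_vms: dict mapping VM id -> launch timestamp (epoch seconds).
    registered_workers: set of VM ids visible on the worker queue.
    """
    now = time.time() if now is None else now
    return [vm for vm, launched in cloud_vms.items()
            if vm not in registered_workers
            and now - launched > SLACKER_TIME_THRESHOLD]
```

Machines returned by such a check would then be terminated.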

evaluate_needs(profile)[source]

Checks the state of the queue and the number of registered workers, and establishes whether we need more workers or, conversely, whether we can discard some idle workers.

The bulk of the elasticity logic is contained in this function. Note, however, that this function is idealistic in its demands: it does not know the limits of the system.

Ideally this could be a control loop with feedback, but for the moment the implementation is very simple.

Parameters: profile (string) – Name of the profile defining a worker class.
Returns: Delta of projected VM needs for a given profile. A positive value indicates that we need that number of new machines, a negative value indicates that we can discard that number of active machines. A value of zero means that no change to the number of workers seems required.
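The delta computation could look like the following sketch. The backlog-draining heuristic and the `tasks_per_worker` knob are assumptions for illustration, not the module's actual logic:

```python
def evaluate_needs(queue_length, n_active, n_idle, tasks_per_worker=1):
    """Return a worker delta for one profile: positive -> spawn that many
    machines, negative -> that many can be discarded, zero -> no change.

    queue_length: pending messages on the profile's queue.
    n_active / n_idle: workers currently busy / idle.
    tasks_per_worker: hypothetical tuning knob, tasks one worker absorbs.
    """
    if queue_length > 0:
        # Pending work: request enough workers to drain the backlog,
        # counting idle workers as immediately available capacity.
        needed = -(-queue_length // tasks_per_worker)  # ceiling division
        return max(needed - n_idle, 0)
    # No pending work: all idle workers are candidates for teardown.
    return -n_idle
```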
get_active_workers(queue_name)[source]

Get workers which are apparently working.

Parameters: queue_name – Name of the profile defining a worker class.
Returns: A list of all active workers.
get_idle_workers(queue_name)[source]

Get workers which are apparently not working.

Parameters: queue_name – Name of the profile defining a worker class.
Returns: A list of all idle workers.
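Since the class reads state from Celery, the active/idle split can be sketched against the data shape returned by Celery's `app.control.inspect().active()`, which maps worker names to lists of their currently executing tasks. The partitioning function itself is an assumption:

```python
def split_workers(active_tasks):
    """Partition workers into (active, idle) lists.

    active_tasks: mapping of worker name -> list of currently executing
    tasks, in the shape produced by Celery's inspect().active().
    """
    active = [w for w, tasks in active_tasks.items() if tasks]
    idle = [w for w, tasks in active_tasks.items() if not tasks]
    return active, idle
```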
get_queue_length(queue_name)[source]

Gets the number of pending messages in a given AMQP queue.

Parameters: queue_name – Name of the queue for which we want to get the number of pending messages.
Returns: Number of pending messages in the queue.
launch(evaluation_interval=120)[source]

This function launches a periodic check of the conditions for spawning or tearing down machines.

This function is blocking. It can be called to use an instance of this class as an application.

Parameters: evaluation_interval – Interval at which the state of the cloud will be evaluated. Defaults to EVAL_INTERVAL.
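The blocking periodic check can be sketched as a simple loop. The `evaluate_once` callable and the `max_iterations` bound (a convenience so the sketch can terminate) are illustrative additions, not part of the module's interface:

```python
import time

def launch(evaluate_once, evaluation_interval=120, max_iterations=None):
    """Blocking control loop: periodically re-evaluate cloud state.

    evaluate_once: callable performing one spawn/teardown evaluation.
    evaluation_interval: seconds between evaluations.
    max_iterations: optional bound on iterations; None loops forever.
    """
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        evaluate_once()
        iterations += 1
        time.sleep(evaluation_interval)
```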
spawn(profile)[source]

Spawn a VM and return its IP address.

Parameters: profile (string) – Identifier of the profile to be spawned.
Raises: InsufficientResources
teardown(profile)[source]

Shut down a virtual machine.

Parameters: profile – Name of the profile for which we want to take down a worker.
Raises: MinimumWorkersReached
Raises: NoIdleWorkersError
Raises: NoTearDownTargets
ServiceGateway.rubber.is_booting(vm_info)[source]

Evaluate whether a VM might still be in its booting stage.
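One plausible reading of this check: a VM counts as booting while its launch time is more recent than some grace period. The `BOOT_TIME_THRESHOLD` constant and the `launch_time` field of `vm_info` are assumptions for illustration:

```python
import time

BOOT_TIME_THRESHOLD = 180  # seconds; illustrative grace period

def is_booting(vm_info, now=None):
    """Return True if the VM might still be in its booting stage,
    i.e. it was launched less than BOOT_TIME_THRESHOLD seconds ago.

    vm_info: dict assumed to carry a 'launch_time' epoch timestamp.
    """
    now = time.time() if now is None else now
    return now - vm_info["launch_time"] < BOOT_TIME_THRESHOLD
```

A machine that is still booting should not be treated as a slacker or a teardown candidate.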

ServiceGateway.rubber.main()[source]

Script entry point.
