Programming, Technology

Elastic High-Availability Clustering With Puppet

In this post I’m going to demonstrate one method I discovered to facilitate HA clustering in your enterprise. The specific example presented here shows how to easily roll out a RabbitMQ cluster for use by the Nova (Compute) component of OpenStack. The same pattern applies to other applications, such as load balancers: for example, you could assign a puppetmaster role to a node at provisioning time and have it automatically added to Apache’s round-robin scheduler. Thus, if our monitoring software decides the existing cluster is under too much strain, we can increase capacity from bare metal in a matter of minutes.

Exported Variables

Ideally, what we want to provide this functionality is some form of exported variable which, when collected, contains all instances of that variable; that is, each RabbitMQ host would export its host name, and these could be aggregated. Puppet supports neither exporting variables nor exporting resources with the same name. Custom facts weren’t going to cut it either, as they are limited to node scope. Then I stumbled upon a neat solution by the good folks at Example42. Their exported variables class quite cleverly exports a variable as a file:

# $dir is the variable directory, defined elsewhere in the exported_vars module
define exported_vars::set (
  $value = '',
) {
  @@file { "${dir}/${::fqdn}-${title}":
    ensure  => present,
    content => $value,
    tag     => 'exported_var',
  }
}

This is realized on the puppet master by including the following class:

class exported_vars {
  file { $dir:
    ensure => directory,
  }
  File <<| tag == 'exported_var' |>>
}

And then a custom function is able to look at all the files in the directory, matching on FQDN, variable name, or both, and return an array of values. It also falls back to a specified default value if no matches are found. Perfect!
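To illustrate, here is a hypothetical pair of calls to that function. The argument order (FQDN match, variable name, default) is taken from the profile class shown later in this post; the host name in the second call is made up for the sake of the example:

# Gather every exported 'nova_mq_node' value, regardless of which node
# exported it, falling back to ['localhost'] when nothing has been exported yet
$cluster_nodes = get_exported_var('', 'nova_mq_node', ['localhost'])

# Restrict the match to a variable exported by one specific host
$first_node = get_exported_var('rabbit01.example.com', 'nova_mq_node', ['localhost'])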

Elastic RabbitMQ Cluster

Here’s a concrete example of putting this pattern to use:

class profile::nova_mq {

  $nova_mq_username = hiera('nova_mq_username')
  $nova_mq_password = hiera('nova_mq_password')
  $nova_mq_port     = hiera('nova_mq_port')
  $nova_mq_vhost    = hiera('nova_mq_vhost')

  $mq_var = 'nova_mq_node'

  exported_vars::set { $mq_var:
    value => $::fqdn,
  }

  class { 'nova::rabbitmq':
    userid             => $nova_mq_username,
    password           => $nova_mq_password,
    port               => $nova_mq_port,
    virtual_host       => $nova_mq_vhost,
    cluster_disk_nodes => get_exported_var('', $mq_var, ['localhost']),
  }
  contain 'nova::rabbitmq'
}


A quick run-through of what happens: when the node is first provisioned, the exported variable is stored in PuppetDB and the RabbitMQ server is installed. Here we can see the get_exported_var function being used to gather all instances of nova_mq_node that exist, but as this is the first run of the first node we default to an array containing only the local host. When the puppet agent next runs on the puppet master, the exported file is collected and created on disk. Finally, the second run on the RabbitMQ node will pick up the exported variable and add it to the list of cluster nodes.


Some notes to be aware of:

  • exported_vars doesn’t recursively purge the directory by default, so nodes which are retired leave their old variables lying about; you’d also need to have dead nodes removed from PuppetDB
  • there are no dependencies between the file and directory creation, so it may take a couple of runs to get fully synced
  • with load-balanced puppet masters it’s a bit hit or miss whether a given master has collected the exported variables when you run your agents. This can be mitigated by provisioning the variable directory on shared storage (think clustered NFS on a redundant GFS file system)
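The first two points can be mitigated with a small tweak to the collecting class. The following is my own sketch, not part of the Example42 module as shipped: it turns on recursive purging so files from retired nodes are cleaned up, and adds an explicit dependency so the directory always exists before the exported files are managed.

class exported_vars {
  file { $dir:
    ensure  => directory,
    recurse => true,
    purge   => true,  # remove variable files no longer backed by an exported resource
  }
  # Make sure the directory is in place before any exported file is realized
  File <<| tag == 'exported_var' |>> {
    require => File[$dir],
  }
}

Note that purging only helps once the dead node’s exported resources are gone from PuppetDB, so you’d still deactivate retired nodes there (e.g. with puppet node deactivate) as part of decommissioning.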

And there you have it: almost effortless elasticity for your core services, provided by Puppet orchestration.