Posts by Jenny Centred

Programming, Technology

Integrating Icinga2 with InfluxDB and Grafana

Typically when you are monitoring a platform for performance metrics you will inevitably end up considering something like Collectd or Diamond for collecting metrics and Graphite for their receipt, storage and visualisation.  That was state of the art three years ago, and times change rapidly in computing.  I’d like to take you on a journey through how we developed our current monitoring, alerting and visualisation platform.

The Problem With Ceph…

Ceph is the new starlet on the block for scale-out, fault-tolerant storage. We’ve been operating a petabyte-scale cluster in production for well over two years now, and one of the things you soon learn is that when a journal drive fails it’s a fairly big deal. All drives reliant on that journal disk are fairly quickly removed from the cluster, which results in objects being replicated to replace the lost redundancy and redistributed across the cluster to cater for the altered topology. This process, depending on how much data is in the cluster, can take days to complete and unfortunately has a significant impact on client performance. Luckily we as an operator handle these situations for you in a way that minimises impact, typically as a result of being woken up at 3am!

The dream, however, is to predict that an SSD journal drive is going to fail and proactively replace it during core working hours, transparently to the client. Initially, with one vendor’s devices, we noted that I/O wait times increased quite dramatically before the device failed completely, giving plenty of notice (in the order of days) that the device should be replaced. Obviously this has a knock-on effect on storage performance, as writes to the cluster are synchronous to ensure redundancy, so it is not the best situation.

Eventually we changed to devices from another manufacturer, which last longer and offer better performance. The downside is that they no longer exhibit slow I/O before failing. They just go pop; cue a mad scramble to stop the cluster rebalancing and replace the failed journal hastily.

Can we do anything to predict failure with these devices? The answer is possibly. SMART monitoring of ATA devices allows the system to interrogate the device and pull off a number of metrics and performance counters that may give clues as to impending failure. The existing monitoring plug-ins available with our operating system were only capable of working with directly attached devices, so monitoring ATA devices behind a SAS expander was impossible, and they only alert when the SMART firmware itself predicts a failure, which I have never seen in the field! This led to my authoring the check-scsi-smart plug-in, which allows the vast majority of devices in our platform to be monitored, every available counter to be exposed individually via performance data, and alerts to be raised individually based on user-provided warning and critical thresholds.

Data Collection

A while ago I made the bold (most will say sensible) statement that Nagios/Icinga was no longer fit for purpose and needed replacing with a modern monitoring platform. My biggest gripes were the reliance on things like NRPE and NSCA. The former has, quite frankly, broken and insecure transport. The latter has none at all, so when it comes to throwing possibly sensitive monitoring metrics across the public internet in plain text these solutions were pretty much untenable.

Luckily the good folks at Icinga had been slavishly working away at a ground-up replacement for the old Nagios-based code. Icinga2 is a breath of fresh air. All communications are secured via X.509 public key cryptography, and connections can be initiated by either end point, so they work behind a NAT boundary. Hosts can monitor themselves, thus distributing the load across the platform, and they can also raise notifications about themselves, so you are no longer reliant on a central monitoring server; check results are still propagated towards the root of the tree. Configuration is generated on a top-level master node and propagated to satellite zones and end hosts. The system is flexible so it need not work in this way, but I’ve arrived at this architecture as a best practice.
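To make that hierarchy concrete, here is a minimal sketch of such a zone layout; the satellite name is an assumption for illustration and the matching Endpoint objects are omitted for brevity.

// Configuration flows down from the master zone to satellites and hosts;
// check results flow back up towards the root.
object Zone "icinga2.example.com" {
  endpoints = [ "icinga2.example.com" ]
}

object Zone "satellite0.example.com" {
  endpoints = [ "satellite0.example.com" ]
  parent = "icinga2.example.com"
}

object Zone "ceph-osd-0.example.com" {
  endpoints = [ "ceph-osd-0.example.com" ]
  parent = "satellite0.example.com"
}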

For me the real genius is how service checks are applied to hosts. Consider the following host definition:

object Host "ceph-osd-0.example.com" {
  import "satellite-host"

  address = "10.10.112.156"
  display_name = "ceph-osd-0.example.com"
  zone = "icinga2.example.com"

  vars.kernel = "Linux"
  vars.role = "ceph_osd"
  vars.architecture = "amd64"
  vars.productname = "X8DTT-H"
  vars.operatingsystem = "Ubuntu"
  vars.lsbdistcodename = "trusty"
  vars.enable_pagerduty = true
  vars.is_virtual = false

  vars.blockdevices["sda"] = {
     path = "/dev/sda"
  }
  vars.blockdevices["sdb"] = {
     path = "/dev/sdb"
  }
  vars.blockdevices["sdc"] = {
     path = "/dev/sdc"
  }
  vars.blockdevices["sdd"] = {
     path = "/dev/sdd"
  }
  vars.blockdevices["sde"] = {
     path = "/dev/sde"
  }
  vars.blockdevices["sdf"] = {
     path = "/dev/sdf"
  }
  vars.blockdevices["sdg"] = {
     path = "/dev/sdg"
  }

  vars.interfaces["eth0"] = {
     address = "10.10.112.156"
     cidr = "10.10.112.0/24"
     mac = "00:30:48:f6:de:fe"
  }
  vars.foreman_interfaces["p1p1"] = {
     address = "10.10.104.107"
     mac = "00:1b:21:76:86:d8"
     netmask = "255.255.255.0"
  }
  vars.interfaces["p1p2"] = {
     address = "10.10.96.129"
     cidr = "10.10.96.0/24"
     mac = "00:1b:21:76:86:d9"
  }

}

Importing satellite-host basically inherits a number of parameters from a template that describe how to check that the host is alive, and how often. The zone parameter describes where this check will be performed from, e.g. the north-bound Icinga2 satellite. The vars data structure is a dictionary of key/value pairs and can be utterly arbitrary. In this example we define everything about the operating system, the architecture and machine type, and whether or not the machine is virtual. Because this is generated by the Puppet orchestration software, we can inspect even more parts of the system, e.g. block devices and network interfaces. The possibilities are endless.
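For reference, a minimal sketch of what such a template might contain is shown below; the check command and intervals are illustrative assumptions rather than our production values.

template Host "satellite-host" {
  // Illustrative values: how host liveness is checked and how often.
  check_command = "hostalive"
  max_check_attempts = 3
  check_interval = 1m
  retry_interval = 30s
}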

object CheckCommand "smart" {
  import "plugin-check-command"
  command = [ "sudo", PluginDir + "/check_scsi_smart" ]
  arguments = {
     "-d" = "$smart_device$"
  }
}

The CheckCommand object defines an executable to perform a service check. Here we define the check as having to run with elevated privileges, and give its absolute path. You can also specify potential arguments: in this case, if the smart_device macro can be expanded (it will look in host or service variables for a match) then the option is generated on the command line together with its parameter. There is also provision to emit an option on its own, without a parameter, if need be.
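As a hedged illustration of that last point, the arguments block above could be extended with a flag driven by set_if; the -v option and the smart_verbose variable here are hypothetical and not part of the real plug-in.

  arguments = {
    "-d" = "$smart_device$"
    // Hypothetical flag: emitted on its own, with no parameter,
    // only when the smart_verbose custom variable expands to true.
    "-v" = {
      set_if = "$smart_verbose$"
    }
  }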

apply Service "smart" for (blockdevice => attributes in host.vars.blockdevices) {
  import "generic-service"
  check_command = "smart"
  display_name = "smart " + blockdevice
  vars.smart_device = attributes.path
  zone = host.name
  assign where match("sd*", blockdevice)
}

The last piece of the jigsaw is the Service object. Here we are saying that for each blockdevice/attributes pair on each host, if the block device name begins with sd then apply the service check to it. This way you write the service check once and it is applied correctly to every SCSI disk on every host, with no host-specific hacks ever involved. Much like the host definition, generic-service is a template that defines how often a check should be performed, and the zone which performs the check is the host itself. The check_command selects which check to perform, as defined above, and we set vars.smart_device to the device path of the block device, which is picked up by the macro expansion in the check command as discussed earlier.
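The same pattern extends to any of the host variables. As a purely illustrative sketch (the interface CheckCommand and its address variable are assumptions, not part of our configuration), the network interfaces from the host definition above could be checked in exactly the same way:

apply Service "interface" for (interface => attributes in host.vars.interfaces) {
  import "generic-service"
  // "interface" is a hypothetical CheckCommand for the sake of this sketch.
  check_command = "interface"
  display_name = "interface " + interface
  vars.interface_address = attributes.address
  zone = host.name
  assign where host.vars.interfaces
}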

Time Series Data Collection

With that all in place we now have a single pane of glass view onto all current states of all SCSI devices on all hosts. However what we really need is to gather all of these snapshots into a database which allows us to plot the counters over time, derive trends that indicate potential disk failure and then set alerting thresholds accordingly.

Previously we had Graphite’s Carbon aggregating statistics we gathered via Collectd. However, with several hundred servers sending many tens of metrics a second, it wasn’t up to the task: even with local SSD-backed storage the I/O queues were constantly full to capacity. We needed a better solution, and one which looked promising was InfluxDB. Although a fledgling product still in flux, it is built to perform many operations in memory, support clustering for horizontal scaling and be schema-less. To illustrate, take a look at the following example from my test environment.

load,domain=angel.net,fqdn=ns.angel.net,hostname=ns,service=load,metric=load15,type=value value=0.05 1460907584

The measurement load is in essence a big bucket that all metrics to do with load fit into. Arbitrary pieces of meta data can be associated with a data point; here we attach the domain, fqdn and hostname, which are useful for organising data based on physical location. The metric tag correlates with a performance data metric returned by a monitoring plug-in, and the type references the field within that performance data, in this case the actual value, but it may equally represent alerting thresholds or physical limits. The value field records the actual data value, and the final figure is the time stamp, here at one-second precision although the default is nanoseconds.

By arranging data like this you can ask questions such as, give me all metrics of type value, from the last hour for hosts in a specific domain, grouping the data by host name. I for one find this a lot more intuitive than the existing methodologies bound up in Graphite. You can also query the meta data asking questions like, for the load metric, give me all possible values of hostname, which makes automatically generating fields a dream.
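In InfluxQL those two questions might look something like the following; this is a sketch against the example measurement above rather than a copy of our production queries.

-- All raw values from the last hour for hosts in one domain, grouped by host name.
SELECT "value" FROM "load"
  WHERE "type" = 'value' AND "domain" = 'angel.net' AND time > now() - 1h
  GROUP BY "hostname"

-- Query the meta data: every host name that has ever reported a load measurement.
SHOW TAG VALUES FROM "load" WITH KEY = "hostname"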

The missing part of this puzzle is getting performance data from Icinga2 into InfluxDB, along with all the tags which make InfluxDB so powerful. Luckily I was able to spend a few days making this a reality; although still in review at the time of writing, it looks set to be a great addition to the ecosystem.

library "perfdata"

object InfluxdbWriter "influxdb" {
  host = "influxdb.angel.net"
  port = 8086
  database = "icinga2"
  ssl_enable = false
  ssl_ca_cert = "/var/lib/puppet/ssl/certs/ca.pem"
  ssl_cert = "/var/lib/puppet/ssl/certs/icinga.angel.net.pem"
  ssl_key = "/var/lib/puppet/ssl/private_keys/icinga.angel.net.pem"
  host_template = {
    measurement = "$host.check_command$"
    tags = {
      fqdn = "$host.name$"
      domain = "$host.vars.domain$"
      hostname = "$host.vars.hostname$"
    }
  }
  service_template = {
    measurement = "$service.check_command$"
    tags = {
      fqdn = "$host.name$"
      domain = "$host.vars.domain$"
      hostname = "$host.vars.hostname$"
      service = "$service.name$"
      fake = "$host.vars.nonexistant$"
    }
  }
}

Here’s the current state of play. It allows a connection to any port on any host, specification of the database to write to, and optional full SSL support. The powerful piece is in the host and service templates, which allow the measurement to be set, typically to the check_command e.g. ssh or smart, and any tags to be derived from the host or service objects; if a value doesn’t exist, the tag is simply not generated for that data point. Remember how we can associate all manner of meta data with a host? Well, all of that rich data is available here to be used as tags.

Presentation Layer

Putting it all together we need to visualise this data, and we chose Grafana. Below is a demonstration of where we are today.

The dashboard is templated on the domain, which is extracted from InfluxDB meta data. We can then ask for all hosts within that domain, and finally all mount points on that host in that domain. This makes organising data simple, flexible and extremely powerful. Going back to my example on Ceph journals, I can now select the domain a faulty machine resides in, select the host and the disk that has failed, and then look at individual performance metrics over time to identify predictive failure indicators, which can then be fed back into the monitoring platform as alert thresholds. Luckily I have as yet been unable to test this theory, as nothing has gone pop.

There you have it, from problem to modern and powerful solution. I hope this inspires you to have a play with these emerging technologies and come up with innovative ways to monitor and analyse your estates, predict failures and plan for capacity trends.

Update

Quite soon after this functionality was introduced we experienced an OSD journal failure. Now to put the theory to the test…

As the graphic depicts, for the failing drive certain counters start to increase from zero before the drive is about to fail. Importantly, these increase gradually over a period of several weeks before the drive fails completely. Crucially we now have visibility of potential failures and can replace drives in time periods that are less likely to cause customer impact, and at a healthier time of the day. Failures can also be correlated with logical block addresses written, which now enables us to predict operating expenditure over the lifetime of the cluster.

Updated blog post 23 August 2016

Icinga 2.5 is now in the wild! See my updated blog post on integrating your own monitoring platform with InfluxDB and Grafana.

Programming, Technology

Ceph Monitoring with Telegraf, InfluxDB and Grafana

Ceph Input Plug-in

An improved Ceph input plug-in for Telegraf is at the core of how Data News Blog collects the metrics to be graphed and analysed. You can follow the progress here as the code makes its way into the main release.  Eventually you too will be able to enjoy it as much as we do.

Our transition to InfluxDB as our time-series database of choice motivated this work. Some previous posts go some way towards showing why we love InfluxDB. The ability to tag measurements with context-specific data is the big win: it helps us create simplified dashboards with less clutter which adapt dynamically.

The existing metrics were collected with the Ceph collector for Collectd and stored in Graphite.  Like-for-like functionality was not available for Telegraf, so we decided to contribute code that met our needs.  Setting up the Ceph input plug-in for Telegraf is intended to be simple: for those familiar with Ceph, all you need to do is make a configuration available which can find the cluster, along with a key which provides access to it.

Configuration

The following shows a typical set up.

[[inputs.ceph]]
  interval = '1m'
  ceph_user = "client.admin"
  ceph_config = "/etc/ceph/ceph.conf"
  gather_cluster_stats = true

The interval setting is fairly relaxed. When the system is under heavy load, e.g. during recovery operations, measurement collection can take some time.  Instead of having the collection time out, we make sure that there is enough time for it to complete.  After all, the reason we want the measurements is to see what happens during these heavy operations; it is no good if we have no data.  This was also part of our motivation for the work, as the Collectd plug-in fell into exactly this trap.

The ceph_user setting specifies the user to attach to the cluster with.  It allows the collector to find the access key and optionally pick up additional settings from the configuration file.  The default of client.admin is found automatically by the ceph command when the plug-in runs it.  The key location can also be set in the configuration file for that user if necessary.

The ceph_config setting tells the plug-in where to find the settings for your ceph cluster.  Normally this will tell us where we can make contact with it and also how to authorise the user.  Finally the gather_cluster_stats option turns the collection of measurements on.
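For completeness, a minimal ceph.conf along the lines of the following is enough for the plug-in to reach the cluster and authenticate; the fsid, monitor addresses and keyring path here are placeholders, not our real values.

[global]
# Placeholders: substitute your own cluster ID and monitor addresses.
fsid = 00000000-0000-0000-0000-000000000000
mon_host = 10.10.0.1,10.10.0.2,10.10.0.3
auth_client_required = cephx

[client.admin]
# Where the access key for the ceph_user above can be found.
keyring = /etc/ceph/ceph.client.admin.keyring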

Measurements

So what does the plug-in measure?  It all comes down to running the ceph command.  People who have used this before should have an idea about what it can do.  For now the plug-in collects the cluster summary, pool use and pool statistics.

The cluster summary (ceph status) measures things like how many disks you have, if they are in the cluster and if they are running.  It also gives a summary of the amount of space used and available, how much data is being read and written and the number of operations being performed.  The final things measured are the states of placement groups so you can see how many objects are in a good state, and how many need to be fixed to bring the cluster back into a healthy state.

Pool usage (ceph df) shows you the amount of space available and used per pool.  It also shows you the number of objects stored in each pool.  These measurements are tagged with the pool name.  This is useful because pools may be located on specific groups of disks, for example hard drives or flash drives, and you can then monitor and manage these as logically separate entities.

Pool statistics (ceph osd pool stats), much like the global statistics, show at a per-pool level the number of reads, writes and recovery operations each pool is handling.  Again these are tagged with the pool name and can be used to manage hard drives and solid state drives independently even though they are part of the same cluster.
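In other words, the plug-in gathers roughly what you would get from the equivalent admin commands, requested as JSON so the output can be parsed into fields; the exact invocations inside the plug-in may differ slightly.

$ ceph --conf /etc/ceph/ceph.conf --name client.admin status --format json
$ ceph --conf /etc/ceph/ceph.conf --name client.admin df --format json
$ ceph --conf /etc/ceph/ceph.conf --name client.admin osd pool stats --format json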

Show Me The Money

A brief look at what can be collected is all well and good, but a real-life demonstration speaks a thousand words.

Here is a live demonstration of the plug-in running during an operation performed recently.  This was an operation that moved objects between servers so that we are now able to handle an entire rack failing.  This protects us against a switch failure and allows us to power off a rack to reorganise it.

Global Cluster Statistics

The top pane shows the overall cluster state.  The first graph on the left shows the state of all placement groups.  When the operation begins, groups that were clean become misplaced and must be moved to new locations.  From this we can make predictions about how long the maintenance will take and provide feedback to our customers.  You can also see a distinct change in the angle of the graph as the SSD storage completes.  Substantially quicker, I think you’ll agree!

To the right we can see the number of groups which are degraded, e.g. only have two object copies rather than the full three, and the number of misplaced objects.  The former is interesting in that it shows how many objects are at risk from a component failure, which would reduce the number of copies down to one.

Per-Pool Statistics

The lower pane is constructed per pool; the pool name is selected at the top of the page.  Here we are displaying (left to right, top to bottom) the number of client operations per second, the storage used and available, the amount of data read and written, and finally the number of objects recovering per second.

Here we can see that although the peak number of client operations is reduced, it hardly goes below the minimum seen before the operation started.  This is good news because it means we can handle the customer workload and recover without too much disruption.  Importantly, we are able to quantify the impact a similar operation is likely to have in the future.

Some other interesting uses would be to watch for operations, reads or writes ‘clipping’, which would mean you have reached the limits of the available devices and need to add more.  If a pool is less about performance and more about the amount of data, such as a cold storage pool, then the utilisation graph can be used to plan for the future and predict when you will need to expand.

Summing Up

We have demonstrated the upcoming improvements to the Ceph input plug-in for Telegraf, shown what can be collected with it and how this can improve your level of service by gleaning insight into the impact of maintenance on performance, and predicting future outcomes.

As always if you like it, please try it out, share your experiences and help us to improve the experience of running a Ceph cluster for the world as a whole.  The InfluxData community is very friendly in my experience so if you want to make improvements to this or other input plug-ins give it a go!

Update 31 August 2016

As of today the patch has hit the master branch so feel free to check out and build the latest Telegraf. Alternatively it will be released in the official 1.1 version.

Technology

Icinga 2.5 and InfluxDB

This post looks at the official release of Icinga 2.5, featuring the InfluxDB writer plug-in. In my previous post I delved into why we integrated Icinga 2 with InfluxDB, with excellent results: most of all, not a single storage-related alert since, and blissful sleep my reward. That was, however, performed with a very early cut of the code before it had even hit the community servers. Icinga 2.5 is released today to the general public after five months of finesse, bug fixing and improvements to error reporting. I think it only right to outline the official line protocol used to transfer data from Icinga2 to InfluxDB, my personal Icinga2 configuration, and how to get the most out of your performance metrics with Grafana.

Line Protocol

Before discussing configuration, let’s have a look at what actually gets passed on the wire between Icinga and the InfluxDB server.

disk,domain=angel.net,fqdn=puppet.angel.net,hostname=puppet,instance=/,metric=/ crit=38016122880,max=42240835584,value=9263120384,warn=33792458752 1471951338

A quick refresher for those unfamiliar with InfluxDB. The first element in the line protocol is the measurement name, in this case data from the disk check. An optional list of tags follows, which are utterly arbitrary text keys and values. A second list defines fields, which are typed values e.g. floating point, integer, boolean etc. The last figure is the time stamp.

The bits that Icinga 2 gives you for free are:

  • metric: this tag is the label associated with a check’s performance data, in this case ‘/’, the mount point being examined
  • value: this field is the value returned by the performance data
  • min, max, warn, crit: these fields are optionally added if available from the performance data and enabled with the enable_send_thresholds option

We format all fields extracted from performance data as floating point values, as we have no idea what type the original script intended. You can also enable meta-data fields, e.g. check state, with the enable_send_metadata option; these are formatted based on their internal type, as we do know what they are meant to be.

Icinga 2 InfluxDB Writer Configuration

Global Configuration

This is my personal configuration, but it will be the Data News Blog standard very soon.

/**
 * The InfluxdbWriter type writes check result metrics and
 * performance data to an InfluxDB HTTP API
 */

library "perfdata"

object InfluxdbWriter "influxdb" {
  host = "influxdb.angel.net"
  port = 8086
  database = "icinga2"
  host_template = {
    measurement = "$host.check_command$"
    tags = {
      fqdn = "$host.name$"
      hostname = "$host.vars.hostname$"
      domain = "$host.vars.domain$"
    }
  }
  service_template = {
    measurement = "$service.check_command$"
    tags = {
      fqdn = "$host.name$"
      hostname = "$host.vars.hostname$"
      domain = "$host.vars.domain$"
      instance = "$service.vars.instance$"
    }
  }
  enable_send_thresholds = true
}

Host checks use the host_template and service checks use the service_template to determine which tags are added to the data points as they are sent to the InfluxDB server. Most of this is common sense. Using Puppet facts I export Icinga 2 host definitions populated with various custom variables. When the InfluxDB writer plug-in creates a data point it interpolates macros like $host.vars.domain$, retrieves the actual domain name from the host object and sends this as a tag over the wire; furthermore, if a macro expansion fails the tag is simply not added.

Dynamic Instance Tagging

I talked in my previous post about applying services to every element of a hash defined in the host variables. Consider the following check:

apply Service "mount" for (mount => attributes in host.vars.mounts) {
  import "generic-service"
  check_command = "disk"
  display_name = "mount " + mount
  vars.disk_ereg_path = attributes.path
  vars.instance = mount
  zone = host.name
  assign where host.vars.mounts
}

This iterates over every mount defined in the host.vars.mounts hash and checks that specific instance. Sometimes you want to know which instance the check was for, more so if the performance data label is the same for all invocations on different resources. This example illustrates setting the service.vars.instance variable for a specific service check instance. The instance macro defined in the InfluxDB writer service template picks up the service variable if it is defined.
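The effect on the wire is simply an extra tag on each data point. As an illustration, a check applied to a hypothetical /var mount would produce a line much like the disk example earlier, with the values here made up:

disk,domain=angel.net,fqdn=puppet.angel.net,hostname=puppet,instance=/var,metric=/var value=1263120384 1471951338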

Grafana Dashboards

This is perhaps the most fun section as the results are tangible, and it should give you inspiration as to how to craft your tags in Icinga 2 to create useful and well organised dashboards for your own needs. My personal preference is to have machines organised by domain, as typically you will have the same host names in different domains.

It makes the whole thing more manageable rather than having one huge list keyed on the fully-qualified domain name. As you can see below, the real power comes when we are able to query the schema and work out what mounts there are on a specific system, based on the instances we have applied service checks to. The same goes for block devices, network interfaces, PHYs and certificates; the possibilities are endless. Creating a single graph and then displaying context-specific data for a particular instance keeps dashboards clean.

Templates

Pick a generic measurement which is available on every host to configure your template variables for things like host name and domain. To get the $domain variable, for example, look at the load measurement and extract all the possible values for the domain tag.

For $hosts perform a similar look-up, extracting all values that exist for the hostname tag, but constrained to only the measurements that exist in the selected domain. Later variable queries can consume prior template variables.

The $mount template variable is similar in that we find all values for the instance tag from the disk measurement, but constrained to only measurements for a particular host in a particular domain. The Grafana documentation of the InfluxDB back-end explains the odd looking syntax for constraining the queries.
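Concretely, the three template variable queries end up looking something like the following in Grafana’s InfluxDB query editor; treat this as a sketch, with measurement and tag names following the conventions used in this post.

-- $domain: every value of the domain tag seen on the load measurement
SHOW TAG VALUES FROM "load" WITH KEY = "domain"

-- $hosts: host names, constrained to the currently selected domain
SHOW TAG VALUES FROM "load" WITH KEY = "hostname" WHERE "domain" =~ /^$domain$/

-- $mount: disk instances for the selected host in the selected domain
SHOW TAG VALUES FROM "disk" WITH KEY = "instance" WHERE "hostname" =~ /^$hosts$/ AND "domain" =~ /^$domain$/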


Graphs

Finally we need to create a graph to display the data. The image below depicts how to do this. Simply put, we select all data from the disk measurement where the domain, hostname and instance match the selected template variables, and then select the value field for display.
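The query behind that graph is along these lines; $timeFilter is Grafana’s built-in time range macro for the InfluxDB back-end.

SELECT "value" FROM "disk"
  WHERE "domain" =~ /^$domain$/ AND "hostname" =~ /^$hosts$/
    AND "instance" =~ /^$mount$/ AND $timeFilter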


Conclusions

And there you have it! Hopefully I have fostered inspiration. Try it out. Share your experience.

Technology

After the dust settles – is Max Schrems at the vanguard of a revolution?

As the dust settles after the European Court of Justice (ECJ) ruling in the Max Schrems case last Tuesday (6th October), we are all left wondering what exactly are the implications for the global IT market and the European data centre market in particular.

One thing is for sure: the Snowden revelations still have a long way to play out. People were – and, to a large degree, still are – genuinely uncertain about the degree to which communication was being eavesdropped on.

Will that change now that a high court has found that the US is engaged in the surveillance of European citizens?

Do we care?

While, in the wake of Snowden’s revelations, some consumers moved to encrypted mail services like Lavaboom, Protonmail and Tutanota, they represent only a tiny percentage of internet users. On a personal level there is still a great deal of ambivalence, suggesting the majority of people take the view that if the NSA looks at their data it is not of huge relevance.

But of course, this can’t be translated into the commercial space. Companies hold increasing amounts of data on many, many individuals and one thing the Max Schrems case does clearly illustrate is that not all of their customers are so sanguine about the NSA’s mass surveillance schemes.

‘Frivolous’

Enough people are sufficiently unhappy to make a difference: Max Schrems crowdfunded his legal challenge to the ‘Safe Harbour’ Agreement – the agreement that effectively allowed US tech companies to self-certify their compliance with EU data regulations – and demonstrated the possibilities when individuals and corporations mobilise politically to influence the shape of legislation.

The Irish data commissioner initially rejected Schrems’ complaint (that challenged Facebook’s process of exporting his data to the USA and thereby exposing it to NSA spying) as ‘frivolous’. Now the ECJ ruling has demanded that the Irish courts reconsider his complaint with due diligence, effectively invalidating safe harbour as it did so.

For the moment, national data protection authorities will now have to review each individual case concerning data transfers to the US. Meanwhile, Schrems has started a similar action against Facebook in his native Austria. The ruling opens the doors for more challenges to be lodged with the local supervisory bodies in each member state.

Commercial implications

The major US tech companies report they already have ‘work arounds’ but, of course, these will be open to scrutiny in each European country.

And, since the export of data to the US can no longer be justified under safe harbour, such data exports will require ‘model contract clauses’ to be negotiated which clearly set out the US provider’s privacy obligations.

At first glance, it would seem that there is an opportunity for European IaaS vendors here, as US businesses seek to reconfigure their network architecture so that European customer data stays within Europe. Inevitably, though, this will be greeted in some quarters as a step closer to the ‘balkanisation’ of the internet that commentators have long been warning about.

But, as the Microsoft data sovereignty case illustrates, simply holding data in Europe doesn’t guarantee it is subject to EU data protection standards if you’re using a US provider.

So perhaps there is an opportunity for European SaaS vendors too – if European businesses respond by bringing data back to Europe, then migrating to European service providers could be a less painful and more effective way to ensure EU data protection standards are applied.

Legal implications

The discovery that a very large proportion, if not the bulk, of everything being circulated is being monitored does change the game quite a lot.

Security agencies are somewhat reluctantly beginning to realise that they have probably gone beyond what was envisioned by the legal frameworks in which they are operating. Technology is rapidly evolving and the law must evolve just as rapidly.

It’s evolving and it’s complicated. And, to some extent, the full implications will only become clear when local privacy and data regulators make their judgements.

General Data Protection Regulation

In the meantime, the EU Council of Ministers is working to get agreement across all 28 member states on the General Data Protection Regulation (GDPR). As a regulation, rather than a directive, it doesn’t require further legislation by national governments to become law. If European leaders can agree on the GDPR’s ‘one stop shop’ then this will simplify the situation across Europe, although likely with more stringent requirements, effectively throwing all the dust back up into the air again.

Programming, Technology

Multiple Class Definitions With Puppet

One issue we’ve discovered with running Puppet orchestration is bumping into classes being multiply defined. In our setup all hosts get a generic role which, among other things, contains a definition of the Foreman puppet class to manage the configuration file (agent stanza) on all hosts. The problem comes when you include the puppet master role, which also pulls in the puppet class.

With hindsight the two roles should have been separated out so that all hosts include puppet::agent and the master(s) include puppet::master. But we are:

  1. Severely time constrained, being a start-up
  2. Keen to leverage updates and improvements automatically from the community

As such we just roll with the provided API and have to deal with the fallout. The first port of call for a newbie is to try to ignore the definition with some conditional code:

class profile::puppet {
  if !defined('profile::puppetmaster') {
    class { '::puppet': }
  }
}

or

class profile::puppet {
  if !defined(Class['profile::puppetmaster']) {
    class { '::puppet': }
  }
}

Either of these will land you with the same problem of this puppet definition clashing with the one defined in profile::puppetmaster. It’s a common mistake, but one that can be remedied. Oddly enough the second example did somehow work in our production environment, but upon playing about with the pattern to understand its inner workings I just could not recreate it! This led to the development of the following. Can’t keep a good academic down, even when in the role of sysadmin!

Allowing Multiple Class Definitions In Multiple Locations

Now here is how my compiler-background head works. The previous examples rely on the entire manifest being parsed before the defined function can be evaluated, at which point you’re already too late. If, however, the conditional could be evaluated at file parse time, and it resolves to false, then why bother parsing the code block at all?

class profile::puppet {
  if $::fqdn != $::puppetmaster {
    class { '::puppet': }
  }
}

Here we are comparing facts, which are available before every run and can be evaluated at parse time ($::puppetmaster is provided by Foreman). The code works exactly as you’d expect every time, regardless of ordering.

Obviously this may not be the official Puppet methodology, and it is more than likely dependent on the underlying implementation of the parsing and execution engine. It does, however, provide a quick get-out-of-jail-free option for when the resources aren’t available to do the job properly.

Programming, Technology

Getting started with OpenStack’s Heat

Introduction

OpenStack’s Heat is the project’s infrastructure orchestration component, and can simplify deploying your project on a cloud platform in a way that’s repeatable and easy to understand.  If you’ve come from the Amazon AWS world then it’s analogous to CloudFormation, and indeed it provides compatibility with this service making migration from AWS to OpenStack a little less painful.  However, Heat has its own templating format and that’s what we’ll walk through today.

This post is a quick tutorial on getting started with your first Heat template, and will deploy a pair of webservers together with a loadbalancer as an example.

Heat Templates

Let’s jump right in and take a look at the contents of the template that we’ll use to deploy our infrastructure. These template files are typically formatted in YAML and comprise three main sections:

  • Parameters
  • Resources
  • Outputs


heat_template_version: 2014-10-16

description: Demo template to deploy a pair of webservers and a loadbalancer

parameters:
  key_name:
    type: string
    description: Name of SSH keypair to be used for compute instance
  flavor:
    type: string
    default: dc1.1x1.20
    constraints:
      - allowed_values:
        - dc1.1x1.20
        - dc1.1x2.20
    description: Must be a valid Data News Blog Compute Cloud flavour
  image:
    type: string
    default: 6c3047c6-17b1-4aaf-a657-9229bb481e50
    description: Image ID
  networks:
    type: string
    description: Network IDs for which the instances should have an interface attached
    default: f77c6fdb-72ad-402f-9f1b-6bf974c3ff77
  subnet:
    type: string
    description: ID for the subnet in which we want to create our loadbalancer
    default: a8d1edfe-ac8c-49b0-a5c2-c72fa61decd2
  user_data:
    type: string
    default: |
      #cloud-config
      packages:
        - nginx
  name:
    type: string
    description: Name of instances
    default: webserver

resources:
  webserver0:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      flavor: { get_param: flavor }
      image: { get_param: image }
      networks: [{ network: { get_param: networks } }]
      user_data: { get_param: user_data }
      user_data_format: RAW

  webserver1:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      flavor: { get_param: flavor }
      image: { get_param: image }
      networks: [{ network: { get_param: networks } }]
      user_data: { get_param: user_data }
      user_data_format: RAW

  lb_pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      subnet_id: { get_param: subnet }
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  lb_members:
    type: OS::Neutron::LoadBalancer
    properties:
      pool_id: { get_resource: lb_pool }
      members: [ { get_resource: webserver0 }, { get_resource: webserver1 } ]
      protocol_port: 80

outputs:
  vip_ip:
    description: IP of VIP
    value: { get_attr: [ lb_pool, vip, address ] }

The first line – heat_template_version: 2014-10-16 – specifies the version of Heat’s templating language we’ll be using, with an expectation that within this template we could be defining resources available up to and including the Juno release.

The first actual section – parameters – lets us pass in various options as we create our Heat ‘stack’. Most of these are self-explanatory but give our template some flexibility should we need to do some customisation. When we provision our Heat stack there are a few options we’ll have to specify, such as the SSH key name we expect to use with our instances, and various values we can override such as the network we want to attach to, the subnet in which to create our loadbalancer, and so on. Where applicable there are some sensible defaults in there – in this example the IDs for network and for subnet are taken from my own demonstration project.

The next section – resources – is where most of the provisioning magic actually happens. Here we define our two webservers as well as a loadbalancer. Each webserver is of a particular type – OS::Nova::Server – and has various properties passed to it, all of which are retrieved via the get_param intrinsic function. The lb_pool and lb_members resources are similarly created, members in the latter being a list of our webserver resources.

Finally, the outputs section in our example uses another intrinsic function – get_attr – which returns a value from a particular object or resource. In our case this is the IP address of our load-balancer.

Putting it all together

Now that we have our template, we can look at using the heat command-line client to create our stack. Its usage is very straightforward; assuming we’ve saved the above template to a file called heatdemo.yaml, all we have to do to create our stack is the following:

$ heat stack-create Webservers --template-file heatdemo.yaml -P key_name=deadline \
    -P flavor='dc1.1x1.20' -P name=webserver
+--------------------------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+--------------------+----------------------+
| 433026fc-b543-4104-902f-d335e1ea189d | Webservers | CREATE_IN_PROGRESS | 2015-04-16T15:26:52Z |
+--------------------------------------+------------+--------------------+----------------------+

The stack-create option to the heat command takes various options, such as the template file we’d like to use. We can also inject various parameters using the command-line at this point, and in my example I’m specifying the SSH key name I wish to use as well as the size (flavor) of instance and a name for each machine that’s created. We can check on the stack’s progress as it’s created by looking in Horizon or again using the heat command:

$ heat stack-show Webservers | grep -i status
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |

Looks good so far – let’s take a look in Horizon. Under Project -> Orchestration -> Stacks we see our newly-created ‘Webserver’ stack. Clicking on that gives us a visual representation of its topology:

Clicking on ‘Overview’ summarises the various details for us, and in the ‘Outputs’ section we can see the IP of the VIP that was configured as part of the stack’s creation. Let’s test that everything’s working as it should from another host on the same network:

$ nova list | grep -i webserv
| efab9c99-ddc1-4cee-abfb-c3756233418e | Webservers-webserver0-ano27iof4iem | ACTIVE | - | Running | private=192.168.2.34 |
| d3eee79d-7ed4-4b27-8512-16cf201f82f3 | Webservers-webserver1-yiaeqoaxrcq5 | ACTIVE | - | Running | private=192.168.2.33 |
$ neutron lb-vip-list
+--------------------------------------+-------------+--------------+----------+----------------+--------+
| id | name | address | protocol | admin_state_up | status |
+--------------------------------------+-------------+--------------+----------+----------------+--------+
| d943c34b-8299-46ad-88e5-6f7d9d26b769 | lb_pool.vip | 192.168.2.32 | HTTP | True | ACTIVE |
+--------------------------------------+-------------+--------------+----------+----------------+--------+
$ ping -c 1 192.168.2.32
PING 192.168.2.32 (192.168.2.32) 56(84) bytes of data.
64 bytes from 192.168.2.32: icmp_seq=1 ttl=63 time=1.34 ms

--- 192.168.2.32 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.348/1.348/1.348/0.000 ms
$ nc -v -w 1 192.168.2.32 80
Connection to 192.168.2.32 80 port [tcp/http] succeeded!
$ curl -s 192.168.2.32:80 | grep -i welcome
<h1>Welcome to nginx!</h1>

Here we’ve verified that there’s two instances launched, that we’ve a loadbalancer and VIP configured, and then we’ve done a couple of basic connectivity tests to make sure the VIP is up and passing traffic to our webservers. The nginx default landing page that we can see from the output of the curl command means everything looks as it should.

Cleaning up

In order to remove our stack and all of its resources, the heat command really couldn’t be any simpler:

$ heat stack-delete Webservers
+--------------------------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+--------------------+----------------------+
| 433026fc-b543-4104-902f-d335e1ea189d | Webservers | DELETE_IN_PROGRESS | 2015-04-16T15:26:52Z |
+--------------------------------------+------------+--------------------+----------------------+

After a few seconds, another run of heat stack-list will show that the ‘Webservers’ stack no longer exists, nor do any of its resources:

$ nova list | grep -i webserv
zsh: done nova list |
zsh: exit 1 grep -i webserv
$ neutron lb-vip-list

Summary

This example shows how straightforward it can be to orchestrate infrastructure resources on OpenStack using Heat. This is a very basic and limited example – it’s possible to do much, much more with Heat including defining elastic and auto-scaling pieces of infrastructure, but hopefully this provides you with some insight and inspiration into how such a tool can be useful.

Technology

Virtual Private Networks in OpenStack

OpenStack does provide IPSec VPNaaS, which will inevitably be covered in a later blog post; however, I wanted to share my experiences with SSL-based OpenVPN.

So why am I doing this? Continuous integration and testing is always on my agenda. One thing you quickly learn with Puppet in production is that modifications tend to get layered upon one another and work, usually because the packages the changes depend on are already present. This doesn’t exercise whether the dependencies work from a clean install. To combat this I spin up a virtual reproduction of our infrastructure on a regular basis, avoiding nasty surprises when provisioning new machines. It also allows us to test new software releases in isolation and check that our code works in a completely different root domain. Lots of plus points!

Addressing all of these machines for automation purposes is going to take a lot of public IP addresses. Unfortunately, as we are all acutely aware, these are in short supply, so I wanted to limit my use to two: one for the virtual router and one for a VPN gateway onto my test network. Hopefully this is a pattern our clients can copy to avoid using too much of a finite resource, which above and beyond costing a fortune can impact other customers through address starvation. I picked OpenVPN as my tool of choice, mainly due to familiarity and ubiquity, and as an inquisitive young thing I wanted to twiddle some knobs on a lazy Saturday morning in bed.

VPN Setup

So first up on the agenda is securing the VPN tunnel with strong encryption, otherwise I’d just be using plain IP tunnelling! The simple way of performing these steps is to download easyrsa, which automates a lot of what is covered here, but I shall leave that as an exercise for the reader.

The following voodoo creates a large prime for Diffie-Hellman key exchange. This allows two computers to each generate and encode a private, one-time number, exchange the encoded values and derive a shared secret known to both parties. Anyone intercepting the encoded values will be unable to derive the shared secret, as you need one of the private one-time numbers to calculate it. The cool thing with the shared secret is that you can then use it as a symmetric encryption key and commence secure dialogue.

$ openssl dhparam -out dh2048.pem 2048

Next up we generate the private key and certificate for the certificate authority. The former you want to keep very safe! Why? The certificate is public and can be used to encrypt data and send it to a server; the private key is the only thing that can decrypt this data. If the private key is secure then you can guarantee that the only person who can read the message is the intended recipient.

$ openssl req -days 3560 -nodes -new -x509 -keyout ca.key -out ca.crt


Next up we create keys and a certificate signing request for the server, then have the CA sign the certificate. The signing process enables one server to trust another as their certificates will have been signed by a common certificate authority.

$ openssl req -days 3560 -nodes -new -keyout server.key -out server.csr 
$ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3560


Finally create a key and signed certificate for the client:

$ openssl req -days 3560 -nodes -new -keyout client.key -out client.csr 
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 3560

With the hard bit done, we can set up the OpenVPN server. After installing it, create the configuration /etc/openvpn/server.conf with the following and restart the OpenVPN service.

proto udp
dev tun

ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem

server 192.168.96.0 255.255.255.0
push "route 172.16.0.0 255.255.0.0"

keepalive 10 120
comp-lzo
persist-key
persist-tun
verb 3

A bit of explanation as to the settings. The first group of options specifies that we will be communicating via unreliable (but fast) UDP, and that we will be using a tunnel device to communicate, i.e. L3 packets will be sent and received. Next come the paths to the keys and certificates we just created, then the block defining the networking magic. The server option will allocate tunnel endpoint addresses out of the 192.168.96.0/24 range (unlikely to clash with wifi-allocated addresses when roaming with my laptop), and will advertise the 172.16.0.0/16 route to all clients. This is the internal network address block of my OpenStack tenant which I want to access from my laptop. And that’s it. Easy?

Next up, set up the client endpoint, much of which is self-explanatory; suffice to say remote is the public IP address of my VPN endpoint.

client
proto udp
dev tun
remote 85.199.252.151
nobind
persist-key
persist-tun
ca /home/simon/ca.crt
cert /home/simon/client.crt
key /home/simon/client.key
comp-lzo
verb 3

Firing up the client process works as expected; the tunnel device is allocated an address out of the correct pool:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:58:aa:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe58:aa49/64 scope link 
       valid_lft forever preferred_lft forever
26: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
    link/none 
    inet 192.168.96.6 peer 192.168.96.5/32 scope global tun0
       valid_lft forever preferred_lft forever

the correct routes are added

default via 192.168.0.254 dev eth0 
172.16.0.0/16 via 192.168.96.5 dev tun0 
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.1 
192.168.96.1 via 192.168.96.5 dev tun0 
192.168.96.5 dev tun0  proto kernel  scope link  src 192.168.96.6 


and I can ping the VPN endpoint’s private IP address, success!

PING 172.16.0.16 (172.16.0.16) 56(84) bytes of data.
64 bytes from 172.16.0.16: icmp_seq=1 ttl=64 time=1.24 ms

--- 172.16.0.16 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.245/1.245/1.245/0.000 ms


Firewall and Routing

But that success is short-lived. One aspect of our deployment of OpenStack is the default networking filters. These include rules specifying that IP packets leaving a virtual machine must only come from that machine. It makes sense that you don’t want one operator impersonating another virtual machine, however it makes routing impossible. In this case, if I want to ping another machine via the VPN gateway, that ICMP request needs to be routed by the VPN server to another box on the network. As the source address of this packet is 192.168.96.6 (the ping reply will be destined for this address), the packet gets filtered as soon as it leaves the VM, because it isn’t from 172.16.0.16. Additionally you’d need to advertise a route back to 192.168.96.0/24 for the reply, which is another added complexity.

Enter source network address translation. On the server we can specify that any packets routed out of the VM with a different source address are altered to look like they originated on the server, thereby bypassing the security filters. Awesome. When packets are returned from the other machine on the private network, the VPN server is then responsible for translating the destination back to the original sender and forwarding them on. How it does this is beyond the scope of this post! Here are my firewall rules:

Chain INPUT (policy DROP 3141 packets, 265K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 162K   26M ACCEPT     all  --  any    any     anywhere             anywhere             state RELATED,ESTABLISHED
10759  631K ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:ssh
    3   126 ACCEPT     udp  --  any    any     anywhere             anywhere             udp dpt:openvpn
   29  1456 ACCEPT     icmp --  any    any     anywhere             anywhere            

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  tun+   any     anywhere             anywhere            
    0     0 ACCEPT     all  --  any    any     172.16.0.0/16        anywhere            

Chain OUTPUT (policy ACCEPT 186K packets, 36M bytes)
 pkts bytes target     prot opt in     out     source               destination

Importantly we accept inbound OpenVPN traffic, or else the tunnel couldn’t be established, and we allow the forwarding of any packets coming out of a VPN tunnel device as well as any packets originating within the trusted private network. My NAT rules look like the following:

Chain PREROUTING (policy ACCEPT 13939 packets, 899K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 10793 packets, 633K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 27630 packets, 2320K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
27630 2320K MASQUERADE  all  --  any    eth0    anywhere             anywhere

This applies the SNAT previously described to packets from the VPN tunnel as they are routed out of eth0. And that’s it: I can now access 65,000 virtual machines with just a pair of public IP addresses.
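For anyone wanting to reproduce this, the tables above boil down to something like the following commands. The interface names and the OpenVPN port are taken from the output above; note also that the server needs kernel IP forwarding switched on, a detail not shown elsewhere in this post.

# Allow the VPN server to route between tun0 and eth0 at all.
$ sysctl -w net.ipv4.ip_forward=1

# Filter table: default deny, allow established traffic, SSH, OpenVPN and ICMP in,
# and forward anything arriving from a tunnel or from the trusted private network.
$ iptables -P INPUT DROP
$ iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
$ iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$ iptables -A INPUT -p udp --dport 1194 -j ACCEPT
$ iptables -A INPUT -p icmp -j ACCEPT
$ iptables -P FORWARD DROP
$ iptables -A FORWARD -i tun+ -j ACCEPT
$ iptables -A FORWARD -s 172.16.0.0/16 -j ACCEPT

# NAT table: masquerade anything routed out of eth0.
$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE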

Technology

What will drive the Northern Powerhouse?

Infrastructure isn’t just about transport – we need digital connectivity for future growth

What does business really require to deliver growth? What investment needs to go into a region in order to ensure future commercial and economic success? Are businesses, investors and Government really switched on to technological infrastructure requirements?

On the 24th February, I attended a North West Futures breakfast at Manchester Airport to address exactly these questions. I was part of a panel looking at transport and connectivity and how that needs to evolve under Greater Manchester’s devolved regional government in order to put in place the infrastructure needed to support the creation of a ‘Northern Powerhouse’.

For those that aren’t closely connected to the city: Manchester has been thrust to the forefront of the Government’s strategy of regional devolution and should have its own directly elected Mayor by 2017. The deal will give the Greater Manchester Combined Authority (GMCA) greater power over transport, housing, planning and public service reform as well as bring significant investment to the area in order to ‘maximise the economic potential of the North’.

At the breakfast, many of my fellow panellists, Jon Lamonte, Mike Blackburn and Eamonn Boylan, called for a long view of investment in the region. Jon Lamonte, Chief Executive at Transport for Greater Manchester: “We want to know what business wants, so we are delivering the right schemes over the next 20 years. Business knows more about what it needs to survive than anybody else. What do the ports and airports need from connectivity? What do manufacturing businesses need? Where should we be making those investments?”

We need to encourage a creative eco-system of pro-Northern groups, individuals and businesses across the public and private sector which can learn from each other, share skills and resources, shape the development and inward investment and capitalise on the disruptive growth that follows.

Manchester has a history of innovative capability and, especially in the nineteenth century, the confidence to act on it. But today’s economy requires a very different infrastructure than that of the nineteenth century. Yes, transport infrastructure is essential to the development of the region but, in today’s internet economy, spurring stronger business growth in the North also requires its great cities to enjoy far better digital connections.

Because the Internet is ubiquitous, it’s easy to assume it’s just “there”. But every piece of the Internet, every server providing services, has been bought, located and connected in advance of the provision of any service in anticipation of demand. There is no reason why cities such as Manchester or Leeds should not be seen as the ‘go-to’ locations for innovative enterprise technology. But to achieve this requires an understanding of the digital investment that will be required.

Central to any digital-age Northern Powerhouse must be its digital infrastructure.

I’m also acutely aware that some of the digital and network initiatives undertaken by Manchester City Council and the City’s Universities in the late ’80s and early ’90s led to the concentration of networks around Manchester Science Park that proved so vital to the initial development of the Internet companies I was involved in at the time.

However, it isn’t simply the provision of the underlying network that is important. At the North West Futures event, I was fortunate that Mike Blackburn, Chair of the GM LEP and Vice President, Strategy & Planning, Government & Health, at BT Global Services, was on the panel. He reminded me and the audience of the very significant investments that have been made in the region, largely by the private sector and by BT in particular, to provide access to high-speed broadband. These investments have already achieved in excess of 90% population coverage. Mike pointed out, however, that the take-up of these high-speed services remains very low.

It is too simple to say that all we need is basic infrastructure. We actually need both the underlying communications infrastructure and rich services provided over it. Whether those are retail services to consumers or business services, they are the crucial next step in a really vibrant digital economy. That is very much a challenge to private sector companies: to innovate in services. Eamonn Boylan, Chief Executive of Stockport Council, reminded us that the public sector’s support for business in the region, with consistent policy over many years, together with the recent transfer of direct funding to the GMCA, challenges the private sector to make the most of very fertile ground.

Another point raised at the breakfast meeting last week, by Taylor Wimpey’s UK Director of Planning, Jennie Daly, is worth recounting. She’s anxious that we “don’t get to the boundary of Greater Manchester and fall off a cliff.”

Manchester will require connectivity and richness of service not only internationally and with established UK tech centres such as London and Cambridge, but across the Northern communities, to ensure that the whole region benefits from the Northern Powerhouse’s economic success and that the resulting critical mass feeds back into further growth.

Technology

A Death Knell for the File

Why the Old Metaphors Don’t Work Anymore

I’ve already talked about how we need to change the way we think about storage. Partly because we are creating disunified silos of information on SAN and NAS distributed around the enterprise. Partly because increasing disk capacity is creating performance, redundancy and backup headaches. But this rethink is also being driven by another factor: the way we access data is changing.

A Death Knell for the File

One of the most common objections I hear in response to storage metaphors like object is that users really need files, with their familiar presentation of attributes such as modification times and permissions. But the way we interact with data today means that the ‘file’ is an increasingly outdated metaphor. Today, we interact with data through applications, not as files. The file paradigm is an increasingly unnecessary intermediate step, a legacy of how computer technology has evolved. Today’s generation of apps are focused on making our data meaningful before presenting it to us. We are far more engaged with visual rather than numerical information, and users in the future will interact with pre-processed information through analytic systems or other applications.

Not a Bucket but a Pipeline

The possibility of pre-processing information for users is waking people up to the idea that there is a lot of unused processing power in a storage system. Object stores are ideally placed for converting or processing raw data because they are built out of general-purpose servers. This is blurring the lines between storage and computation and presents a different paradigm. Storage is no longer something we can view as a bucket into which we dump stuff; it can be seen as a pipeline through which data moves. It’s a more functional and user-centric vision of what storage can be for users than the traditional bucket metaphor. The open-source object-storage system Ceph is architected with this kind of pre-processing in mind, offering hooks in the code for users to write custom plugins that execute on write or read. This kind of pre-processing is already happening in many industries: adding watermarks to images on ingest in the media industry, or first-stage analysis of survey data in the oil and gas industries.

Creating New Metaphors

If information silos within the enterprise and the performance, redundancy and backup headaches of ever-larger disks are the negative drivers forcing us to rethink the way we approach storage, then the new ways we access data and the opportunities for pre-processing stored data are the positive forces shaping the way we should think about data storage. And in response to these positive forces, too, object storage is the most obvious and commonsense answer.

Technology

Is your API open today?

I saw a story last week which caught my eye about LinkedIn restricting its API to a small number of approved partners and a limited set of use cases. They have legitimate commercial reasons for doing this: they want to drive traffic to their own site rather than to their perceived competitors, Salesforce and Microsoft. Twitter, as mentioned in the article, did something similar a few years back.

Now one might pass this over as normal commercial behaviour and the usual rough and tumble between the major Internet properties. However, if you are a small company depending on this API for a feature in your application, this will have created a problem for you. More critically, if you are a small development company that has specialised in building applications around this API, you are in even more trouble. As someone put it, “don’t bet your business on a proprietary API from a large vendor”.

Now in the Cloud world there are also both open APIs and proprietary APIs out there (most notably AWS), and whilst it would be hard to see someone like Amazon restricting access to its API without significant commercial impact, it is fully within its rights to do so, or to make significant changes to it as it sees fit. As indicated, not a probable scenario. More interesting, though, is the implication within the story that concerns at LinkedIn about traffic and customers flowing to its competitors were the reason for shutting the drawbridge. This is certainly mirrored in the Cloud world, where different providers may support different APIs and interoperability is not so commonplace. Most large public Cloud providers want to get your data in and keep it there rather than make it easy for you to port that data between infrastructure and application Clouds.

It was interesting to read this further article highlighting the benefits that Open APIs can bring to the creation of Digital Services within Government.

At DataCentred we have chosen to work with OpenStack because we believe in open APIs and data portability under user control. So if you want to keep control of where your data is and how it is used, we think that using OpenStack, and providers who offer full OpenStack API access, is an excellent idea. The alternative is replicating data across a number of siloed repositories with restrictions on access and interoperability.
