Sunday, 31 January 2016

How to stress test CouchDB with Gatling

In this blog post, I am going to show you how to use Gatling (http://gatling.io/) to test the performance of CouchDB's HTTP operations.

Gatling is an HTTP performance tool written in Scala on top of the Akka and Netty toolkits. Via Akka and Netty, Gatling can generate a large amount of load while making efficient use of threads (both libraries are built on non-blocking I/O). It is a JVM application, so you can run it on a variety of platforms without a problem.

Gatling provides you with a Domain Specific Language for writing simulations that generate load on (among other things) HTTP-based services such as CouchDB. The DSL is written in Scala and offers a fluent interface. It provides convenience mechanisms for triggering HTTP methods (GET, PUT, POST, DELETE), as well as for simulating wait times and the "arrival" times of users.

I've put together a sample Maven project which illustrates one approach for running Gatling stress tests via the command line. Have a look at: https://github.com/kafecho/gatling-couchdb-simulations 
You only need Apache Maven and a JDK to run the sample; Gatling embeds a version of the Scala compiler, so you don't need to install one yourself.

The sample project contains a single simulation which does the following against a CouchDB server:

  • Each synthetic user creates a database
  • Each synthetic user proceeds to add a number of JSON docs to that database
  • Once documents have been added, each synthetic user deletes the database
You will find further instructions on the GitHub page on how to run the simulation; a minimal sketch of what such a simulation can look like is shown below.
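For illustration, here is a hedged sketch of a simulation that follows the same create/populate/delete pattern, written against the Gatling 2.x DSL. It is not the simulation from the repository; the class name, database naming scheme, document body and injection profile are all assumptions.

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Hedged sketch of a CouchDB lifecycle simulation (not the one in the repo).
class CouchDbSimulation extends Simulation {

  val httpConf = http.baseURL("http://localhost:5984")

  // Each synthetic user gets its own database name.
  val dbNames = Iterator.continually(Map("dbName" -> ("db_" + java.util.UUID.randomUUID.toString)))

  val scn = scenario("CouchDB create / populate / delete")
    .feed(dbNames)
    .exec(http("create database").put("/${dbName}").check(status.is(201)))
    .repeat(100) {
      exec(http("add document")
        .post("/${dbName}")
        .body(StringBody("""{"value": 42}""")).asJSON
        .check(status.is(201)))
    }
    .exec(http("delete database").delete("/${dbName}").check(status.is(200)))

  setUp(scn.inject(rampUsers(10) over (30 seconds))).protocols(httpConf)
}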

Gatling produces nice charts with details about response times and throughput (even for individual HTTP operations), so it gives you an idea of what the performance is and where the bottlenecks are.

Previously, I had used similar simulations to stress-test BigCouch clusters, and the code can be reused out of the box to test CouchDB 2.0 clusters. I've also shared some Ansible recipes to make the deployment of CouchDB 2.0 a bit easier. Have a look at https://github.com/kafecho/ansible-couchdb2





Friday, 27 November 2015

Ansible: the installer pattern

In my day job, I write software systems that help media companies (think BBC, ESPN) put together content (think news) that will be broadcast to air.

Over the years, the software has morphed to become ever more sophisticated, with ever more moving parts. We are no longer configuring a single executable that runs on a Windows PC; we are producing systems that have to be correctly installed, configured and monitored.

I've been investing a lot of effort in Ansible (automate all the things!) and today we have a rather extensive set of roles for deploying pretty much everything we need, from JVMs all the way to HA clusters and monitoring tools.

A newsroom environment (as made popular by Aaron Sorkin's The Newsroom TV series) is a fairly controlled space. During a visit to the BBC production facility back in May this year, I had to put my laptop's power supply through a set of tests (in case it caused interference with the broadcast signals). Likewise, there were no microwave ovens in sight in the lunch area (again, because they could cause issues and fry live radio transmissions). To add to the fun, machines in this production environment tend to be locked down; they are often configured to block access to the outside world (the Internet). So your vanilla Ansible roles which download stuff from the Internet won't fly.

The solution we've settled on is what I call the Ansible installer pattern. In this instance, the installer is a Linux node (I use CentOS) which contains pre-cached dependencies. The installer lives within the customer's intranet and runs Ansible alongside an Apache web server. At deployment time, the installer uses roles which instruct the target nodes to go and fetch rpms and other files from its web server (the so-called phone-home pattern).

I've experimented with various approaches to build the installer (all with Ansible):

  • as a complete OS image (built with Vagrant / VirtualBox)
  • as a docker container
  • as a self-contained tar file that can be unpacked and used on any vanilla CentOS machine.

We eventually settled on the last option. In the remainder of this post, I will describe how it works.

To build the installer tar file, we launch a vanilla CentOS virtual machine (think of it as a golden installer) which we configure with Ansible. Ansible creates the cache folder and then runs a bunch of roles to cache various dependencies. In addition, Ansible fetches the packages needed to install Apache as well as the packages needed to install Ansible itself (so the system can be bootstrapped). At the end, we are left with a folder, a bunch of scripts and some Ansible playbooks used for deployment. The folder is then archived and compressed using tar. The tar archive can be copied to any plain CentOS node, which can then be used as an installer. Building the tar file is fully automated using Jenkins and Ansible.

To make things more concrete, let's say you want to deploy the HAProxy load balancer. With this approach, you essentially have to write two roles. The purpose of the first role is to cache all the dependencies that are required to install HAProxy. This role is executed when the installer tar file is built. It looks something like this:
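(A hedged reconstruction, not the actual role: it assumes yumdownloader from yum-utils is available on the golden installer, and the role layout and task names are illustrative.)

# roles/haproxy_cache/tasks/main.yml (illustrative)
- name: Ensure the cache folder exists
  file: path={{ rpms_root }} state=directory

- name: Download HAProxy and all of its dependencies into the cache
  command: yumdownloader --resolve --destdir={{ rpms_root }} haproxy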



The variable rpms_root points to a fixed location (i.e. the cache).

At deployment time, we use a different role that installs HAProxy from the rpms that have been cached. The role looks just like what you would write if the node had access to the Internet, except it uses a special-purpose yum repository (here called ansible).
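A minimal sketch of such an install role, assuming the standard yum and service modules (task names are illustrative):

# roles/haproxy/tasks/main.yml (illustrative)
- name: Install HAProxy from the pre-cached rpms
  yum: name=haproxy state=present enablerepo=ansible

- name: Ensure HAProxy is started and enabled at boot
  service: name=haproxy state=started enabled=yes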



At deployment time, the installer adds the special ansible yum repository on all the target nodes. The yum repo is derived from a template that looks something like this:
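(A hedged reconstruction; the repository id comes from the text, but the file name and baseurl path are assumptions.)

# templates/ansible.repo.j2 (illustrative)
[ansible]
name=Ansible installer repository
baseurl=http://{{ deployment_node_ip }}/rpms
enabled=1
gpgcheck=0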



In other words, at deployment time, each node knows where to go to find the RPMs to install; deployment_node_ip points to the IP address of the installer node.

I've tested this approach with CentOS nodes, but it should work (with some changes, obviously; pick your favourite package manager) on other Linux distributions. I've also tested caching and deploying Windows applications. It works quite well, but extra steps are required to download all the dependencies needed to set up WinRM connections.

An obvious benefit of the installer pattern is that it makes deployments quite a bit faster, since everything is pre-cached. If you have to install a large package on a set of nodes (for example, a SolrCloud cluster), then the speedups can be quite substantial; if you have to deploy a full stack (from app to monitoring, on multiple nodes), even more so. From start to finish, our complex stack takes about 7 minutes to deploy and configure. The obvious downside of the approach is that it forces you to split your deployment into two stages (cache and install). Testing also requires a bit more rigour, as you really need to ensure that your target machines are cut off from the Internet (otherwise you might be testing the wrong thing). For that, I use Vagrant to launch clusters on private networks configured with no Internet access.

Most people nowadays are lucky enough to host things in the cloud or on their own infrastructure. If you are not, and if tightly controlled environments are an issue for you, I hope you've found this post useful.

As always, comments and questions are welcome.

Guillaume.

Wednesday, 7 January 2015

Ansible and Windows

I've been tasked with looking at various configuration management solutions to set up and manage Windows machines.

As I already use Ansible in quite a few places on Linux, I decided to try the relatively new Windows support (as of version 1.7). I set myself a fairly simple task: installing and configuring CouchDB from scratch. I later added the ability to install Microsoft Silverlight.

Setting up the Windows environment was fairly straightforward. I just followed the instructions from the Ansible website and was able to set up my Windows 7 box with WinRM.

Ansible has managed to keep the same operating model on both Windows and Linux: it's all agent-less, you can invoke ad-hoc commands on both Windows and Linux, and you can even write playbooks where some sections run on Windows and others on Linux. The list of Windows-specific modules is still a bit small (but hopefully growing).

For my particular job, I needed to tweak the CouchDB configuration once it had been installed. Normally, on Linux, I would use the template module, which is not yet available on Windows. I eventually settled on a hybrid option, where I apply the template locally on the Linux 'master' node and then instruct the Windows box to fetch the templated content via HTTP (I run an HTTP server on my Ansible 'master' node).

Here is roughly what the playbook looks like (a hedged sketch follows the next paragraph).

I do find the Windows philosophy for installing software very weird (it mostly assumes a GUI and someone clicking on things to proceed), but thankfully, I was able to work out the steps to install things silently.
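A minimal sketch of that kind of playbook, assuming the Ansible 1.7-era Windows modules (win_get_url and raw). The installer URL, file names, destination paths and the ansible_master_ip variable are illustrative, not the actual playbook:

# windows-couchdb.yml (illustrative)
- hosts: windows
  tasks:
    - name: Download the CouchDB installer from the master's web server
      win_get_url:
        url: 'http://{{ ansible_master_ip }}/files/couchdb-setup.exe'
        dest: 'C:\temp\couchdb-setup.exe'

    - name: Install CouchDB silently
      raw: 'C:\temp\couchdb-setup.exe /S'

    - name: Render the CouchDB configuration on the Linux master node
      local_action: template src=local.ini.j2 dest=/var/www/html/couchdb/local.ini

    - name: Fetch the templated configuration over HTTP
      win_get_url:
        url: 'http://{{ ansible_master_ip }}/couchdb/local.ini'
        dest: 'C:\CouchDB\etc\couchdb\local.ini'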

Overall, I am quite happy with the result. The next version of Ansible (1.9) should have improved Windows support with extra modules for doing file copy and dealing with templates.

Good job to the Ansible team for keeping it very simple.

Tuesday, 15 July 2014

Continuous delivery with Ansible and Jenkins

For the past few weeks, I've been working on a Continuous Delivery pipeline (a la Jez Humble) for a software based broadcast system that we are building at Quantel.

Ansible has quickly become a crucial piece of technology to make this possible. The pipeline is far from trivial: compiling code and running unit tests is fairly easy, but things get complicated very quickly when it comes to running integration tests, acceptance tests and performance tests. Tests need the right environment to run in (the stack is essentially a set of distributed systems with 3rd-party dependencies like CouchDB or Solr). We also want to deploy to Linux clusters and to Windows machines, and for demos and manual testing purposes, we want to build Vagrant images or Docker containers so that people can try the software on their own machine.

From day 1, I decided to use Ansible to automate everything that could be automated. The first rule was as follows: if I had to do something more than once, it would have to be automated. The second rule was as follows: if possible, don't SSH directly into the host, but instead use Ansible to run the SSH commands. That way, whatever I do is always captured in a playbook which can be reliably replayed ad infinitum.

So far, I am managing the following things with Ansible.

We use Jenkins as the build server, and setting up Jenkins from scratch is taken care of by an Ansible playbook. I have a fairly simple Ansible script which installs precise versions of the JDK, Maven, Git and so on: all the things you need to build. User accounts and SSH permissions are also taken care of. If for some reason the Jenkins server were to die, I could easily rebuild it.
 
I've also written an Ansible playbook to configure the Linux Jenkins slaves. It takes care of creating the correct users, managing SSH keys, and so on. So far I have set up about 10 Jenkins slaves and it was very easy.
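A minimal sketch of what a slave play can look like, assuming Ansible 1.x syntax; the user name, key path and JDK package are illustrative, not the actual playbook:

# jenkins-slaves.yml (illustrative)
- hosts: jenkins_slaves
  sudo: yes
  tasks:
    - name: Create the jenkins user
      user: name=jenkins state=present

    - name: Authorise the Jenkins master's SSH key
      authorized_key: user=jenkins key="{{ lookup('file', 'files/jenkins_master.pub') }}"

    - name: Install a JDK so the slave agent can run
      yum: name=java-1.7.0-openjdk state=present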

When the code is built, I use Ansible to deploy a given version of the stack on a virtual machine which is itself managed by Ansible (via Vagrant). Ansible takes care of installing Java and the right dependencies so that integration or performance tests can run. A similar version of the playbook can be used to deploy the software stack to a CentOS staging environment.

Some of the tests that we run have complicated dependencies. For example, in one of the tests, we process RTP video streams, extract subtitles and timecode information, and assert that they are present and correct. For that, we use the Multicat tool, whose setup is rather straightforward to turn into a playbook.

Steve Jobs could well have said "there is a playbook for that". The ease with which you can turn written instructions into a working playbook that is straightforward to read and maintain is one of the things I like best about Ansible.

The best thing about all this is that I manage the whole setup from my house, controlling servers 9000 kilometres away.

Monday, 7 April 2014

Introduction to Spray.io at JoziHub.

Last night (07/04/2014), I gave an introduction to the Spray framework at the Scala Johannesburg User Group. For this event, we returned to JoziHub, where Scala-Jozi started in July 2012.

The presentation was half slides and half code samples.

The event last night was the busiest meetup we've had so far, with about 20 people. The talk was well received and I got plenty of questions. I especially liked the ones I could not answer, since that means I have to do more homework to make the presentation even better.

When I started Scala-Jozi back in July 2012, there was not an awful lot of Scala going on in Joburg (or in SA). Fast forward to yesterday, and things have changed quite a bit with more and more people using Scala in their day to day job, including a top secret startup using Scala, Akka and Spray. 

Many thanks again to JoziHub for letting us use their facility. We all really like the space and the facilities it offers (very nice projector, super fast Internet, easy access). 

Next month, Andreas will be giving an Introduction to Functional Programming, and we will be looking at how languages such as Haskell approach FP.



Tuesday, 14 January 2014

My experience with LittleBits

After reading quite a few good things about the tech (and following some recommendations from Twitter), I purchased a LittleBits Starter Kit directly from the LittleBits web site. The item was delivered via UPS very quickly (about 3 days). Besides the shipping costs, I also had to pay some import tax when the item was delivered to my home in Johannesburg.

I bought the kit for my son, who is 5 years old, in order to teach him the basics of electronics. The box recommends a minimum age of 8, but we had a go anyway. The kit is extremely easy to use, and the concept of components that you snap together using magnets is pure brilliance.

My son had fun playing with the components one by one (the buzzer in particular made him giggle a lot, the light-controlled buzzer even more). The box comes with a booklet which shows you how to make electronics-powered crafts. Within the first day, we had built a torch with a switch together. Later on, this was improved and turned into a torch which lights itself up at night, with a buzzer which indicates how dark it is. My 3-year-old also had a go and was able to snap components together (it seemed a bit random, but the end result worked). Again, they were quite keen on the buzzer.

The motor component has an attachment that is LEGO compatible, so you can also make simple moving LEGO models activated by light or by a switch (which means we can reuse the ton of blocks we have).

LittleBits is one of the things I wish I had had as a kid, and it could be very useful in classrooms to teach kids the basics of electronics. It is very easy to use.

The box is a bit pricey but given the sheer amount of stuff you can build with it, I think it is well worth it.

Here is a little circuit which reports how dark a room is. It is made of a power component, a light sensor and an LED array. Super easy and quick to put together.


Thursday, 24 October 2013

Using Akka for video editing.

I am super lucky to work for Quantel, a UK-based company which makes high-end video-editing hardware and software for post-production studios and TV broadcasters.

Our equipment has been used to edit many movies over the years (Avatar, Lord of the Rings) and is used by major broadcasters throughout the world (the BBC, BSkyB, Fox, etc.).

Quantel's broadcast systems roughly follow a client/server architecture. A server is a beefy machine which can ingest a large number of high-definition feeds very quickly and make them available, via speedy network connections, to video editing workstations. For reasons of speed and efficiency, the server is a blend of custom hardware (DSPs, FPGAs), a custom operating system, a custom network stack, and so on. The video editing workstations are mostly written in C++, again because they have to move and display things very quickly.

At Quantel, I work in the broadcast domain, mostly building JVM RESTful web services to help our video editing workstations with metadata level workflows (find video clips with certain properties, organise assets so they can be found).

Although a lot of the current stack uses custom hardware and proprietary software, there has been a trend in the industry to produce solutions based on off-the-shelf operating systems, industry-standard storage and standard video formats. To that end, Quantel has started a new product architecture called RevolutionQ (more information here) which is based around a video editing standard called AS02.

Unlike the existing server technology, the new stack relies heavily on Scala, Akka and the JVM for doing some of the video processing.

We are building Scamp, a video processing framework built on Akka, which works in a similar fashion to GStreamer but is entirely based on actors and runs on the JVM. We've used actors to implement video processing pipelines where each stage is a standalone media processing operation, such as extracting H.264 video access units, manipulating AAC audio blocks, AC3 data, timecode, metadata, and so on. Most of what we do does not require transcoding (I think this would be too costly); we mostly do "transwrapping", which is the process of extracting bits from one format and wrapping them in a different format.

Currently we are using Scamp to build a video recorder which can record a large number of satellite feeds with high-bitrate content (over 50 Mb/s) and make that content available for near-live video editing via HTTP. The recorder works in a similar way to a PVR. It receives a multiplexed signal (an MPEG-2 Transport Stream), which is a series of small packets. We use a chain of actors to re-combine packets into their corresponding tracks (video track, audio track) and actors to persist video and audio data to storage. The framework is quite flexible, as you essentially plug actors together, so you can further process the video and audio essence for long-term storage or re-streaming via HTTP or other protocols. For example, you could build a system which records a satellite feed and makes the content available straight away via HTTP Live Streaming or MPEG-DASH.
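To give a flavour of the approach, here is a hedged sketch of a tiny demultiplexing chain. This is not Scamp itself; the message type, the PID-to-track mapping and the actor names are illustrative.

import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.util.ByteString

// A transport stream packet reduced to the bare minimum for the sketch.
case class TsPacket(pid: Int, payload: ByteString)

// Persists the payload of one track (in a real pipeline: append to storage).
class TrackWriter(trackName: String) extends Actor {
  def receive = {
    case TsPacket(_, payload) =>
      println(s"$trackName: persisting ${payload.length} bytes")
  }
}

// Routes each packet to the actor in charge of its track, based on the PID.
class Demultiplexer(tracks: Map[Int, ActorRef]) extends Actor {
  def receive = {
    case packet @ TsPacket(pid, _) => tracks.get(pid).foreach(_ ! packet)
  }
}

object RecorderSketch extends App {
  val system = ActorSystem("recorder")
  val video  = system.actorOf(Props(new TrackWriter("video")), "video")
  val audio  = system.actorOf(Props(new TrackWriter("audio")), "audio")
  val demux  = system.actorOf(Props(new Demultiplexer(Map(0x100 -> video, 0x101 -> audio))), "demux")

  demux ! TsPacket(0x100, ByteString.fromArray(Array.fill[Byte](188)(0.toByte)))

  Thread.sleep(500)
  system.shutdown()
}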

But more importantly, the system is built to be non-blocking and purely reactive. As Akka takes care of threading and memory management, we don't have to worry about that, and the system scales up very well (we run on 24-core machines and Akka has no trouble spreading workloads around). The code is also largely immutable, thanks to the use of the ByteString structure. In my experience, immutability makes it very easy to reason about the code. If you've looked at the source code of existing C++ video frameworks like VLC, or at some of the Intel video processing libraries, you will find a lot of rather hairy threading / locking / shared-memory code which is rather hard to write properly.

I appreciate that we are not writing code which can run as fast as its C++ equivalent, but we can write the code faster, with the confidence that it will auto-scale because the framework takes care of that.

Akka has many interesting features. The most recent one I've been experimenting with is FSM, for representing state machines. There is an industry standard for modelling a processing job which captures video (think of it as if someone had standardised the protocol of a PVR). A job can be in different states and, based on its state, can perform certain actions. It turns out Akka FSM is a very elegant way of modelling this in very few lines of code.
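Here is a hedged sketch of how a capture job could be modelled with Akka FSM; the states, messages and transitions are illustrative, not the standard's actual state chart.

import akka.actor.{ActorLogging, ActorSystem, FSM, Props}

sealed trait JobState
case object Idle      extends JobState
case object Capturing extends JobState
case object Completed extends JobState

sealed trait JobCommand
case object Start extends JobCommand
case object Stop  extends JobCommand

// A capture job with no internal data (hence Unit), driven by Start/Stop messages.
class CaptureJob extends FSM[JobState, Unit] with ActorLogging {

  startWith(Idle, ())

  when(Idle) {
    case Event(Start, _) => goto(Capturing)
  }

  when(Capturing) {
    case Event(Stop, _) => goto(Completed)
  }

  when(Completed) {
    case Event(_, _) => stay()
  }

  onTransition {
    case Idle -> Capturing      => log.info("capture started")
    case Capturing -> Completed => log.info("capture stopped")
  }

  initialize
}

object CaptureJobDemo extends App {
  val system = ActorSystem("jobs")
  val job = system.actorOf(Props[CaptureJob], "job")
  job ! Start
  job ! Stop

  Thread.sleep(500)
  system.shutdown()
}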

Another interesting feature is the new Akka I/O layer and the use of pipelines for encoding and decoding. A lot of transwrapping is just that: extracting binary data from an envelope, combining it with more binary data into a different structure, and so on. For example, when you demultiplex an RTP stream, you build the following chain: UDP -> RTP -> TS packets -> PES packets. The new Akka pipeline approach (which comes from Spray) makes it very easy to build typed, composable and CPU-efficient chains like those.

And finally, I am also looking at using Spray (soon to be Akka HTTP) to build web interfaces, so potentially we will have a RESTful way of controlling the recording of a lot of streams at the same time.

Towards a giant PVR.