tag:blogger.com,1999:blog-19368596836999972382024-03-05T21:32:16.384-08:00KafèchoAnonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.comBlogger31125tag:blogger.com,1999:blog-1936859683699997238.post-47809603895777526902016-01-31T23:05:00.000-08:002016-01-31T23:05:41.549-08:00How to stress test CouchDB with GatlingIn this blog post, I am going to show you how to use Gatling (http://gatling.io/) to test the performance of CouchDB's HTTP operations.<br />
<br />
Gatling is an HTTP performance tool written in <a href="http://www.scala-lang.org/">Scala</a> with the <a href="http://akka.io/">Akka</a> and <a href="http://netty.io/">Netty</a> toolkits. Via Akka and Netty, Gatling can generate a large amount of load while making efficient use of threads (both libraries are based on non-blocking I/O). It is a JVM application, so you can run it on a variety of platforms without a problem.<br />
<br />
Gatling provides you with a Domain Specific Language for writing simulations to generate load on (among other things) HTTP-based services such as CouchDB. The DSL is written in Scala and provides a fluent interface. It provides convenience mechanisms for triggering HTTP methods (GET, PUT, POST, DELETE), as well as for simulating wait times and "arrival" times of users.<br />
<br />
I've put together a sample Maven project which illustrates one approach for running Gatling stress tests via the command line. Have a look at: <a href="https://github.com/kafecho/gatling-couchdb-simulations">https://github.com/kafecho/gatling-couchdb-simulations</a>
<br />
You only need Apache Maven and a JDK to run the sample. Gatling embeds a version of the Scala compiler, so you don't need one installed.<br />
<br />
The sample project contains a single simulation which does the following against a CouchDB server:
<br />
<br />
<ul>
<li>Each synthetic user creates a database</li>
<li>Each synthetic user proceeds to add a number of JSON docs to that database</li>
<li>Once documents have been added, each synthetic user deletes the database</li>
</ul>
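The steps above map almost one-to-one onto Gatling's Scala DSL. The sketch below is illustrative rather than a copy of the repository's code: the scenario name, database naming scheme, document count, injection profile and base URL are all invented, and the status checks assume CouchDB's usual response codes (201 on create, 200 on delete).

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CouchDBSimulation extends Simulation {

  val httpConf = http.baseURL("http://localhost:5984")

  // Feed each synthetic user its own database name.
  val dbNames = Iterator.from(0).map(i => Map("db" -> s"loadtest_$i"))

  val scn = scenario("CouchDB create/insert/delete")
    .feed(dbNames)
    .exec(http("create database").put("/${db}").check(status.is(201)))
    .repeat(100) {
      exec(http("add document")
        .post("/${db}")
        .body(StringBody("""{"answer": 42}""")).asJSON
        .check(status.is(201)))
    }
    .exec(http("delete database").delete("/${db}").check(status.is(200)))

  setUp(scn.inject(rampUsers(50) over (30 seconds))).protocols(httpConf)
}
```

Running this requires the Gatling runtime on the classpath (which is what the Maven setup in the sample project takes care of).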
You will find further instructions on the GitHub page on how to run the simulation.<br />
<br />
Gatling produces nice charts with details about response times and throughput (even for specific HTTP operations), so it gives you an idea of where the performance bottlenecks are.<br />
<br />
Here are some examples:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjU97rgkEWFnJw-JwCE6PChiwdt_WGq2E6CM8ox2EfzYl0QceKF0fBlacj10CKmIHsjbblpRronBzJ9pQNwhd14b-6s2EjpLm9plNUKlH6juf-usPE40GVC8yfnBApzwGrdH66nnSxDe98T/s1600/Screen+Shot+2016-02-01+at+08.53.24.png" imageanchor="1"></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPOnQGwmDO4rFWDFZycLkM2OiBvDPFFaE6iiHyQ_6GrdZa9TGIDApX5tJdguyZyNR8VNi-wpqfKWtWu5G91A65hEam96UvDRmxlB5FVjMAG8XG9PAZeSyxLG543rGZ5_OG9tVlT51E7-hG/s1600/Screen+Shot+2016-02-01+at+08.53.42.png" imageanchor="1"></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjksYaIhVrK677tJ5H0El1NpLh5MStCObXQRX0a24V2-vSd04pEsRMNePec8dns4fVYpSK-UBwInpF3ZW4D-_sqz8G7xgx-dCeV2k_DtA21SOTVkEg58Dl3oMcrPcoWDuXnxQFwO330SNtL/s1600/Screen+Shot+2016-02-01+at+08.54.01.png" imageanchor="1"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjksYaIhVrK677tJ5H0El1NpLh5MStCObXQRX0a24V2-vSd04pEsRMNePec8dns4fVYpSK-UBwInpF3ZW4D-_sqz8G7xgx-dCeV2k_DtA21SOTVkEg58Dl3oMcrPcoWDuXnxQFwO330SNtL/s400/Screen+Shot+2016-02-01+at+08.54.01.png" width="393" /></a><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPOnQGwmDO4rFWDFZycLkM2OiBvDPFFaE6iiHyQ_6GrdZa9TGIDApX5tJdguyZyNR8VNi-wpqfKWtWu5G91A65hEam96UvDRmxlB5FVjMAG8XG9PAZeSyxLG543rGZ5_OG9tVlT51E7-hG/s400/Screen+Shot+2016-02-01+at+08.53.42.png" width="393" /><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjU97rgkEWFnJw-JwCE6PChiwdt_WGq2E6CM8ox2EfzYl0QceKF0fBlacj10CKmIHsjbblpRronBzJ9pQNwhd14b-6s2EjpLm9plNUKlH6juf-usPE40GVC8yfnBApzwGrdH66nnSxDe98T/s400/Screen+Shot+2016-02-01+at+08.53.24.png" width="393" /><br />
Previously, I had used similar simulations to stress-test BigCouch clusters and the code can be reused out of the box to test CouchDB 2.0 clusters. I've also shared some Ansible recipes to make the deployment of CouchDB 2.0 a bit easier. Have a look at <a href="https://github.com/kafecho/ansible-couchdb2">https://github.com/kafecho/ansible-couchdb2</a><br />
<br />
<br />
<br />
<br />
<br />
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com2tag:blogger.com,1999:blog-1936859683699997238.post-35055234968430258802015-11-27T01:08:00.002-08:002015-11-27T01:08:28.502-08:00Ansible: the installer pattern<h2>
Ansible: the installer pattern.</h2>
<br />
In my day job, I write software systems that help media companies (think BBC, ESPN) put together content (think news) that will be broadcast to air.<br />
<br />
Over the years, the software has morphed to become ever more sophisticated, with ever more moving parts. We are no longer configuring a single executable that runs on a Windows PC; we are producing systems that have to be correctly installed, configured and monitored.<br />
<br />
I've been investing a lot of effort in automate-all-the-things Ansible, and today we have a rather extensive set of roles for deploying pretty much all we need, from JVMs all the way to HA clusters and monitoring tools.<br />
<br />
A newsroom environment (as made popular by Aaron Sorkin's The Newsroom TV series) is a fairly controlled space. During a visit to the BBC production facility back in May this year, I had to put my laptop's power supply through a set of tests (in case it caused interference with the broadcast signals).
Likewise, there were no microwave ovens in sight in the lunch area (again, because they could cause issues and fry live radio transmissions). To add to the fun, in this production environment, machines tend to be locked down; they are often configured to block access to the outside world (the Internet). So your vanilla Ansible roles which download stuff from the Internet won't fly.<br />
<br />
The solution we've settled on is what I call the <b>Ansible installer pattern</b>.
In this instance, the installer is a Linux node (I use CentOS) which contains pre-cached dependencies. The installer lives within the customer's intranet and runs Ansible alongside an Apache web server. At deployment time, the installer uses roles which instruct the target nodes to go and fetch RPMs and other files from its web server (the so-called <b>phone home pattern</b>).<br />
<br />
I've experimented with various approaches to build the installer (all with Ansible):<br />
<br />
<ul>
<li>as a complete OS image (built with Vagrant / VirtualBox)</li>
<li>as a docker container</li>
<li>as a self-contained tar file that can be unpacked and used on any vanilla CentOS machine.</li>
</ul>
<br />We eventually settled on the last option. In the remainder of this post, I will describe how it works.
<br />
<br />
To build the installer tar file, we launch a vanilla CentOS virtual machine (think of it as a golden installer) which we configure with Ansible. Ansible creates the cache folder and then runs a bunch of roles to cache various dependencies. In addition, Ansible fetches the packages needed to install Apache and the packages needed to install Ansible itself (so the system can be bootstrapped). At the end, we are left with a folder, a bunch of scripts and some Ansible playbooks used for deployment. The folder is then archived and zipped using tar. The tar archive can be copied to any plain CentOS node, which can then be used as an installer. Building the tar file is fully automated using Jenkins and Ansible.<br />
<br />
To make things more concrete, let's say you want to deploy the HAProxy load balancer. With this approach, you essentially have to write two roles. The purpose of the first role is to cache all the dependencies that are required to install HAProxy. This role is executed when the installer tar file is built. It looks like this:
<br />
<br />
<script src="https://gist.github.com/kafecho/4480707ab061dce352c2.js"></script>
<br />
<br />
The variable rpms_root points to a fixed location (i.e. the cache).
<br />
<br />
At deployment time, we use a different role that installs HAProxy from the RPMs that have been cached.
The role looks just like what you would write if the node had access to the Internet, except it uses a special-purpose yum repository (here called ansible).
<br />
<br />
<script src="https://gist.github.com/kafecho/2cb2270edf099816221b.js"></script>
<br />
<br />
At deployment time, the installer adds the special ansible yum repository on all the target nodes. The yum repo is derived from a template that looks like this:
<br />
<br />
<script src="https://gist.github.com/kafecho/98a067c0bf3c6a08f92a.js"></script>
<br />
<br />
In other words, at deployment time, each node knows where to go to find the RPMs to install. The deployment_node_ip variable points to the IP address of the installer node.
<br />
<br />
I've tested this approach with CentOS nodes, but it should work (with some changes obviously; pick your favourite package manager) on other Linux distributions. I've also tested caching and deploying Windows applications. It works quite well, but extra steps are required to download all the dependencies needed to set up WinRM connections.<br />
<br />
An obvious benefit of the installer pattern is that it makes deployments quite a bit faster, since everything is pre-cached. If you have to install a large package on a set of nodes (for example, a SolrCloud cluster), then the speedups can be quite substantial. If you have to deploy a full stack (from app to monitoring on multiple nodes), then even more so. From start to finish, our complex stack takes about 7 minutes to deploy and configure.
The obvious downside of the approach is that it forces you to split your deployment into two stages (cache and install). Testing also requires a bit more rigour, as you really need to ensure that your target machines are cut off from the Internet (otherwise you might be testing the wrong thing). For that, I use Vagrant to launch clusters on private networks configured with no Internet access.
<br />
<br />
Most people nowadays are lucky enough to host things in the Cloud or on their own infrastructure. If you are not, and if tightly controlled environments are an issue for you, I am hoping that you've found this post useful.
<br />
<br />
As always, comments and questions are welcome.
<br />
<br />
Guillaume.Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-54377582104993855632015-01-07T23:54:00.000-08:002015-01-07T23:54:36.918-08:00Ansible and WindowsI've been tasked to look at various configuration management solutions to set up and manage Windows machines.<br />
<br />
As I already use Ansible in quite a few places on Linux, I decided to try the relatively new <a href="http://docs.ansible.com/intro_windows.html">Windows support</a> (as of 1.7).
I set myself a fairly simple task: installing and configuring CouchDB from scratch. I later added the ability to install Microsoft Silverlight.<br />
<br />
Setting up the Windows environment was fairly straightforward. I just followed the <a href="http://docs.ansible.com/intro_windows.html#windows-system-prep">instructions</a> from the Ansible website and was able to set up my Windows 7 box with WinRM.<br />
<br />
Ansible has managed to keep the same operating model on both Windows and Linux: it's all agent-less, you can invoke ad-hoc commands on both Windows and Linux, and you can even write playbooks where some sections run on Windows while others run on Linux.
The list of Windows-specific modules is a bit small (but hopefully growing).<br />
<br />
For my particular job, I needed to tweak the CouchDB configuration once it is installed. Normally on Linux, I would use the template module, which is not yet available on Windows. I eventually settled on a hybrid option, where I apply the template locally on the Linux 'master' node and then instruct the Windows box to fetch the templated content via HTTP (I run an HTTP server on my Ansible 'master' node).<br />
<br />
Here is what the Playbook looks like:
<script src="https://gist.github.com/kafecho/ae9f6cf1540647dd5cc5.js"></script>
I do find the Windows philosophy for installing software very weird (it mostly assumes a GUI and someone clicking on things to proceed), but thankfully, I was able to work out the steps to install things silently.
<br />
<br />
Overall, I am quite happy with the result. The next version of Ansible (1.9) should have improved Windows support, with extra modules for copying files and dealing with templates.<br />
<br />
Good job to the Ansible team for keeping it very simple.<br />
<br />Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com7tag:blogger.com,1999:blog-1936859683699997238.post-502554136404589882014-07-15T23:51:00.001-07:002014-07-15T23:51:34.960-07:00Continuous delivery with Ansible and JenkinsFor the past few weeks, I've been working on a Continuous Delivery pipeline (a la <a href="http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912">Jez Humble</a>) for a software based broadcast system that we are building at <a href="http://www.quantel.com/">Quantel</a>.<div>
<br /></div>
<div>
<a href="http://www.ansible.com/">Ansible</a> has quickly become a crucial piece of technology to make this possible. The pipeline is far from trivial. Compiling code and running unit tests is fairly easy, but things start to get complicated very quickly when it comes to running integration tests, acceptance tests and performance tests. Tests need the right environment to run in (the stack is essentially a set of distributed systems with 3rd party dependencies like CouchDB or Solr). We also want to deploy to Linux clusters and to Windows machines, and for demo and manual testing purposes, we want to build Vagrant images or Docker containers so that people can try the software on their own machine.</div>
<div>
<br /></div>
<div>
From day 1, I decided to use Ansible to automate everything that could be automated. The 1st rule was as follows: if I had to do something more than once, it would have to be automated. The 2nd rule was as follows: if possible, don't SSH directly into the host, but instead use Ansible to run SSH commands. That way, whatever I do is always captured in a playbook which can be reliably replayed <b>ad infinitum</b>.</div>
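As an illustration of the second rule, a one-off check that you would normally run over raw SSH can go through Ansible's ad-hoc mode instead (the host pattern and inventory file names below are made up for the example):

```shell
# Run a one-off command on every Jenkins slave via Ansible rather than raw SSH.
ansible jenkins-slaves -i inventory.ini -m shell -a "uptime"

# Once a one-off command proves useful, promote it to a task in a playbook
# so it is captured and replayable.
ansible-playbook -i inventory.ini site.yml --limit jenkins-slaves
```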
<div>
<br /></div>
<div>
So far, I am managing the following things with Ansible.</div>
<div>
<br /><div>
We use Jenkins as the build server, and setting up Jenkins from scratch is taken care of by an Ansible playbook. I have a fairly simple Ansible script which installs precise versions of the JDK, Maven, Git, etc, etc.. All the things you need to build. User accounts and SSH permissions are also taken care of. If for some reason the Jenkins server were to die, I could easily rebuild it. </div>
<div>
</div>
<div>
I've also written an Ansible playbook to configure Linux Jenkins slaves. This takes care of creating the correct users, managing SSH keys, etc... So far I have set up about 10 Jenkins slaves and it was very easy.</div>
<div>
<br /></div>
<div>
When the code is built, I use Ansible to deploy a given version of the stack on a virtual machine which is itself managed by Ansible (via Vagrant). Ansible takes care of installing Java and the right dependencies so that integration or performance tests can run. A similar version of the playbook can be used to deploy the software stack to a CentOS staging environment.</div>
<div>
<br /></div>
<div>
Some of the tests that we run have complicated dependencies. For example, in one of the tests, we process RTP video streams, extract subtitles and timecode information, and assert that they are present and correct. For that, we use the <a href="http://www.videolan.org/projects/multicat.html">Multicat</a> tool, which is rather straightforward to turn into a playbook.</div>
<div>
<br /></div>
<div>
Steve Jobs could well have said "<b>there is a playbook for that</b>". The ease with which you can turn written instructions into a working playbook that is straightforward to read and maintain is one of the things I like best about Ansible.</div>
<div>
<br /></div>
<div>
The best thing about all this is that I manage this entire setup from my house, controlling servers 9000 kilometres away. </div>
<div>
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-39666574610289352872014-04-07T23:12:00.001-07:002014-04-07T23:19:37.456-07:00Introduction to Spray.io at JoziHub.Last night (07/04/2014), I gave an introduction to the <a href="http://www.spray.io/">Spray</a> framework to the <a href="http://www.meetup.com/Scala-Jozi/events/164823812/">Scala Johannesburg User Group</a>. For this event, we returned to JoziHub where Scala-Jozi started in July 2012.<br />
<br />
The presentation was half <a href="http://kafecho.github.io/presentations/introduction-to-spray/">slides</a> and half <a href="https://github.com/kafecho/introduction-to-spray">code samples</a>.<br />
<div>
<br /></div>
<div>
The event last night was the busiest meetup we've had so far, with about 20 people. The talk was well received and I had plenty of questions. I especially liked the ones I could not answer, since that means I have to do more homework to make the presentation even better. </div>
<div>
<br /></div>
<div>
When I started Scala-Jozi back in July 2012, there was not an awful lot of Scala going on in Joburg (or in SA). Fast forward to yesterday, and things have changed quite a bit, with more and more people using Scala in their day-to-day jobs, including a top secret startup using Scala, Akka and Spray. </div>
<div>
<br /></div>
<div>
Many thanks again to JoziHub for letting us use their facility. We all really like the space and the facilities it offers (very nice projector, super fast Internet, easy access). </div>
<div>
<br /></div>
<div>
Next month, Andreas will be giving an <a href="http://www.meetup.com/Scala-Jozi/events/170835742/">Introduction to Functional Programming</a>, and we will be looking at how languages such as Haskell approach FP.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-28847363738092605052014-01-14T22:35:00.001-08:002014-01-14T22:35:34.312-08:00My experience with LittleBits<span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">After reading quite a few good things about the tech (and following some recommendations from Twitter), I purchased a <a href="http://littlebits.cc/">LittleBits</a> Starter Kit directly from the LittleBits web site. The item was delivered via UPS very quickly (about 3 days). Besides the shipping costs, I also had to pay some import tax when the item was delivered to my home in Johannesburg.</span><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">I bought the kit for my son who is 5 years old in order to teach him the basics of electronics. The box recommends a minimum age of 8, but we had a go anyway. The kit is extremely easy to use, and the concept of components that you snap together using magnets is pure brilliance.</span><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">My son had fun playing with the components one by one (the buzzer in particular made him giggle a lot, the light-controlled buzzer even more). 
The box comes with a booklet which allows you to make electronics-powered craft. Within the 1st day, we had built a torch with a switch together. Later on, this was improved and turned into a torch which can light itself up at night, with a buzzer which indicates how dark it is. My 3-year-old also had a go and was able to snap together components (it seemed a bit random, but the end result worked). Again, they were quite keen on the buzzer. </span><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">The motor component has an attachment that is LEGO-compatible, so you can also make simple moving LEGO models activated by light or by a switch (so we can reuse the ton of blocks we have). </span><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">LittleBits is one of the things I wish I had as a kid, and could be very useful in classrooms to teach kids the basics of electronics. 
It is very easy to use.</span><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><br style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;" /><span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">The box is a bit pricey, but given the sheer amount of stuff you can build with it, I think it is well worth it.</span><br />
<span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>
<span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;">Here is a little circuit which reports how dark a room is. It is made of a power component, a light sensor and an LED array. Super easy and quick to put together.</span><br />
<span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoTnc4O9GkwW-KLWGtiWanbVE02vK4eMsM_dnHjgLVS96lhuE_C0PUmBdliyoBccMZrdlSx0VZDXP1wSG9vEJVdvKIN9D3adRzPKN5oQDXQJZQenWcsG_eBsp37MKRwFz8bxqr2NUV7SYb/s1600/P1153622.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoTnc4O9GkwW-KLWGtiWanbVE02vK4eMsM_dnHjgLVS96lhuE_C0PUmBdliyoBccMZrdlSx0VZDXP1wSG9vEJVdvKIN9D3adRzPKN5oQDXQJZQenWcsG_eBsp37MKRwFz8bxqr2NUV7SYb/s400/P1153622.JPG" width="400" /></a></div>
<span style="font-family: 'lucida grande', 'Lucida Sans Unicode', tahoma, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-45460314704292150152013-10-24T00:42:00.000-07:002013-10-24T00:42:09.561-07:00Using Akka for video editing.I am super lucky to work for <a href="http://www.quantel.com/">Quantel</a> a UK-based company which makes high-end video-editing hardware and software for post-production studios and TV broadcasters.<br />
<br />
Our equipment has been used to edit many movies over the years (<a href="http://news.creativecow.net/story/863009">Avatar</a>, Lord of the Rings) and is used by major broadcasters throughout the world (the <a href="http://broadcastengineering.com/products/bbc-deploys-quantel-news-editing-playout-products-broadcasting-house-west-1-project">BBC</a>, BSkyB, Fox, etc..).<br />
<br />
Quantel's broadcast systems roughly follow a client/server architecture. A server is a beefy machine which can ingest a large number of high definition feeds very quickly and make them available via speedy network connections to video editing workstations. For reasons of speed and efficiency, the server is a blend of custom hardware (DSP, FPGA), a custom operating system, a custom network stack, etc, etc.. The video editing workstations are mostly C++, again because they have to move and display things very quickly.<br />
<br />
At Quantel, I work in the broadcast domain, mostly building JVM RESTful web services to help our video editing workstations with metadata level workflows (find video clips with certain properties, organise assets so they can be found).<br />
<br />
Although a lot of the current stack uses custom hardware and proprietary software, there has been a trend in the industry to produce solutions based on off-the-shelf operating systems, industry storage and standard video formats. To that end, Quantel has started a new product architecture called RevolutionQ (more information <a href="http://www.quantel.co.uk/page.php?u=130fd189b6b55469e56f8f1282150664">here</a>) which is based around a video editing standard called AS02.<br />
<br />
Unlike the existing Server technology, the new stack relies heavily on Scala, Akka and the JVM for doing some of the video processing.<br />
<br />
We are building Scamp, a video processing framework built on Akka which works in a similar fashion to <a href="http://gstreamer.freedesktop.org/">GStreamer</a>, but is entirely based on Actors and runs on the JVM. We've used Actors to implement video processing pipelines where each stage is a standalone media processing operation, such as extracting H264 video access units or manipulating AAC audio blocks, AC3 data, timecode, metadata, etc, etc.. Most of what we do does not require transcoding (I think this would be too costly); we mostly do "transwrapping", which is the process of extracting bits from one format and wrapping them in a different format.<br />
<br />
Currently we are using Scamp to build a video recorder which can record a large number of satellite feeds with high-bitrate content (over 50 Mb/s) and make that content available for near-live video editing via HTTP. The recorder works in a similar way to a PVR. It receives a multiplexed signal (MPEG2 Transport Stream) which is a series of small packets. We use a chain of actors to re-combine packets into their corresponding tracks (video track, audio track) and actors to persist video and audio data to storage. The framework is quite flexible as you essentially plug actors together, so you can further process the video and audio essence for long-term storage or re-streaming via HTTP or other protocols. So for example, you could build a system which records a satellite feed and makes the content available straight away via HTTP live streaming or MPEG DASH.<br />
<br />
But more importantly, the system is built to be non-blocking and purely reactive. As Akka takes care of threading and memory management, we don't have to worry about that, and the system scales up very well (we run on 24-core machines and Akka has no trouble spreading workloads around). The code is also largely immutable thanks to the use of the ByteString structure. In my experience, immutability makes it very easy to reason about the code. If you've looked at the source code of existing C++ video frameworks like VLC or some of the Intel video processing libraries, you will find a lot of rather hairy threading / locking / shared memory code that is hard to write properly.<br />
<br />
I appreciate we are not writing code which can run as fast as its C++ equivalent, but we can write the code faster with the confidence that it will auto-scale because the framework takes care of that.<br />
<br />
Akka has many interesting features. The most recent one I've been experimenting with is FSM, for representing state machines. There is an industry standard for modelling a processing job which captures video (think of it as if someone standardised the protocol of a PVR). A job can be in different states and, based on the state, can perform certain actions. It turns out Akka FSMs are a very elegant way of modelling this in very few lines of code.<br />
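To make the state machine idea concrete, here is a minimal, self-contained sketch of a capture-job state machine in plain Scala. The real implementation uses Akka FSM actors, and the state and event names below are invented for illustration rather than taken from the standard:

```scala
// A minimal sketch of a capture-job state machine in plain Scala.
// The real system uses Akka FSM actors; these state/event names are invented.
object CaptureJob {
  sealed trait State
  case object Queued    extends State
  case object Capturing extends State
  case object Completed extends State
  case object Failed    extends State

  sealed trait Event
  case object Start  extends Event
  case object Finish extends Event
  case object Error  extends Event

  // Legal transitions; any other (state, event) pair leaves the state unchanged.
  def transition(state: State, event: Event): State = (state, event) match {
    case (Queued,    Start)  => Capturing
    case (Capturing, Finish) => Completed
    case (Capturing, Error)  => Failed
    case (s, _)              => s
  }
}
```

Akka FSM gives you the same pattern-match-on-(state, event) shape, plus timers, actor mailboxes and supervision on top.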
<br />
Another interesting feature is the new Akka I/O layer and the use of pipelines for encoding and decoding. A lot of transwrapping is just that: extracting binary data from an envelope, combining it with more binary data into a different structure, and so on. For example, when you demultiplex an RTP stream, you build the following chain: UDP -> RTP -> TS packets -> PES packets. The new Akka pipeline approach (which comes from Spray) makes it very easy to build typed, composable and CPU-efficient chains like those.<br />
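The appeal of typed pipelines is that each stage only knows about its own input and output types, and stages compose like functions. Here is a stdlib-only sketch of the idea; the real Akka/Spray pipeline API is considerably richer, and the packet types and toy header sizes below are stand-ins:

```scala
object Pipeline {
  // A stage turns one unit of input into zero or more units of output
  // (a demultiplexer naturally produces several packets per datagram).
  type Stage[A, B] = A => List[B]

  // Stages compose, so UDP -> RTP -> TS is just repeated composition.
  def compose[A, B, C](f: Stage[A, B], g: Stage[B, C]): Stage[A, C] =
    a => f(a).flatMap(g)

  // Stand-in packet types (real headers are obviously not 2 or 4 bytes).
  case class UdpDatagram(payload: List[Int])
  case class RtpPacket(payload: List[Int])
  case class TsPacket(pid: Int, payload: List[Int])

  // Toy stages: drop a fake 2-byte RTP header, then split into 4-byte TS cells.
  val udpToRtp: Stage[UdpDatagram, RtpPacket] =
    d => List(RtpPacket(d.payload.drop(2)))

  val rtpToTs: Stage[RtpPacket, TsPacket] =
    p => p.payload.grouped(4).toList.map(TsPacket(0x100, _))

  val udpToTs: Stage[UdpDatagram, TsPacket] = compose(udpToRtp, rtpToTs)
}
```

The pipeline version of this adds typed encode/decode directions and buffer management, but the composition property is the same.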
<br />
And finally, I am also looking at using Spray (soon to be Akka-Http) to build web interfaces, so potentially we will have a RESTful way of controlling the recording of a lot of streams at the same time.<br />
<br />
Towards a giant PVR.<br />
<br />
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com2tag:blogger.com,1999:blog-1936859683699997238.post-48860870504831081432013-07-31T03:21:00.001-07:002013-07-31T03:21:31.255-07:00Ansible playbook for BigCouch<div>
Last week, as I was stuck at home looking after my sickly little one (nurseries are wonderful breeding grounds for germs), I decided to kill time by reading about <a href="http://bigcouch.cloudant.com/">BigCouch</a>, a cluster solution for CouchDB which is just about to be <a href="https://blogs.apache.org/couchdb/entry/welcome_bigcouch">merged</a> into the CouchDB trunk. </div>
<div>
<br /></div>
<div>
As Ansible is a nice hammer, and BigCouch was decidedly looking like a -clustered- nail, I decided to cook a playbook to install and configure BigCouch. </div>
<div>
<br /></div>
<div>
The playbook is at: <a href="https://github.com/kafecho/ansible-playbooks">https://github.com/kafecho/ansible-playbooks</a></div>
<div>
<br /></div>
<div>
As you can see, it is not terribly complicated, and it nicely translates the English instructions into actionable code. </div>
<div>
<br /></div>
<div>
The one thing I don't like about the playbook is the step to configure the cluster itself. It is done via RESTful calls to some BigCouch URLs, and I really ought to check the state of the cluster before modifying it.</div>
<div>
<br /></div>
<div>
For instance, there is no need to add a node to a cluster if it is already there. For the time being, I just ignore the status code coming from the HTTP PUT, but I should really do something more clever, maybe by using CURL to read the status of the cluster before attempting to change it.</div>
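One way to make the cluster-join step idempotent along those lines is to query the nodes database first and only issue the PUT when the node is missing. A rough shell sketch (the node name is a placeholder, and this assumes BigCouch's backend port, 5986 by default, where the nodes database lives):

```shell
# Hypothetical sketch: only join the node if it is not already in the cluster.
NODE="bigcouch@node1.example.com"

if ! curl -s "http://localhost:5986/nodes/_all_docs" | grep -q "$NODE"; then
  curl -s -X PUT "http://localhost:5986/nodes/$NODE" -d '{}'
fi
```

The same check-then-act logic could live in the playbook itself, registering the result of the GET and guarding the PUT with a when: condition.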
<div>
<br /></div>
<div>
I am sure this must be possible with Ansible, but I have not yet figured it out.</div>
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-6990181181585267342013-07-28T07:17:00.000-07:002013-07-28T07:17:08.183-07:001st Scala Jozi meetup.I held the first ever <a href="http://www.meetup.com/Scala-Jozi/events/126551302/">Scala Jozi meetup</a> on the 22nd of July at <a href="http://jozihub.org/">JoziHub</a>, a very nice space to foster collaboration and help local startups.<div>
<br /></div>
<div>
About 14 people attended my <a href="http://kafecho.github.io/presentations/introduction-to-scala">Introduction to Scala</a> "talk". As I had no idea of the skill levels of the audience, I kept the presentation fairly generic. </div>
<div>
<br /></div>
<div>
I was very happy about the feedback I received shortly after the presentation. For the next meetup, I am thinking of doing something a lot more practical, some simple coding following the "Koans" approach. </div>
<div>
<br /></div>
<div>
<a href="http://www.scalakoans.org/">ScalaKoans</a> seems like a good start, I might use it, or make my own version with a more local flavour. </div>
<div>
<br /></div>
<div>
Many thanks to JoziHub for hosting the event: they are super nice people, and the facilities are great.</div>
<div>
<br /></div>
<div>
I haven't yet scheduled the next meetup, but I am planning to run them on a regular basis (e.g. the second Monday of each month). </div>
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-38761510064248268702013-07-18T23:54:00.000-07:002013-07-18T23:54:48.139-07:00ThoughtWorks Tech Radar Event.Last night, in flashy Sandton, ThoughtWorks South Africa gave a <a href="http://info.thoughtworks.com/tech_radar_event_18_july_2013_registration_page.html">presentation</a> about their Tech <a href="http://www.thoughtworks.com/radar">Radar</a> with an emphasis on continuous delivery.<br />
<br />
The Tech Radar itself is quite interesting, although most of the entries in it are not really surprising, especially if part of your job as a researcher is to keep track of what is out there.<br />
<br />
Quite a few companies have started doing their own radar to assess on a regular basis what technology stacks they should be using. This seems like a good idea.<br />
<br />
They detailed how they build the radar: it is based on continuous feedback from technical people on what works and what does not, and the information is then collated and curated by a "board". Rinse. Repeat.<br />
<br />
On the topic of continuous delivery, it is something I would really like to adopt in the project I am working on, purely because we are such a small team and have to rely on as much automation as possible. I don't think it is that hard, as most of the tooling is there. As ThoughtWorks mentioned, in most cases the barriers to adoption are not technical; they are mostly social. That stuff is not really new (check out these prescient <a href="http://www.hpl.hp.com/research/smartfrog/presentations/distributed_testing_with_smartfrog_slides.pdf">slides</a> from HP Labs from 7 years ago), but the technology has become a lot easier.<br />
<br />
Tool-wise, things are (and have always been) a lot easier on Linux than on Windows. ThoughtWorks mentioned that they were having a lot of success managing Windows systems with Chef and Octopus (for .Net apps).<br />
<br />
They had a lot of success with NuGet (http://nuget.org/), which is a tool to manage dependencies for .Net and C++ projects.<br />
<br />
On the Linux side, some people are moving to Ansible because it is a lot easier to set up and learn. I spoke to someone in charge of https://www.cloudafrica.net/ and one of his engineers was able to set up a 4-node Riak cluster from scratch in 4 hours (including the time it took to install and learn Ansible).<br />
<br />
Other points I've noted:<br />
<br />
<ul>
<li>Use virtual machines for your services.</li>
<li>Your entire infrastructure can be described by code and data and resides under source control.</li>
<li>You should be able to throw away VMs and re-provision new ones very quickly (hence the need for a configuration management tool like Chef or Ansible).</li>
<li>Test the robustness of your system by randomly killing machines. This one very much reminded me of Colin Low's <b>Brutality Manager</b> that he built for the <a href="http://www.hpl.hp.com/SE3D/">SE3D</a> project I worked on at HP Labs.</li>
</ul>
<div>
<br /></div>
<div>
If anything, the event made me really appreciate the freedom that I have working in an R&D environment (now at <a href="http://www.quantel.com/">Quantel</a>, and before that at <a href="http://www.hpl.hp.com/">HP Labs</a>): that is, the ability to continuously explore new technologies and use them in projects. I am very grateful for that.</div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-44553942392050629852013-07-12T02:14:00.000-07:002013-07-12T02:14:26.957-07:00An easy way to find bugs<p>For work, I am building a video processing framework based on <a href="http://www.akka.io/">Akka</a> actors. Among other things, I have to read and write packetised binary data such as MPEG-2 Transport Stream or RTP packets.
</p>
<p> Scala provides two techniques which make this easier. I use case classes to model the various types of packet, and type classes to marshal / unmarshal the objects to and from a binary representation. Type classes can be used for many other things, but serialisation (see this <a href="https://github.com/debasishg/sjson/wiki/typeclass-based-json-serialization">JSON</a> example) is a very pragmatic use case.
</p>
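As a toy illustration of the type-class approach (a simplified sketch with a made-up Header type and my own BinaryFormat names, not the actual RTP code):

```scala
// Hedged sketch of type-class based binary serialisation.
// Header, BinaryFormat and BinaryFormats are illustrative names only.
case class Header(version: Byte, length: Short)

trait BinaryFormat[T] {
  def write(t: T): Array[Byte]
  def read(bytes: Array[Byte]): T
}

object BinaryFormats {
  // The type class instance knows how to pack/unpack a Header (big-endian).
  implicit val headerFormat: BinaryFormat[Header] = new BinaryFormat[Header] {
    def write(h: Header): Array[Byte] =
      Array(h.version, ((h.length >> 8) & 0xFF).toByte, (h.length & 0xFF).toByte)
    def read(bytes: Array[Byte]): Header =
      Header(bytes(0), (((bytes(1) & 0xFF) << 8) | (bytes(2) & 0xFF)).toShort)
  }
  // Generic entry points, resolved via the implicit instance.
  def write[T](t: T)(implicit f: BinaryFormat[T]): Array[Byte] = f.write(t)
  def read[T](bytes: Array[Byte])(implicit f: BinaryFormat[T]): T = f.read(bytes)
}
```

The nice property is that the case class stays free of serialisation logic; the packing rules live in the type class instance.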
<p>
Here is an example for an <a href="http://en.wikipedia.org/wiki/Real-time_Transport_Protocol">RTP Packet</a>
<pre class="brush:scala">/** Model an RTP Packet. See [[http://en.wikipedia.org/wiki/Real-time_Transport_Protocol]] for more details.*/
case class RtpPacket(
marker : Boolean,
payloadType : Byte,
sequenceNumber : Short,
timestamp : Int,
ssrc :Int,
csrcList : List[Int],
extensionHeader : Option[RtpPacketExtensionHeader],
payloadData : ByteString,
paddingBytes : ByteString
){
require(csrcList.length <= 15,s"An RTP packet can have between 0 to 15 CSRC elements. The provided list contains ${csrcList.length} elements.")
require(paddingBytes.length <= 254 , "An RTP packet cannot have more than 254 bytes worth of padding.")
}
</pre>
</p>
<p>
While the case class is straightforward, the corresponding type class instances are a bit more complicated, because they have to pack and unpack bits, shorts, ints, longs, and other fields with arbitrary bit lengths. This kind of code is difficult to write properly, a bit difficult to read, and, more importantly, demands some serious testing.
</p>
<p>
Instead of generating test values by hand, I use a third technique: property-based testing, provided by the <a href="http://www.scalatest.org/">ScalaTest</a> framework in combination with <a href="https://github.com/rickynils/scalacheck">ScalaCheck</a>.
</p>
<p>
Essentially, the testing framework generates test values automatically and asserts that what I serialize to binary must deserialize to the same value.
</p>
<p>
The code is as follows:
<pre class="brush:scala">
"An arbitrary RtpPacket" must {
"be correctly encoded / decoded to / from binary" in {
val rtpPacketExtensionHeaderGen = for(
option <- arbitrary[Option[Short]];
bytes <- arbitrary[Array[Byte]].filter(a => a.length %4 == 0).map(a => ByteString(a))
)yield option.map(id => RtpPacketExtensionHeader(id, bytes))
val byteStringGen = arbitrary[Array[Byte]].map(a => ByteString(a))
val rtpPacketGen = for(
marker <- arbitrary[Boolean];
payloadType <- Gen.choose[Byte](0,0x7F);
sequenceNumber <- arbitrary[Short];
timestamp <- arbitrary[Int];
ssrc <- arbitrary[Int];
csrcList <- Gen.listOfN(15,arbitrary[Int]);
extensionHeader <- rtpPacketExtensionHeaderGen;
payloadData <- byteStringGen;
paddingBytes <- byteStringGen.filter( bs => bs.length <=254)
) yield RtpPacket(
marker,
payloadType,
sequenceNumber,
timestamp,
ssrc,
csrcList,
extensionHeader,
payloadData,
paddingBytes
)
forAll(rtpPacketGen)({ packet =>
val packed = ByteStringSerialization.write(packet)
val parsed = ByteStringSerialization.read[RtpPacket](packed)
parsed.paddingBytes must equal (packet.paddingBytes)
})
</pre>
</p>
<p> There is not much code to write, yet the framework is ferociously efficient at finding bugs in my marshalling code. Here are some of the issues ScalaTest "found" for me:
<ul>
<li>If I use a byte field to store the length of an array, then the array can be at most 255 bytes long</li>
<li>A Scala byte is signed, so if you store a length value in a byte, don't forget to properly mask it back to an Int</li>
<li>Stupid errors when I confuse bit offset with byte offset.</li>
<li>etc...</li>
</ul>
</p>
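The signed-byte issue in particular is easy to trip over; a two-line illustration:

```scala
// A length of 200 stored in a (signed) Scala Byte reads back as -56
// unless it is masked back into an Int.
val storedLength: Byte = 200.toByte
val naive: Int  = storedLength.toInt   // sign-extended: -56
val masked: Int = storedLength & 0xFF  // what was actually meant: 200
```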
<p>
Ever since I started using ScalaCheck, I've been amazed at its ability to home in on bugs in my code. It is a truly powerful tool.
</p>
<script type="text/javascript">
SyntaxHighlighter.highlight();
</script>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-28363968979506818552013-07-08T06:55:00.000-07:002013-07-08T06:55:03.699-07:00Upcoming talks / presentations.Hi all,<br />
<br />
If you happen to be in or around Johannesburg (and why wouldn't you be?), I will be giving two tech talks soon. One is an <a href="http://www.meetup.com/Scala-Jozi/events/126551302/">introduction to Scala</a>, which I am giving to the Scala Jozi meetup group (which I also started a short while ago).<br />
<br />
The following month I will be talking about <a href="http://www.ansibleworks.com/">Ansible</a> to the Johannesburg Linux User Group. You can find details about that meetup <a href="http://www.meetup.com/Jozi-Linux-User-Group-JLUG/events/128044972/">here</a>.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-6778014294401649012013-04-15T23:13:00.002-07:002013-04-15T23:30:26.174-07:00Bit Wrangling in Scala part deux (with Macros)Hi all,<br />
<br />
In my previous post, I talked about a simple library for extracting bit-level information from a sequence of bytes stored in ByteString.<br />
<br />
Although the library works as expected, it is a little bit slower than the bit-shifting code you would write by hand, even though the hand-written code is a bit difficult to read at first glance.<br />
<br />
For example, let's say you want to read 7 bits at offset 9 within a byte string, then the following would be equivalent:<br />
<br />
<ul>
<li>int(<span class="s2">bytes</span>,<span class="s3">9</span>,<span class="s3">7</span>)</li>
<li>(<span class="s1">bytes</span>(<span class="s2">1</span>) & <span class="s2">0x7f</span>)</li>
</ul>
<div>
However, the first approach is a bit slower, because the code has branches and loops. See the code below:</div>
<div>
<pre class="brush:scala">
/** Read an integer value at a given bit offset with a given length within a ByteString */
def int(bytes: ByteString, offset: Int, length: Int): Int = {
  val startByte = offset / 8
  val endByte = (offset + length - 1) / 8
  var i = startByte
  var value: Int = 0
  var toRead: Int = 0
  var readSoFar: Int = 0
  while (i <= endByte) {
    if (i == startByte) {
      val firstOffset = offset % 8
      toRead = length.min(8 - firstOffset)
      value = int(bytes(i), firstOffset, toRead)
      readSoFar += toRead
    } else {
      value = (value << toRead)
      toRead = 8.min(length - readSoFar)
      val lastValue = int(bytes(i), 0, toRead)
      value = value | lastValue
      readSoFar += toRead
    }
    i += 1
  }
  value
}
</pre>
</div>
<div>
In order to speed up the code, I decided to look at Scala Macros. With macros, I was essentially able to shift most of the computation to compile time; all you end up with is an abstract syntax tree which captures the essence of the bit-shifting computation. The key thing was to understand how to create and combine ASTs, but once I got that, the rest of the work was not too complicated. </div>
<div>
<br /></div>
<div>
Here is the macro to extract a boolean value at a given offset:</div>
<div>
<pre class="brush:scala">
/** Implementation of the boolean macro */
def booleanImpl(c: Context)(bytes: c.Expr[ByteString], offset: c.Expr[Int]): c.Expr[Boolean] = {
  import c.universe._
  implicit val cc: c.type = c
  val Literal(Constant(o: Int)) = offset.tree
  val what = nthElement(c)(bytes.tree, o / 8)
  val masked = Apply(Select(what, op("&")), const(bitMasks(o % 8)))
  c.Expr(Apply(Select(masked, op("!=")), const(0)))
}
</pre>
</div>
<div>
Basically, the macro expects to find the <b>offset</b> value at compile time, so it has to be a constant literal. The macro constructs the AST by building the following sub-trees:</div>
<div>
<ul>
<li>A tree to select the nth element within a byte string</li>
<li>A tree to apply the bitwise AND operator with a given mask to the tree above.</li>
<li>And finally, a tree to apply a boolean inequality test to the tree created above. </li>
</ul>
<div>
The key thing is that instead of manipulating an expression directly, you manipulate ASTs, but otherwise, the logic is the same. </div>
</div>
<div>
<br /></div>
<div>
Speed-wise, the macro version is faster than its non-macro equivalent, and it is as fast as a hand-written version, with the benefit (IMHO) that the code is easy to read and write. </div>
<div>
<br /></div>
<div>
I've posted the macro code on my github repository. Have a look at: <a href="https://github.com/kafecho/BitWrangler">https://github.com/kafecho/BitWrangler</a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-49142357219903826512013-04-11T03:38:00.000-07:002013-04-11T03:39:13.005-07:00Bit wrangling in ScalaRecently, at work, I've been using actors in Scala with Akka to create an application which does, among other things, processing of the <a href="http://en.wikipedia.org/wiki/Real-time_Transport_Protocol">RTP Protocol</a>.<br />
<br />
I've been <a href="https://gist.github.com/kafecho/5353393">playing</a> with the new I/O features in Akka 2.2 which are really neat when it comes to receiving packets (presented as Akka ByteString) over UDP.<br />
<br />
The RTP protocol, although not very complicated, requires a fair amount of bit-level manipulation to extract useful information from the UDP packets. If you are familiar with bit-level operations, it is not difficult to write code to parse packets; however, that code is a bit difficult to read. When you get into RTCP or H.264 byte streams, things get a lot more complicated, so I am looking for ways to keep the code easy to read and maintain.<br />
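To give a flavour of the bit twiddling involved, here is how a few fields from the first two bytes of an RTP header can be extracted by hand (a sketch of mine, with the field layout as defined in RFC 3550):

```scala
// First byte: version (2 bits), padding, extension, CSRC count (4 bits).
// Second byte: marker (1 bit) and payload type (7 bits).
def rtpVersion(b0: Byte): Int     = (b0 >> 6) & 0x03
def rtpCsrcCount(b0: Byte): Int   = b0 & 0x0F
def rtpMarker(b1: Byte): Boolean  = (b1 & 0x80) != 0
def rtpPayloadType(b1: Byte): Int = b1 & 0x7F
```

Correct, but exactly the kind of code that is hard to skim: every extraction is a shift plus a mask, and the masks have to be read against the wire format.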
<br />
I've been experimenting with a little utility class for bit-level manipulation, based on Akka ByteStrings.<br />
Here is what it looks like:
<br />
<div class="gistLoad" data-id="5362350" id="gist-5362350">
<br />
The stuff works quite well, but compared to a hand-written parser, it is a bit slower (just under twice as slow). I have not written Scala Macros before, but I have the feeling that this is the kind of computation which could be moved to compile time in a macro, basically turning the bit-shifting code into a set of instructions which can be optimised at compile time. Ideally, it would be something like Erlang's bit pattern matching, although I don't know if that is possible in Scala.<br />
<br />
I will try that next.<br />
<br />
<br />
<br />
<br />
<script src="https://raw.github.com/moski/gist-Blogger/master/public/gistLoader.js" type="text/javascript"></script></div>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com2tag:blogger.com,1999:blog-1936859683699997238.post-51798137310828821152013-03-25T23:56:00.000-07:002013-03-25T23:56:00.697-07:00Scala in ten minutes.Last night, I did a quick presentation about Scala at the Johannesburg JUG (or <a href="http://www.meetup.com/Jozi-JUG/">JoziJUG</a> as it is affectionately known).<br />
<br />
The event, held at the <a href="https://www.google.co.za/maps?q=Oracle+maxwell+drive&hl=en&ll=-26.043346,28.097481&spn=0.004646,0.005665&sll=-26.01672,28.127379&sspn=2.379261,2.900391&hq=Oracle&hnear=Maxwell+Dr,+City+of+Johannesburg+Metropolitan+Municipality,+Gauteng&t=h&fll=-26.04352,28.097641&fspn=0.004646,0.005665&z=17">Oracle</a> building, was good fun, and I got to meet some cool people.<br />
<br />
When preparing the slides, I found it very difficult to introduce Scala in 10 minutes, so I should probably try again with a nicer, more concise slide deck. Scala has so many nice features that it is hardly possible to cram them all into a 10-minute presentation.<br />
<br />
I've posted my slides and some demo code under my GitHub account @ <a href="https://github.com/kafecho/scala-in-ten-minutes">https://github.com/kafecho/scala-in-ten-minutes</a>
Thanks to the JoziJUG for a nice evening!!!<br />
<br />Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-52153631352728273852013-03-20T02:06:00.000-07:002013-03-20T02:06:57.156-07:00Using Ansible for configuration managementIn a previous life, I used to work as a research engineer in <a href="http://www.hpl.hp.com/bristol/">HP Labs Bristol</a> (UK).<br />
<br />
Over the 11 years or so I was there, I worked on some very cool projects (including a talking pot plant).<br />
<br />
By far the project I had the most fun with was FrameFactory, a cloud-computing CGI rendering service that we rolled out as part of the <a href="http://www.hpl.hp.com/se3d/se3d-background.html">SE3D showcase</a>.
FrameFactory was essentially a fairly complex distributed system with various moving parts that had to be configured and coordinated properly. To that end, we designed the system around <a href="http://wiki.smartfrog.org/wiki/display/sf/SmartFrog+Home">SmartFrog</a> and <a href="http://wiki.smartfrog.org/wiki/display/sf/Anubis">Anubis</a>. This was handy, as the engineering teams for both of those projects were sitting in nearby cubicles.<br />
<br />
Anubis let us do 10 years ago what you now take for granted with Zookeeper, that is, directory services and the coordination of distributed systems. SmartFrog is a very generic configuration management tool with a very nice DSL for expressing configuration data, and a deployment engine for pushing that configuration onto remote nodes.
I used SmartFrog in a lot of projects, and I even wrote a compiler plug-in for it that would auto-generate Java code from EMF. So as you can see, it is very generic.<br />
<br />
The downside of SmartFrog is that it is too generic, so if you wanted to use it to install software and configure a Linux node with a particular role, you would still have to write a lot of low-level drivers yourself. Which we did, and I remember writing SmartFrog scripts to deploy Xen VMs and move them around using live migration. This was fun, but it felt like you had to write too much (Java) code.
<br />
<br />
Fast forward to 2013, and I find myself working on another large-scale distributed system that eventually has to be configured and deployed on Linux.
I am not a Linux sysadmin, and I knew about <a href="http://www.opscode.com/chef/">Chef</a> and <a href="https://puppetlabs.com/">Puppet</a>, but given the workload in that project (I have plenty on my plate), I got slightly worried just by reading about the various ways Chef or Puppet (Solo, master, knife, etc.) could be used.<br />
<br />
Then I came across <a href="http://ansible.cc/">Ansible</a>, and I was really pleased to find myself up and running within 30 minutes of reading the tutorial.
The main web page does a good job of describing what Ansible does, so I won't bother repeating it here. What I found is that Ansible makes it possible to turn Linux how-to documents (like how to enable EPEL on CentOS) into workable, reusable scripts that are still very easy to understand.
<br />
<div class="gistLoad" data-id="5203217" id="gist-5203217">
<br />
Loading ....</div>
<br />
Ansible has a comprehensive set of modules for installing packages, copying files over SSH, and tweaking remote text files via its template mechanism.
As a more complex example, I tried to see whether I could use Ansible to deploy the lower stack required for high availability on Linux. Basically, this means installing Corosync and Pacemaker and ensuring that they are properly configured with the right IP addresses and so on.<br />
<br />
What I had done so far was to capture all the steps required in documentation that could be reproduced (i.e. retyped); however, I realized I could just come up with Ansible playbooks (recipes) to do the same, and they would be just as clear.
Here is a set of scripts which is sufficient to set up Corosync and Pacemaker in UDP mode.
<br />
<br />
<div class="gistLoad" data-id="5099412" id="gist-5099412">
Loading ....</div>
<br />
I also recently created playbooks to deploy CouchDB and Solr. I think tweaking those playbooks to set up replicated CouchDB or Solr clusters should not be too hard either.<br />
<br />
Although I could (and should) give Puppet and Chef a closer look, I greatly value simplicity and I've been really impressed by Ansible's ease of use (the community is very active and friendly as well).<br />
<br />
I've recommended that we use it as the Configuration Management solution for our current project.
<script src="https://raw.github.com/moski/gist-Blogger/master/public/gistLoader.js" type="text/javascript"></script>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com6tag:blogger.com,1999:blog-1936859683699997238.post-42492707234689634812013-03-14T23:10:00.000-07:002013-03-14T23:10:14.104-07:00Rebooting.... 50 % complete.Last year, in February 2012, we decided to relocate the family to the sunnier shores of South Africa. After quite a few years spent in France and in the UK, we wanted to be closer to (at least one) family.<br />
<br />
We did it and eventually left the UK for good at the end of August 2012. It's been manic since we've arrived here, things are very different in so many ways, yet coming from the French Antilles (Guadeloupe, Martinique), the whole place feels somehow familiar.<br />
<br />
Now that things are somehow settled, I am planning on writing a bit more about what I do at work, and also about this big and vast country we live in.<br />
<br />
Stay tuned.Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-31745921741608592902012-04-11T04:41:00.003-07:002012-04-11T04:41:51.373-07:00Upcoming replication features in MySQL 5.6<h3>
Overview</h3>
<div>
The upcoming version of MySQL (5.6) has some neat features to simplify and automate the management of clusters of replicated databases (see [1]).</div>
<div>
<br /></div>
<div>
MySQL now provides utilities ([2]) to automatically detect the failure of a master database and promote the most viable slave to take over. It is also easy to perform a controlled switch-over for maintenance purposes.</div>
<div>
<br /></div>
<div>
I had been wondering about writing a simple utility using Scala (because I quite like that language) and Zookeeper for creating self-healing clusters of replicated databases. In short, I was planning on doing the following:</div>
<div>
<ul>
<li>Keep track of slave statuses in Zookeeper. </li>
<li>Detect a failure of the master database, either by detecting a failure of the node itself, or by detecting that the MySQL process is no longer running.</li>
<li>Handle fail-over to the most up-to-date slave. This would use a custom ZK leader election algorithm.</li>
<li>Manage the cluster's virtual IP address. When the master DB changes, the machine on which it is running can acquire the virtual IP address assigned to the DB cluster.</li>
</ul>
It seems that the MySQL Python-based tools will provide most of these functionalities out of the box. </div>
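The fail-over step above boils down to promoting the slave that has replicated the furthest. A sketch of that selection (in Scala, my pet language; the SlaveStatus type and the binlog (file, position) model are illustrative, not the MySQL utilities' actual data structures):

```scala
// Hedged sketch of the promotion choice: the slave with the highest
// (binlog file, position) pair is the most up-to-date and wins.
case class SlaveStatus(host: String, binlogFile: String, binlogPosition: Long)

def mostViableSlave(slaves: Seq[SlaveStatus]): Option[SlaveStatus] =
  if (slaves.isEmpty) None
  else Some(slaves.maxBy(s => (s.binlogFile, s.binlogPosition)))
```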
<div>
<br /></div>
<div>
I am looking forward to trying out those new features. </div>
<div>
<br /></div>
<h3>
References</h3>
<div>
[1] <a href="http://dev.mysql.com/tech-resources/articles/mysql-5.6-replication.html">http://dev.mysql.com/tech-resources/articles/mysql-5.6-replication.html</a></div>
<div>
[2] <a href="http://drcharlesbell.blogspot.co.uk/">http://drcharlesbell.blogspot.co.uk/</a></div>
<div>
<br /></div>
<div>
<br /></div>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-25332081595340713402010-03-26T04:15:00.000-07:002010-03-26T06:14:59.038-07:00Introducing the MAvatar<div style="font-family: inherit;">MAvatar (or MosaicAvatar) is a Scala pet project that I have been working on lately based around Twitter. The idea is quite simple: MAvatar creates a mosaic of a user's profile picture. The mosaic itself is composed of profile images chosen among the user's friends. Essentially, for each pixel in the profile picture, the algorithm picks the friend's profile with the closest average color.</div><div style="font-family: inherit;"><br />
</div><div style="font-family: inherit;">The end result is formatted as a web page which features a zoom and displays some useful (if not hurtful) statistics to tell you which friends were the most or least used in the process of creating the mosaic. The process is entirely automated and works for any Twitter user as long as (1) her/his account is not protected and (2) she/he is actually following other users.</div><div style="font-family: inherit;"></div><div style="font-family: inherit;"><br />
</div><div style="font-family: inherit;">MAvatar is mainly built using Scala (whose XML support really wins when it comes to parsing the Twitter API's XML and generating HTML) and features some simple jQuery and jQuery UI.</div><span style="font-family: inherit;">The source code is available from my GitHub repository at: </span><a href="http://github.com/kafecho/MosaicAvatar" style="font-family: inherit;">http://github.com/kafecho/MosaicAvatar</a><span style="font-family: inherit;">. I will be adding a Maven build file at some point.</span><br />
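The core of the matching step can be sketched as follows (my own simplified names, not the actual MAvatar code): for each pixel, pick the friend whose avatar has the nearest average colour in RGB space.

```scala
// Hedged sketch of nearest-average-colour matching, using squared
// Euclidean distance in RGB space. Rgb and closestFriend are illustrative.
case class Rgb(r: Int, g: Int, b: Int)

def distanceSquared(a: Rgb, b: Rgb): Int = {
  val dr = a.r - b.r; val dg = a.g - b.g; val db = a.b - b.b
  dr * dr + dg * dg + db * db
}

// friends maps a screen name to the average colour of that friend's avatar.
def closestFriend(pixel: Rgb, friends: Map[String, Rgb]): String =
  friends.minBy { case (_, avg) => distanceSquared(pixel, avg) }._1
```

Counting how often each friend wins this contest is also exactly where the "most or least used" statistics come from.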
<div style="font-family: inherit;"><br />
</div><div style="font-family: inherit;">I do think that MAvatar is quite a fun way of seeing yourself through the eyes of your friends. </div><div style="font-family: inherit;"></div><div style="font-family: inherit;">I am currently planning to roll it out as a public web site designed using the Lift Framework, however as I am back to work from next week, this might take me a little while longer.</div><div style="font-family: inherit;"><br />
</div><div style="font-family: inherit;">And to finish, here is a MAvatar I've done before: <a href="http://mosaicavatars.s3.amazonaws.com/gbelrose.html">http://mosaicavatars.s3.amazonaws.com/gbelrose.html</a> ( I hope my least useful friends don't get offended :-) The UI has <b>only</b> been tested on Safari, Firefox and Chrome. I am not sure what it would look like on IE.</div><div style="font-family: inherit;"><br />
</div><div style="font-family: inherit;"><br />
</div>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-50191887825537287692010-02-16T11:40:00.000-08:002010-02-16T11:40:01.129-08:00French pancakes (Crèpes) recipe.Today is Shrove Tuesday, and in the UK, it is pancake day. Yum. <br />
Pancakes might be yummy, but I think the French version (crêpes) is even yummier. <br />
<br />
Here is a simple recipe that is almost impossible to mess up.<br />
<br />
You will need:<br />
<ul><li>250g of flour.</li>
<li>500ml of Milk</li>
<li>2 eggs</li>
<li>A bit of ground cinnamon</li>
<li>Vanilla extract</li>
<li>A bit of grated lime rind</li>
<li>The finest brown rum you can get your hands on. I would recommend rum from Martinique, but then again, I am biased. </li>
</ul>Simply mix the eggs and the flour into a thick paste, then add the milk very slowly till you end up with a smooth batter (there should be no lumps). Add the cinnamon, vanilla and lime, and then some rum to your heart's content. I then add a little bit (2 tablespoons) of sunflower oil to the mixture. And voilà, the mixture is ready to go.<br />
<br />
Use about half a ladle to pour the crêpe mixture onto a non-stick frying pan. It works better if your pan is very flat. You will need about 2 minutes per side, till nice and not too brown.<br />
<br />
Serve with whatever you like (brown sugar, cinnamon, etc.)<br />
<br />
Bon appétit.Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com2tag:blogger.com,1999:blog-1936859683699997238.post-21671529191521720492010-01-20T03:00:00.000-08:002013-03-24T09:31:01.922-07:00Time This!As I was going through the exercise of implementing the Fibonacci function in Scala, I had to add some code to measure how long each implementation (naive recursion vs. tail recursion) took. The pattern is quite simple: measure the time at the start of the operation, do some work, measure the time again at the end of the operation, and log the difference in a user-friendly message. Although simple, the pattern is quite verbose, and if you use it a lot, it results in a lot of boilerplate. Scala makes life a bit easier by allowing me (and you) to write your own control structures. So I wrote one called <b>timeThis</b> which basically times (in milliseconds or nanoseconds) the execution of the block of code it contains. You use it like this:
<div class="gistLoad" data-id="5232545" id="gist-5232545"/>
And the output looks like this:
<div class="gistLoad" data-id="5232548" id="gist-5232548"/>
Here are some of the Scala features which make this possible:<br />
<ul><li>Methods can take functions as parameters: this applies to code blocks.</li>
<li>Partially applied functions: you can turn a method of any object into a function that can then be assigned to a variable (and passed around as a parameter).</li>
<li>Use a Scala <b>object</b> to store "static" methods.</li>
<li>Use Scala import to import an <b>object</b>'s methods.<br />
</li>
</ul>And here is my implementation of timeThis which makes use of the above features. I hope the comments are sufficiently self-explanatory.<br />
<div class="gistLoad" data-id="5232557" id="gist-5232557"/>
The code is quite simple, and I mainly wrote it to gain a better understanding of the way Scala handles functions. If you find this interesting, I suggest you have a look at <a href="http://aboisvert.github.com/stopwatch/">StopWatch</a>, an open-source library for monitoring your JVM applications. It has a ton of interesting features, including statistics and nice graphs.
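In essence, the control structure boils down to a method with a by-name parameter. Here is a minimal sketch of the idea (the names and output format below are illustrative; the full version lives in the gist above):

```scala
object Timer {
  // A home-made control structure: the block parameter is passed "by name"
  // (=> T), so its evaluation is deferred until we reference it inside the method.
  def timeThis[T](label: String)(block: => T): T = {
    val start = System.nanoTime
    val result = block // the block is evaluated (and hence measured) here
    val stop = System.nanoTime
    println(label + " took " + (stop - start) / 1000000 + " ms")
    result
  }
}

object TimerDemo {
  def sumTo(n: Long): Long = (1L to n).sum

  def main(args: Array[String]): Unit = {
    import Timer.timeThis
    // The curly braces make the call site read like a built-in control structure.
    val total = timeThis("summing to 1000000") {
      sumTo(1000000L)
    }
    println("total = " + total)
  }
}
```

Because the block is passed by name, the code between the curly braces is only evaluated inside <b>timeThis</b>, which is what makes the measurement possible.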
<script src="https://raw.github.com/moski/gist-Blogger/master/public/gistLoader.js" type="text/javascript"></script>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com2tag:blogger.com,1999:blog-1936859683699997238.post-23769283330202030682010-01-19T03:02:00.000-08:002010-01-19T08:00:17.485-08:00Fibonacci and tail recursion in ScalaYesterday, I went for a job interview at a start-up company looking for people with experience in Scala and knowledge of functional programming in general. Among other things, I had to complete some written tests on sheets of paper, including providing an implementation of the famous Fibonacci function.<br />
<br />
First of all, I have to say that although I enjoy programming, it has not been my main occupation for the past 10 years, and I am most certainly not 100% fluent in all the languages I have used in the past. Writing code on a piece of paper reminded me of dictation exercises from my school days: I was bound to make mistakes. I much prefer the interactive comfort of an IDE or a REPL, where you can quickly rectify and learn from your mistakes.<br />
<br />
In the paper exercise, I eventually went for the naive recursive implementation. I call it naive because it did not leverage Scala's support for tail-recursion optimization. I knew it was possible, but I just could not think of an appropriate answer in the time that was allocated to me.<br />
<br />
As I was not happy with my paper answers, I spent some time again this morning thinking and coding the solution in Scala.<br />
<script class="brush: scala" type="syntaxhighlighter">
<![CDATA[
object Fibonacci {
  /*
   * Naive implementation of the Fibonacci function.
   * It uses recursion, but the Scala compiler can't apply tail-call optimization
   * because the last operation executed is not the function itself but an addition.
   * If you run this on a big enough number, Scala will eventually run out of stack space.
   */
  def fib(n : Int) : BigInt = n match {
    case 0 => 0
    case 1 => 1
    case _ => fib(n - 1) + fib(n - 2)
  }

  /*
   * Tail-recursive version of the Fibonacci function.
   * As the last operation invoked is the function itself, the compiler can replace the recursion with a loop.
   */
  def fib2(n : Int) : BigInt = tailFib(n, 0L, 1L)

  def tailFib(n : Int, a : BigInt, b : BigInt) : BigInt = if (n == 0) a else tailFib(n - 1, a + b, a)

  def main(args : Array[String]) : Unit = {
    val n = 40
    var res : BigInt = 0L
    var start = System.nanoTime
    res = fib2(n)
    var stop = System.nanoTime
    println("It took " + (stop - start) + " nano secs to compute fib of " + n + " with tail recursion, result = " + res)
    start = System.nanoTime
    res = fib(n)
    stop = System.nanoTime
    println("It took " + (stop - start) + " nano secs to compute fib of " + n + " with naive recursion, result = " + res)
  }
}
]]>
</script><br />
<br />
The tail-recursive implementation is a lot faster than the naive recursion. See the log output below.<br />
<br />
"It took 181000 nano secs to compute fib of 40 with tail recursion, result = 102334155<br />
It took 9680325000 nano secs to compute fib of 40 with naive recursion, result = 102334155"<br />
<br />
Furthermore, running the naive implementation with a large enough number just brings my CPU to a halt.Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-69974935006414790012009-12-11T02:54:00.000-08:002009-12-11T02:55:30.952-08:00Looking forward to going home.I am from the French Antilles, born in Guadeloupe and raised in Martinique, and I spent most of this morning wondering why I am still living in the UK.<br />
<br />
The weather today in Wiltshire is truly miserable. It is no longer raining; instead, the rain has been replaced by an extremely thick and cold fog. But there is a silver lining: I am flying home tomorrow to warmer and sunnier Caribbean weather.<br />
<br />
Martinique, j'arrive...<br />
<br />
Really looking forward to it.Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-36578123503122934852009-12-09T05:58:00.000-08:002013-07-12T03:02:07.774-07:00Configuration management with ScalaConfiguration management tools are essential things to have in your toolbox if your role is to manage large-scale distributed IT systems. Very good free and open-source solutions such as <a href="http://reductivelabs.com/products/puppet/">Puppet</a> or <a href="http://wiki.smartfrog.org/wiki/display/sf/SmartFrog+Home">SmartFrog</a> exist for those familiar with Ruby or Java. It is not really my intention to implement yet another configuration management tool in Scala just for the sake of it. I know for a fact, having worked closely with the SmartFrog team at HP Labs Bristol, that they are rather complicated things to get right.<br />
<br />
But that said....<br />
<br />
The other day, I got thinking about Scala traits and the potential they offer for creating composable, refinable components that express configuration information and configuration logic. Other Scala features such as its type-safe nature and its package system could enable an elegant and simple way to represent configuration data and logic.<br />
<br />
Let's start with a simple example that we will gradually extend. Let's assume I want to set up a cluster of nodes where each node is configured with an admin user account.<br />
<br />
I start off by modelling a user with a User trait: a collection of properties such as the user's name, uid, etc. The trait also contains (mock, in this case) logic that can be executed to add or remove users on a computer. Notice how the User trait is itself composed from other traits.<br />
<pre class="brush: scala">
trait Deployable{
  def deploy
  def undeploy
}

trait Named{
  var name : String = _
}

trait User extends Named with Deployable{
  var uid : Short = _
  override def deploy = println ("useradd -u " + uid + " " + name)
  override def undeploy = println ("userdel -r " + name)
}
</pre><br />
Users are to be deployed on nodes that I've modelled with a node trait (and once again that trait itself is composed from others). A node has an IP address and provides a convenience method to add things (such as a user) to it. Things added to the node will be deployed as the node is deployed by the hypothetical deployment runtime.<br />
<pre class="brush: scala">
trait WithChildren{
  var includes : List[Any] = List()
  def contains (a : Any*) = a.foreach ( item => includes = item :: includes)
}

trait Node extends WithChildren with Deployable{
  var ip : String = _
  def deploy = println ("Actual logic to deploy children goes here.")
  def undeploy = println ("Actual logic to undeploy children goes here.")
}

trait Config extends WithChildren with Deployable{
  def deploy = println ("Actual logic to deploy nodes goes here.")
  def undeploy = println ("Actual logic to undeploy nodes goes here.")
}
</pre><br />
The Config trait models a collection of nodes. Again, in a real system, it would be the thing I actually pass to a deployment runtime for enactment.<br />
<br />
With all the pieces in place, let's see a simple configuration:<br />
<pre class="brush: scala">
object config1 extends Config{
  contains{
    new Node{
      ip = "192.168.1.1"
      contains{
        new User{
          name = "admin"
          uid = 102
        }
      }
    }
  }
}
</pre><br />
In the configuration above, I deploy the admin user on a single node. I can refine this a little bit by subclassing the node trait to create an AdminNode trait which includes the admin user by default.<br />
<pre class="brush: scala">
object adminUser extends User{
  name = "admin"
  uid = 102
}

trait AdminNode extends Node{
  contains{
    adminUser
  }
}
</pre><br />
In the snippet above, I've created an object adminUser, which is essentially a trait that has been instantiated. The adminUser object can no longer be refined (subclassed), as Scala (unlike SmartFrog) is not a prototype-based language. However, the object can still be reused and composed into other traits. The other important thing to note is that, in Scala, the logic executed when a trait's constructor is invoked is the entire body of the trait. Thanks to this feature, I can define new variables, change existing ones or invoke methods within the curly braces, without having to define an explicit constructor method as I would if using Java or Groovy.<br />
<br />
<pre class="brush: scala" >
object config2 extends Config{
  contains{
    new AdminNode{ ip = "192.168.1.2" }
  }
}
</pre><br />
<br />
If you want to modify a particular instance of AdminNode in place let's say to add another user account, it is also easily done.<br />
<br />
<pre class="brush: scala">
object config3 extends Config{
  contains{
    new AdminNode{
      ip = "192.168.1.3"
      contains{
        new User{
          name = "demo"
          uid = 103 // a fresh uid: 102 is already taken by the admin user
        }
      }
    }
  }
}
</pre><br />
<br />
One of the benefits of using Scala directly to write the configuration is that I can use constructs such as loops to create or modify the data. For instance, let's create a set of admin nodes from a list of IP addresses. <br />
<pre class="brush: scala">
object config4 extends Config{
  List("192.168.1.2", "192.168.1.3", "192.168.1.4").foreach{ addr =>
    contains{
      new AdminNode{ ip = addr }
    }
  }
}
</pre><br />
<br />
Just as I modelled users, I can also model applications running on nodes (again by composing traits). In the example below, I model generic applications installed through packages (via a package manager à la apt-get) and controlled via Linux services.<br />
<br />
<pre class="brush: scala">
trait Package extends Deployable{
  var packages : List[String] = List()
  override def deploy : Unit = packages.foreach(s => println("apt-get install " + s))
  override def undeploy : Unit = packages.foreach(s => println("apt-get remove " + s))
}

trait Services{
  var services : List[String] = List()
  def start : Unit = services.foreach(s => println("/etc/init.d/" + s + " start"))
  def stop : Unit = services.foreach(s => println("/etc/init.d/" + s + " stop"))
}
</pre><br />
<br />
Using those traits, I can then model an application such as an Apache Web Server, or refine an Apache Web server into a Django application server running as an apache module.<br />
<pre class="brush: scala" >
trait WebServer{
  var port = 8080
}

trait Apache2 extends Services with Package with WebServer{
  packages = "apache2" :: packages
  services = "apache2" :: services
  port = 80
}

trait Django extends Apache2{
  packages = "libapache2-mod-python" :: "python-django" :: packages
}
</pre><br />
I can then include instances of Apache2 or Django in my hypothetical cluster.<br />
<pre class="brush: scala">
object config5 extends Config{
  contains(
    new AdminNode{
      ip = "192.168.1.3"
      contains{
        new Apache2{ port = 8182 }
      }
    },
    new AdminNode{
      ip = "192.168.1.4"
      contains{
        new Django{ port = 87 }
      }
    }
  )
}
</pre><br />
As the configuration information is written directly in Scala, I automatically gain access to interesting features:<br />
<ul><li>the compiler highlights syntax errors in the description</li>
<li>I can use an IDE for syntax highlighting, auto-completion and refactoring</li>
<li>descriptions can be organised into packages and imported as required.</li>
</ul><br />
Obviously, as a thought experiment, this ought to be taken with a pinch of salt. I've only touched on some of the language features that are appropriate for expressing composable and reusable models of configuration data. I have not tried (and most likely won't try) to implement a distributed deployment engine that could deploy such configuration descriptions.
<script type="text/javascript">
SyntaxHighlighter.highlight();
</script>
Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com0tag:blogger.com,1999:blog-1936859683699997238.post-67644723540931026722009-12-02T05:35:00.000-08:002009-12-02T08:24:43.216-08:00A bit of Scala a day keeps Java at bay: TraitsThe title of this blog post may seem a bit controversial to some, but the more I learn about Scala, the less inclined I feel to go back to Java. I had a similar feeling when learning Groovy, but now that I know Scala, I don't think I want to go back to Groovy either.<br />
<br />
Today, I am going to scratch the surface of Scala traits, which can be thought of as Java interfaces on steroids. Traits encapsulate field and method definitions in units which can easily be composed when creating classes. Scala does not support multiple class inheritance but achieves a similar result by allowing you to mix in as many traits as you want. Here is an example with 2 traits and a class that mixes them in.<br />
<br />
<script class="brush: scala" type="syntaxhighlighter">
<![CDATA[
trait TraitA{
  def doYourAThing = println ("I do what As do best.")
}

trait TraitB{
  def doYourBThing = println ("I am a B, and like all Bs, I am quite friendly.")
}

class C extends TraitA with TraitB
]]>
</script><br />
And then you can just invoke the methods from traits A and B which have been mixed into C.<br />
<br />
<script class="brush: scala" type="syntaxhighlighter">
<![CDATA[
object TestTraits extends Application{
  val c = new C
  c.doYourAThing
  c.doYourBThing
}
]]>
</script><br />
And the output is:<br />
<script class="brush: text" type="syntaxhighlighter">
<![CDATA[
I do what As do best.
I am a B, and like all Bs, I am quite friendly.
]]>
</script><br />
A very useful property of traits is that they are stackable. A trait can invoke any method or field which belongs to the trait or class that precedes it in the hierarchy, as long as the types are compatible. An example is probably better than a long-winded explanation:<br />
<script class="brush: scala" type="syntaxhighlighter">
<![CDATA[
class Speaker{
  def introduce = println ("Let's talk about Traits today.")
}

trait FrenchSpeaker extends Speaker{
  override def introduce = {
    println ("Bonjour")
    super.introduce
  }
}

trait EnglishSpeaker extends Speaker{
  override def introduce = {
    println ("Good morning")
    super.introduce
  }
}

object TestTraits extends Application{
  object englishFirst extends Speaker with FrenchSpeaker with EnglishSpeaker
  object frenchFirst extends Speaker with EnglishSpeaker with FrenchSpeaker
  englishFirst.introduce
  frenchFirst.introduce
}
]]>
</script> <br />
Which outputs:<br />
<script class="brush: text" type="syntaxhighlighter">
<![CDATA[
// English 1st
Good morning
Bonjour
Let's talk about Traits today.
// French first.
Bonjour
Good morning
Let's talk about Traits today.]]>
</script><br />
<br />
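To give one more hypothetical example of stacking, here is a sketch of composing a web resource from traits that each handle a single HTTP verb. This is plain Scala, not the API of any real HTTP library; the names and status strings are purely illustrative:

```scala
// Base behaviour: any verb we don't know about gets a 405 response.
trait Resource {
  def handle(verb: String): String = "405 Method Not Allowed"
}

// Each trait claims one verb and delegates everything else up the stack.
trait GetSupport extends Resource {
  override def handle(verb: String): String =
    if (verb == "GET") "200 OK" else super.handle(verb)
}

trait PostSupport extends Resource {
  override def handle(verb: String): String =
    if (verb == "POST") "201 Created" else super.handle(verb)
}

// The endpoint supports exactly the verbs it mixes in.
object Endpoint extends Resource with GetSupport with PostSupport

object VerbDemo {
  def main(args: Array[String]): Unit = {
    println(Endpoint.handle("GET"))    // 200 OK
    println(Endpoint.handle("POST"))   // 201 Created
    println(Endpoint.handle("DELETE")) // 405 Method Not Allowed
  }
}
```

Each trait handles its own verb and passes every other request up the stack via super, so the default 405 response only kicks in when no mixed-in trait claims the verb.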
Those examples are quite simple, but I do hope they get the point across. I've used traits in my PubSubHubbub client to compose Restlets from traits which individually handle HTTP verbs such as GET, POST and DELETE. If you want to find out more, the code is at: <a href="http://github.com/kafecho/scala-push/blob/master/src/main/scala/org/kafecho/push/Traits.scala">http://github.com/kafecho/scala-push/blob/master/src/main/scala/org/kafecho/push/Traits.scala</a>Anonymoushttp://www.blogger.com/profile/05553512413022071024noreply@blogger.com1