El cheapo 10Gbps networking

I’ve been hitting the limits of gigabit ethernet at home for quite a while now, and as I spend more time working with cloud technologies this has started to frustrate me.

I’d heard of other folk getting good results with second hand Infiniband cards and decided to give it a go myself.

I bought two Voltaire dual-port Infiniband adapters – 4X SDR PCI-E x4 cards. Add in a 2 metre 8470 cable, and we’re in business.

There are other, more comprehensive guides around to setting this up – e.g. http://davidhunt.ie/wp/?p=2291 or http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-4.html

On Ubuntu the hardware was autodetected; all I needed to do was:

sudo modprobe ib_ipoib
sudo apt-get install opensm # on one machine

And configure /etc/network/interfaces – e.g.:

iface ib1 inet static
address 192.168.2.3
netmask 255.255.255.0
network 192.168.2.0
up echo connected >`find /sys -name mode | grep ib1`
up echo 65520 >`find /sys -name mtu | grep ib1`

With no further tuning I was able to get 2Gbps doing linear file copies via Samba, which I suspect is rather pushing the limits of my circa 2007 home server – I’ll investigate further to identify where the bottlenecks are, but I suspect the networking itself is fine – netperf got me 6.7Gbps in a trivial test.
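
If you want a rough sanity check without installing netperf, a few lines of Python will push enough traffic over a TCP socket to spot gross problems. This is only a sketch: the port number and test duration are arbitrary choices.

import socket
import sys
import time

PORT = 9999                      # arbitrary free port
BLOCK = b'\0' * (1024 * 1024)    # 1 MiB of zeroes per send
SECONDS = 10

def serve():
    # Sink everything a single client sends us.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass
    conn.close()
    srv.close()

def send(host):
    # Push zeroes for SECONDS and report the achieved rate.
    sent = 0
    deadline = time.time() + SECONDS
    conn = socket.create_connection((host, PORT))
    while time.time() < deadline:
        conn.sendall(BLOCK)
        sent += len(BLOCK)
    conn.close()
    print('%.2f Gbit/s' % (sent * 8 / SECONDS / 1e9))

if __name__ == '__main__':
    if sys.argv[1] == 'serve':
        serve()
    else:
        send(sys.argv[1])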

First experience implementing StreamResult

My last two blog posts were largely about the needs of subunit, but a key measure of any protocol is how easy it is to work with in a high-level language.

In the weekend and evenings I’ve done an implementation of a new set of classes – StreamResult and friends – that provides:

  • Adaptation to and from the existing TestResult APIs (the 2.6 and below API, the 2.7 API, and the testtools extended API).
  • Multiplexing multiple streams together.
  • Adding timing data to a stream if it is absent.
  • Summarising a stream.
  • Copying a stream to multiple outputs (sketched below).
  • A split out API for instructing a test run to stop.
  • A simple test-at-a-time stream processor that makes it easy to just deal with tests rather than the innate complexities of an event based interface.
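
To give a flavour of how small these pieces are, here is a sketch of the copying case. The real class and method names live in the linked pull request; a generic event method is assumed here for illustration only.

class CopyingStreamResult(object):
    """Forward every event to several target stream results (sketch only)."""
    def __init__(self, targets):
        self.targets = targets
    def startTestRun(self):
        for target in self.targets:
            target.startTestRun()
    def stopTestRun(self):
        for target in self.targets:
            target.stopTestRun()
    def status(self, **event):
        # event carries things like test id, status, routing code, timestamp
        # and attachment data; a multiplexer differs only in adjusting the
        # routing code before forwarding.
        for target in self.targets:
            target.status(**event)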

So far the code has been uniformly simple to write. I started with an API that included an ‘estimate’ function, which I’ve since removed – I don’t believe the complexity is justified; enumeration is not significantly more expensive than counting, and runners that want to be efficient can either not enumerate or remember the enumeration from prior runs.

The documentation in the linked pull request is a good place to start to get a handle on the API; I’d love feedback.

Next steps for me are to do a subunit protocol revision that maps to the Python API – both parser and generator – and see how it feels. One wrinkle there is that the reason for doing this is to fix intrinsic limits in the existing protocol – so doing forward and backward wire protocol compatibility would defeat the point. However… we can make the output side explicitly choose a protocol version, and if we can autodetect the protocol version in the parser, even if we cannot handle mixed streams we can get the benefits of the new protocol once data has been detected. That said, I think we can start without autodetection during prototyping and add it later. Without autodetection, programs like TestRepository will need configuration options to control what protocol variant to expect. This could be done by requiring the new protocol and providing a stream filter that can be deployed when needed.
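
Autodetection itself should be cheap. As a sketch of the idea, assuming the new protocol starts each packet with a distinctive signature byte (the value below is a placeholder, not a decision):

PACKET_SIGNATURE = b'\xb3'  # placeholder value, not a decided constant

def sniff_protocol(first_byte):
    """Pick a parser based on the first byte of a stream or line."""
    if first_byte == PACKET_SIGNATURE:
        return 'new-packet-parser'
    return 'v1-line-parser'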

Time to revise the subunit protocol

Subunit is seven and a half years old now – Conrad Parker and I first sketched it up at a CodeCon – camping and coding, a brilliant combination – in mid 2005.

revno: 1
committer: Robert Collins <robertc@robertcollins.net>
timestamp: Sat 2005-08-27 15:01:20 +1000
message:  design up a protocol with kfish

It has proved remarkably resilient as a protocol – the basic nature hasn’t changed at all, even though we’ve added tags, timestamps, and support for attachments of arbitrary size.

However a growing number of irritations have been building up with it. I think it is time to design another iteration of the protocol, one that will retain the positive qualities of the current protocol, while helping it become suitable for the next 7 years. Ideally we can keep compatibility and make it possible for a single stream to be represented in any format.

Existing design

The existing design is a mostly human-readable, line-oriented protocol that can be sniffed out from the regular output of ‘make’ or other build systems. Binary attachments are done using HTTP chunking, and the parser has to maintain state about the current test, tags, timing data and test progression [a simple stack of progress counters]. How to arrange subunit output is undefined, and how to select tests to run is undefined.

This makes writing a parser quite easy, and the tagging and timestamp facilities allow multiplexing streams from two or more concurrent test runs into one with good fidelity – but it also requires that state be buffered until the end of a test, as two tests cannot be executing at once.
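
For concreteness, an ordinary (uncorrupted) stream in the current protocol looks roughly like this: one directive per line, with multi-line details delimited by brackets.

test: foo
success: foo
test: bar
failure: bar [
Traceback (most recent call last):
  ...
AssertionError: bar is broken
]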

Dealing with debuggers

The initial protocol was intended to support dropping into a debugger – just pass each line read through to stdout, and connect stdin to the test process, and voila, you have a working debugger connection. This works, but the current line-based parsers make using it tedious – the line-buffered nature of it makes feedback on what has been typed fiddly, and stdout tends to be buffered, leading to an inability to see print statements and the like. All in-principle fixable, right?

When running two or more test processes, which test process should stdin be connected to? What if two or more drop into a debugger at once? Which process receives what is being typed is more luck than anything else.

We’ve added some idioms in testrepository that control test execution using a similar but different format – one test per line to list tests, with runners permitting listing and selecting by such a list. This works well, but the inconsistency with subunit itself is a little annoying – you need two parsers and two output formats.

Good points

The current protocol is extremely easy to implement for emitters, and the arbitrary attachments and tagging features have worked extremely well. There is a comprehensive Python parser which maps everything into Python unittest API calls (an extended version of the standard, with good backwards compatibility).

Pain points

The debugging support was a total failure, and the way the parser depraminates its toys when a test process corrupts an outcome line is extremely frustrating. (Other tests execute, but the parser sees them as non-subunit chatter and passes their lines straight through to stdout.)

Dealing with concurrency

The original design didn’t cater for concurrency. There are several concurrency issues – the corruption issue (covered in more detail below) and a cluster of multiplexing problems. Consider two levels of nested concurrency: a supervisor process such as testrepository starts 2 (or more, but 2 is sufficient to reason about the issue) subsidiary worker processes (I1 and I2), each of which starts 2 subsidiary processes of their own (W1, W2, W3, W4). Each of the 4 leaf processes outputs subunit, which gets multiplexed in the 2 intermediary processes, and then again in the supervisor. Why would there be two layers? A concrete example is using testrepository to coordinate test runs on multiple machines at once, with each machine running a local testrepository to broker tests amongst the local CPUs. This could be done with 4 separate ssh sessions and no intermediaries, but that only removes a fraction of the issues. What issues?

Well, consider some stdout chatter that W1 outputs. That will get passed to I1 and from there to the supervisor and captured. But there is nothing marking the chatter as belonging to W1: there is no way to tell where it came from. If W1 happened to fail, and there was a diagnostic message printed, we’ve lost information. Or at best muddled it all up.

Secondly, buffering – imagine that a test on W1 hangs. I1 will know that W1 is running a test, but has no way to tell the supervisor (and thus the user) that this is the case, without writing to stdout [and causing a *lot* of noise if that happens a lot]. We could have I1 write to stdout only if W1’s test is taking more than 5 seconds or something – but this is a workaround for a limitation of the protocol. Adding to the confusion, the clocks on W1 and W3 may be very skewed, so timestamps for everything have to be carefully synchronised by the multiplexer.

Thirdly, scheduling – if W1/W2 are on a faster machine than W3/W4, then a partition of equal-timed tests onto each machine will leave one machine idle before the other finishes. It would be nice to be able to pass tests to run to the faster machine when it goes idle, rather than having to start a new runner each time.

Lastly, what to do when W1 and W2 both wait for user input on stdin (e.g. passphrases, debugger input, $other). Naively connecting stdin to all processes doesn’t work well. A GUI supervisor could connect a separate fd to each of I1 and I2, but that doesn’t help when it is W1 and W2 reading from stdin.

So the additional requirements over baseline subunit are:

  1. make it possible for stdout and stderr output to be captured from W1 and routed through I1 to the supervisor without losing its origin. It might be chatter from a noisy test, or it might be build output. Either way, the user probably will benefit if we can capture it and show it to them later when they review the test run. The supervisor should probably show it immediately as well – the protocol doesn’t need to care about that, just make it possible.
  2. make it possible to pass information about tests that have not completed through one subunit stream while they are still incomplete.
  3. make it possible (but optional) to pass tests to run to a running process that accepts subunit.
  4. make it possible to route stdin to a specific currently-running process like W1. This and point 3 suggest that we need a bidirectional protocol rather than the solely unidirectional protocol we have today. I don’t know of a reliable, portable way to tell when some process is seeking such input, so that will be up to the user I think. (e.g. printing (pdb) to stdout might be a sufficiently good indicator.)

Dealing with corruption

Consider the following subunit fragment:

test: foo
starting serversuccess:foo

This is a classic example of corruption: the test ‘foo’ started a server and helpfully wrote to stdout explaining that it did that, but missed the newline. As a result the success message for the test wasn’t printed on a line of its own, and the subunit parser will believe that foo never completed. Every subsequent test is then ignored. This is usually easy to identify and fix, but it’s a head-scratcher when it happens. Another way it can happen is when a build tool like ‘make’ runs tests in parallel and they output subunit onto the same stdout file handle. A third way is when a build tool like make runs two separate test scripts serially, and the first one starts a test but errors hard and doesn’t finish it. That looks like:

test: foo
test: bar
success: bar

One way that this sort of corruption can be mitigated is to put subunit on its own file descriptor, but this has several caveats: it is harder to tunnel through things like ssh, and it doesn’t solve the failing-test-script case.

I think it is unreasonable to require a protocol where arbitrary interleaving of bytes between different test runner streams will work – so the ‘make -j2’ case can be ignored at the wire level – though we should create a simple way to safely mux the output from such tests when they execute.

The root of the issue is that a dropped update leaves bad state in the parser and it never recovers. So some way to recover, or less state to carry in the parser, would neatly solve things. I favour reducing parser state as that should shift stateful complexity onto end nodes / complex processors, rather than being carried by every node in the transmission path.

Dependencies

Various suggestions have been made – JSON, Protobufs, etc…

A key design goal of the first subunit was a low barrier to entry. We keep that by being backward compatible, but making the new revision easy to work with is also a worthy goal.

High level proposal

A packetised, length-prefixed binary protocol, with each packet containing: a small signature; a length; a routing code; a binary timestamp in UTC; a set of UTF-8 tags (active only, no negative tags); a content tag – one of (estimate + number, stdin, stdout, stderr, test- + test id); a test status (one of exists/inprogress/xfail/xsuccess/success/fail/skip); an attachment name; a MIME type; a last-block marker; and a block of bytes.

The content tags:

  • estimate – the stream is reporting how many tests are expected to run. It affects everything with the same routing code only, and replaces (doesn’t adjust) any current estimate for that routing code. An estimate packet of 0 can be used to say that a routing target has shut down and cannot run more tests. Routing codes can be used by a subunit-aware runner to separate out separate threads in a single process, or even just separate ‘TestSuite’ objects within a single test run (though doing so means that they will need to process subunit and strip packets on stdin). This supersedes the stack of progress indicators that current subunit has. Estimates cannot have test status or attachments.
  • stdin/stdout/stderr: a packet of data for one of these streams. The routing code identifies the test process that the data came from/should go to in the tree of test workers. These packets cannot have test status but should have a non-empty attachment block.
  • test- + testid: a packet of data for a single test. test status may be included, as may attachment name, mime type, last-block and binary data.

Test status values are pretty ordinary. Exists is used to indicate a test that can be run when listing tests, and inprogress is used to report a test that has started but not necessarily completed.

Attachment names must be unique per routing code + testid.
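
None of the byte-level details are pinned down here, but to show how little machinery a packetised, length-prefixed format needs, the following is a deliberately simplified sketch (the signature value, field widths, and the omission of tags and attachments are all invented for illustration):

import struct
import time

SIGNATURE = 0xB3  # invented marker byte for this sketch
STATUS = ['exists', 'inprogress', 'xfail', 'xsuccess', 'success', 'fail', 'skip']

def encode_packet(route_code, test_id, status, timestamp=None):
    """Serialise one minimal packet: signature, length, timestamp, status, route, test id."""
    if timestamp is None:
        timestamp = time.time()
    route = route_code.encode('utf8')
    tid = test_id.encode('utf8')
    body = struct.pack('>dB', timestamp, STATUS.index(status))
    body += struct.pack('>H', len(route)) + route
    body += struct.pack('>H', len(tid)) + tid
    # The length covers the whole packet, header included, so a parser can
    # skip a packet it does not understand.
    return struct.pack('>BI', SIGNATURE, 5 + len(body)) + body

def decode_packet(data):
    """Inverse of encode_packet: returns (route_code, test_id, status, timestamp)."""
    sig, length = struct.unpack_from('>BI', data, 0)
    assert sig == SIGNATURE and length == len(data)
    timestamp, status = struct.unpack_from('>dB', data, 5)
    offset = 14
    (route_len,) = struct.unpack_from('>H', data, offset)
    route = data[offset + 2:offset + 2 + route_len].decode('utf8')
    offset += 2 + route_len
    (tid_len,) = struct.unpack_from('>H', data, offset)
    tid = data[offset + 2:offset + 2 + tid_len].decode('utf8')
    return route, tid, STATUS[status], timestamp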

So how does this line up?

Interleaving and recovery

We could dispense with interleaving and say the streams are wholly binary, or we can say that packets can start either after a \n or directly after another packet. If we say that binary-only is the approach to take, it would be possible to write a filter that would apply the newline heuristic (or even look for packet headers at every byte offset). I think mandating that packets start adjacent to another packet or after a \n is a small concession to make, and will avoid tools like testrepository forcing users to always configure a heuristic filter. Non-subunit content can be wrapped in subunit for forwarding (the I1 in the W1->I1->Supervisor chain would do the wrapping). This won’t eliminate corruption, but it will localise it and permit the stream to recover: the test that was corrupted will show up as incomplete, or with incomplete attachment data.

Listing

Test listing would emit many small non-timestamped packets. It may be useful to have a wrapper packet for bulk amounts of fine-grained data like listings, or for multiplexers with many input streams that will often have multiple data packets available to write at once.

Selecting tests to run

Same as for listing – while passing regexes down to the test runner to select groups of tests is a valid use case, that’s not something subunit needs to worry about: if the selection is not the result of the supervisor selecting by test id, then it is known at the start of the test run and can just be a command line parameter to the backend: subunit is relevant for passing instructions to a runner mid-execution. Because the supervisor cannot just hand out some tests and wait for the thing it ran to report that it can accept incremental tests on stdin, supervisor processes will need to be informed of that capability out of band.

Debugging

Debugging is straightforward. The parser can read the first 4 or so bytes of a packet one at a time to determine if it is a packet or a line of stdout/stderr, and then either read to the end of the line, or the binary length of the packet. So, we combine a few things: non-subunit output should be wrapped and presented to the user. Subunit that is being multiplexed and forwarded should have a routing code prepended to the packet (e.g. I1 would add ‘1’ or ‘2’ to indicate which of W1/W2 the content came from and then forward the packet; S would add ‘1’ or ‘2’ to indicate I1/I2 – the routing code is a path through the tree of forwarding processes). The UI the user is using needs to supply some means to switch where stdin is attached, and stdin input should be routed via stdin packets. When there is no routing code left, the packet should be entirely unwrapped and presented as raw bytes to the process in question.

Multiplexing

Very straightforward – unwrap the outer layer of the packet, add or adjust the routing code, and serialise a header + adjusted length + the rest of the packet as-is. No buffering is needed, so the supervisor can show in-progress tests (and how long they have been running for).
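
Concretely, reusing the hypothetical encode/decode sketch from earlier (the path separator here is likewise invented), the per-packet work in a multiplexer is tiny:

def forward(packet, hop):
    """Re-emit a packet with this forwarder's child id prefixed to its route."""
    route, test_id, status, timestamp = decode_packet(packet)
    new_route = hop + ('/' + route if route else '')
    # A real implementation would patch just the routing code and length
    # fields and copy the rest of the packet through untouched.
    return encode_packet(new_route, test_id, status, timestamp)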

Parsing / representation in Python or other languages

The parser should be very simple to write. Once parsed, this will be fundamentally different to the existing Python TestCase->TestResult API that is in use today. However it should be easy to write two adapters: old-style <-> new-style. old-style -> new-style is useful for running existing test suites and generating subunit, because that way the subunit generation is transparent. new-style -> old-style is useful for using existing test reporting facilities (such as junitxml or html TestResult objects) with subunit streams.
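
As a sketch of the old-style to new-style direction, using the StreamingResult API sketched below and the status vocabulary from the proposal (none of this is final):

import unittest

class OldToNewAdapter(unittest.TestResult):
    """Drive a streaming result from the classic TestResult API (sketch)."""
    def __init__(self, stream_result):
        super(OldToNewAdapter, self).__init__()
        self.stream_result = stream_result
    def startTest(self, test):
        super(OldToNewAdapter, self).startTest(test)
        self.stream_result.test(test.id(), 'inprogress')
    def addSuccess(self, test):
        super(OldToNewAdapter, self).addSuccess(test)
        self.stream_result.test(test.id(), 'success')
    def addFailure(self, test, err):
        super(OldToNewAdapter, self).addFailure(test, err)
        details = self._exc_info_to_string(err, test).encode('utf8')
        self.stream_result.test(test.id(), 'fail',
                                attachment_name='traceback',
                                attachment_mime='text/plain;charset=utf8',
                                attachment_eof=True,
                                attachment_bytes=details)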

Importantly though, a new TestResult style that supports the features of this protocol would enable some useful features for regular Python test suites:

  • Concurrent tests (e.g. in multiprocessing) wouldn’t need multiplexers and special adapters – a regular single testresult with a simple mutex around it would be able to handle concurrent execution of tests, and show hung tests etc.
  • The routing of input to a particular debugger instance also applies to a simple python process running tests via multiprocessing, so the routing feature would help there.
  • The listing facility and incrementally running tests would be useful too, I think – we could move to running tests concurrently with test collection happening, but this would apply to other parts of unittest than just the TestResult.

The API might be something like:

class StreamingResult(object):
    def startTestRun(self):
        pass
    def stopTestRun(self):
        pass
    def estimate(self, count, route_code=None):
        pass
    def stdin(self, bytes, route_code=None):
        pass
    def stdout(self, bytes, route_code=None):
        pass
    def test(self, test_id, status, attachment_name=None, attachment_mime=None, attachment_eof=None, attachment_bytes=None):
        pass

This would support just-in-time debugging by wiring up pdb to the stdin/stdout handlers of the result object, rather than the actual stdin/stdout of the process – a simple matter once written. Alternatively, the test runner could replace sys.stdin/stdout etc. with thunk file-like objects, which might be a good idea anyway to capture spurious output happening during a test run. That would permit pdb to Just Work (even if the test process is internally running concurrent tests… until it has two pdb objects running concurrently :)
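
As a sketch of the thunk idea, assuming the stdout handler from the API above: a trivial file-like object whose writes become routed stream events.

class ResultStdout(object):
    """File-like object that turns writes into result.stdout() events (sketch)."""
    def __init__(self, result, route_code=None):
        self.result = result
        self.route_code = route_code
    def write(self, data):
        if isinstance(data, str):
            data = data.encode('utf8')
        self.result.stdout(data, route_code=self.route_code)
    def flush(self):
        pass

# Example usage while running tests:
# sys.stdout = ResultStdout(result, route_code='1')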

Generating new streams

This should be very easy in anything except shell. For shell, we can have a command line tool that, when invoked, outputs a subunit stream for one instruction, e.g. ‘test foo completed + some attachments’ or ‘test foo starting’.

Multi-machine parallel testing of nova with testrepository

I recently added a formal interface to testrepository to enable cross-machine scaling of test runs. As testrepository is still a static scheduler, this isn’t perfect, but it’s quite a minimal interface, which makes it easy to implement. I will likely evolve it in reaction to feedback and experience.

In the long term I’d love to have a super-generic tool that matches that interface, so the project VCS copy of .testr.conf can just call out to it. I don’t have that yet, but I do have a simple by-hand implementation that I use to run nova’s tests across my personal laptop, desktop and work laptop.

Testr models this by assuming each test running process can be mapped to a single ‘instance id’ (which could be a chroot, VM, cloud instance, …) and then running one or more commands in the instance before disposing of it.

This by-hand implementation consists of 4 things:

  1. A tiny script to rsync my source directory to the relevant places before I run tests. (This takes <2 seconds on my home wifi.)
  2. A script to allocate instance ids (I just use ints).
  3. A script to discard them.
  4. And a script to copy tempfiles onto the target machine and run a given command.

I do my testing in lxc containers, because I like my primary environment to be free of project-specific quirks and workarounds. lxc is not needed though, if you don’t want it.

So, to set this up for yourself:

  1. on each host, make an lxc container (e.g. following http://wiki.openstack.org/DependsOnUbuntu)
  2. start them all (lxc-start -n nova -d)
  3. Make SSH config entries for the lxc containers, so you can get at them remotely. (Make sure your host * rules are at the end of the file, otherwise the master overrides won’t work [and you might not notice for some time…]):
    Host desktop-nova.lxc
    # lxc addresses may be present on localhost too, so namespace the control
    # path to avoid connecting to the wrong container.
      ControlPath ~/.ssh/master-lxc-%r@%h:%p
      hostname 10.0.3.19
      ProxyCommand ssh 192.168.1.106 nc -q0 %h %p
    
    Host hplaptop-nova.lxc
    # lxc addresses may be present on localhost too, so namespace the control
    # path to avoid connecting to the wrong container.
      ControlPath ~/.ssh/master-lxc-%r@%h:%p
      hostname 10.0.3.244
      ProxyCommand ssh 192.168.1.116 nc -q0 %h %p
  4. make a script to copy your nova source tree to each test location. I called mine ‘sync’
    #!/bin/bash           
    cd $(dirname $0)
    echo syncing in $(pwd) 
    (rsync -a . desktop-nova.lxc:source/openstack/nova --delete-after && echo dell done) &
    (rsync -a . hplaptop-nova.lxc:source/openstack/nova --delete-after && echo hp done)
  5. Make sure you have the base directory on each location
    ssh desktop-nova.lxc mkdir -p source/openstack
    ssh hplaptop-nova.lxc mkdir -p source/openstack
  6. Sync your code over.
    ./sync
  7. And check tests run by running a few.
    ssh desktop-nova.lxc "cd source/openstack/nova && ./run_tests.sh compute"
    ssh hplaptop-nova.lxc "cd source/openstack/nova && ./run_tests.sh compute"

    This will check the test environment: we’re not going to be running tests on each node via run_tests.sh or even testr (because it gets immediately meta), but if this fails, later attempts won’t work. Your test virtualenv is inside the source tree, so it is copied implicitly by the sync.

  8. Decide what concurrency you want. For me, I picked 12: I have a desktop i7 with 4 cores and two laptops with 2 cores each, with hyperthreading enabled on all of them, so 12 sits between the core count (8) and the thread count (16); I may rebalance it in future. A higher number assumes less contention between ALUs and other elements of the core pipeline, and I expect quite some contention because most of nova’s unit tests are CPU bound, not I/O bound. If the test servers are not busy, I can always raise it later.
  9. Create scripts to create / dispose / execute logical worker threads.
  10. Creation. I call this ‘instance-provision’ and all it does is find the lowest ints not currently allocated and return them.
    #!/usr/bin/env python
    import os
    import sys
    
    if not os.path.isdir('.instances'):
        os.mkdir('.instances')
    
    running_ids = os.listdir('.instances')
    count = int(sys.argv[1])
    top = count + len(running_ids)
    ids = [str(i) for i in range(top)]
    new = set(ids) - set(running_ids)
    for instance_id in new:
        open('.instances/%s' % instance_id, 'w').close()
    print(' '.join(new))
  11. Disposal is easy: remove the file marking the instance as in-use.
    #!/bin/bash
    echo freeing $@
    cd .instances
    rm $@
  12. Execution is a little trickier. We need to run some commands locally, and others remotely by copying the temp files testr has set up over to the machine, sshing to the remote machine, cd’ing to the right directory, sourcing the virtualenv, and finally running the command.
    #!/bin/bash
    instance="$(($1 % 4))"
    case $instance in
    [0]) node=
         local="true"
         ;;
    [1]) node=hplaptop-nova.lxc
         local=""
         ;;
    [2-3]) node=desktop-nova.lxc
         local=""
         ;;
    *)   echo "Unknown instance $instance" >&2
         exit 1
         ;;
    esac
    shift
    files=
    # accumulate files to copy
    while [ "--" != "$1" ]; do 
    files="$files $1"
    shift ; done 
    shift   
    if [ -n "$files" -a -z "$local" ]; then
        echo copying $files to node.
        for f in $files; do
            rsync $f $node:$(dirname $f) ;
        done
    fi  
    if [ -n "$local" ]; then
        eval $@
    else
        echo ssh to $node
        ssh $node "cd source/openstack/nova && . .venv/bin/activate && $@"
    fi
  13. Finally, tell testr how to use this. (Don’t commit this change to nova, as it would break other people.) Add this to your .testr.conf:
    test_run_concurrency=echo 12
    instance_provision=./instance-provision $INSTANCE_COUNT
    instance_execute=./instance-execute $INSTANCE_ID $FILES -- $COMMAND
    instance_dispose=./instance-dispose $INSTANCE_IDS

Now, when you run testr run --parallel, it will run across your machines. Just do a ./sync before running tests to get the code out there. It is possible to wrap all of this up via automation (or to include just-in-time provisioned cloud instances), but I like the results of these still-rough scripts – they strike a good balance between effort, reliability and performance.

Edit: I spent a bit of time poking at my config – it turns out that my laptop (coming up on 3 years old now) has relatively less grunt, so I’m now running mod 8, with 0 on my laptop, 1-2 on my work laptop, and 3-7 on my desktop – and interestingly, by running a proportionately overloaded set of tests I get a time reduction.

time testr run --parallel --concurrency=16
...
real 2m34.950s

Launchpads page performance report now reusable

Thanks to Corey Goldberg, one of my colleagues @ Canonical, the page performance report can now be used on regular Apache log files, rather than just the zserver trace log files that Launchpad’s middle tier generates. We use this report to identify poorly performing pages and get insight into the timing patterns of bad pages. The code lives in the Launchpad dev-utils project – instructions for checking it out and configuring it are on the wiki. If you don’t have aggregate data for your web application, I highly recommend grabbing PPR and checking it out – it’s very lightweight, and the data is extremely useful.