Testrepository roadmap 2015/16

Testrepository has been moderately successful – it's very good at some of the things it aspired to (e.g. debugging sporadic test failures in parallel environments), but other angles have not really been explored.

I've set some time aside to correct this, in large part to facilitate some important features for tempest (which currently builds its concurrency on the meta-runner included in testrepository – and I'd like to enable the tempest authors to avoid having to write gnarly concurrency code :))

So my plan is to tackle a few things in the lead up to, and perhaps just after the Tokyo OpenStack summit. I wanted to socialise the proposed changes though, and thus this blog post.

Profiles

Firstly, a long standing issue is that when one tests several different configurations, testrepository is poor at reporting failures that are configuration specific. For instance, imagine that your test suite is run with both Python 2.7 and 3.4, and both results are loaded into your repository. If a given test ‘X’ fails in the first run, and not the second… after the second run is loaded, it will be reported as ‘passing’.

My proposed fix for this is to call each such run a 'profile' and use tags to differentiate between the two runs. So you'd tag the 2.7 run perhaps 'py27' and the second 'py34', and then tell testrepository that the 'py27' and 'py34' tags are being used to identify profiles. After that, testrepository will only consider two results to apply to the same test if the profile tags match. Tags that are not specified as being for profiles (e.g. the worker-N tags that the testrepository runner adds to track the backends that tests run in) won't be considered in that comparison. This will then allow testrepository to track that each run was separate and that the results are not meant to replace each other. The use of tags allows for test matrices too, in principle – consider python version as one dimension, operating system version as another, and database engine as a third – it would be up to the user. I don't plan to directly implement a matrix system in the first iteration. A different, more dynamic model is in principle possible: don't tag things, just log events that will give clues and correlate later – that's not precluded by this tag based approach, and we can always add such a thing later.
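To make the comparison rule concrete, here is a toy sketch of the idea – the dictionary and helper below are purely illustrative, not testrepository internals:

# Illustrative only: outcomes are keyed by (profile, test id), so a py27
# failure is not overwritten when a later py34 run reports success.
latest = {}

def record(profile, test_id, status):
    latest[(profile, test_id)] = status

record('py27', 'X', 'fail')
record('py34', 'X', 'success')

failing = [key for key, status in latest.items() if status == 'fail']
print(failing)  # [('py27', 'X')] - the py27 failure remains visible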

The output for queries of the datastore needs to be updated though – we don't currently report tags in e.g. 'testr failing --list'. This is a little tricky: the listing format is intended to be a mix of nice-for-humans and machine consumption. Another approach we considered was to namespace the tests with the profile. This has a couple of disadvantages: it may break an unknown number of deployments if the chosen separator is already in use by people, and secondly, it mixes structured and free-form data in a lossy way. One example of that is that we'd start interpreting all test ids to see whether they are – or are not – namespaced with a profile: that's likely to be fragile, at best. On the other hand it would fit very easily into the list format – which is why it was appealing. On balance though, the fragility and conflation would just add technical debt. Instead, we'll do the following:

  1. Anything that needs to output a flat list of tests will output that for just one profile. An option will be added to allow querying the profiles for which results might be given. The default will be to error with a list of the available profiles if more than one profile has been used.
  2. We’ll define a minimal JSON schema for reporting multiple profiles in such places. The excellent jq tool can be used to manipulate that in shell command lines. A command line option will opt into receiving this.
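Purely as an illustration of the sort of shape that schema might take – none of the keys below are decided – the multi-profile output could look like this, and the same structure could plausibly double as the JSON input for selecting tests per profile:

import json

# Invented shape: per-profile groupings of test ids.
listing = {
    "profiles": {
        "py27": {"failing": ["pkg.tests.test_x.TestX.test_something"]},
        "py34": {"failing": []},
    }
}
print(json.dumps(listing, indent=2))
# e.g. jq '.profiles.py27.failing' would extract one profile's failures.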

Testrepository has two very related programs inside itself. There is the data store and the various queries it can do – e.g. 'testr load' and 'testr failing'. Then there is the meta-runner, which knows how to run some test processes to execute tests. While strictly speaking this is optional, it's been very convenient for working with Python tests to have the meta-runner connected to testr and able to do in-process querying.

The meta-runner will benefit from being updated as well. My intent is to make it capable of running all the tests from all the profiles the user specifies, storing that as one single run in the datastore. Two commands in particular need to change here – `testr list-tests` needs to change in line with the test listing above, and `testr run --load-list` needs to be taught how to deal with multiple profiles. I plan to add a command line option to tell it that JSON is being used, and to select tests across all profiles when a simple list or a test regex is given. Finally, the meta-runner will benefit from a command line option to select one or more profiles.

Scheduling

The meta-runner has a crude scheduler – it balances based on historic performance prior to running any backend. An online scheduler will give much greater performance in both unseeded and skewed-data cases – e.g. if many long tests fail due to a bug, the run after that will often have some workers finishing well before others, leading to slow overall test times.

The plan here is to finish the implementation of bidirectional channels to test backends, and then dispatch work to them incrementally.
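For illustration, an online dispatcher over such channels might look roughly like the sketch below. The channel objects and the wait_for_any helper are hypothetical stand-ins, not existing testrepository APIs:

def dispatch(tests, channels, wait_for_any):
    # channels: hypothetical bidirectional connections to test backends,
    # each exposing send_test(test_id).
    # wait_for_any(busy): hypothetical helper that blocks until one busy
    # backend finishes its current test and returns that backend.
    pending = list(reversed(tests))
    idle, busy = list(channels), []
    while pending or busy:
        # Hand one test at a time to every idle backend.
        while idle and pending:
            backend = idle.pop()
            backend.send_test(pending.pop())
            busy.append(backend)
        if busy:
            done = wait_for_any(busy)
            busy.remove(done)
            idle.append(done)

Because work is handed out one test at a time, a worker that gets unlucky with slow or failing tests simply receives fewer tests overall, rather than dragging the whole run out.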

Concurrency plans

Tempest wants to be able to run some tests completely independently, while others can run together arbitrarily. To facilitate this, the online scheduler will be extended to permit describing an overall plan to run through – e.g. a list of segments, where each segment describes one or more tests that can be run together. The UI to supply that to the scheduler will probably start out as a JSON file listing exact test ids, and we can iterate from there based on the tempest developers' experience.
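As a straw-man only – the format is not settled and every key below is invented – such a plan file might look like this:

import json

# Straw-man plan: segments whose tests must run in isolation, followed by
# a segment whose tests may be interleaved arbitrarily.
plan = {
    "segments": [
        {"isolated": True, "tests": ["test_id_1"]},
        {"isolated": False, "tests": ["test_id_2", "test_id_3"]},
    ]
}
print(json.dumps(plan, indent=2))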

Time to revise the subunit protocol

Subunit is seven and a half years old now – Conrad Parker and I first sketched it up at a CodeCon – camping and coding, a brilliant combination – in mid 2005.

revno: 1
committer: Robert Collins <robertc@robertcollins.net>
timestamp: Sat 2005-08-27 15:01:20 +1000
message:  design up a protocol with kfish

It has proved remarkably resilient as a protocol – the basic nature hasn't changed at all, even though we've added tags, timestamps, and support for attachments of arbitrary size.

However a growing number of irritations have been building up with it. I think it is time to design another iteration of the protocol, one that will retain the positive qualities of the current protocol, while helping it become suitable for the next 7 years. Ideally we can keep compatibility and make it possible for a single stream to be represented in any format.

Existing design

The existing design is a mostly human-readable, line-orientated protocol that can be sniffed out from the regular output of 'make' or other build systems. Binary attachments are done using HTTP chunking, and the parser has to maintain state about the current test, tags, timing data and test progression [a simple stack of progress counters]. How to arrange subunit output is undefined, and how to select tests to run is undefined.
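For reference, a small hand-written fragment in the current protocol looks something like this:

time: 2015-08-27 15:01:20Z
tags: worker-0
test: package.tests.test_foo.TestFoo.test_bar
time: 2015-08-27 15:01:21Z
success: package.tests.test_foo.TestFoo.test_bar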

This makes writing a parser quite easy, and the tagging and timestamp facilities allow multiplexing streams from two or more concurrent test runs into one with good fidelity – but it also requires that state be buffered until the end of a test, as two tests cannot be executing at once.

Dealing with debuggers

The initial protocol was intended to support dropping into a debugger – just pass each line read through to stdout, connect stdin to the test process, and voila, you have a working debugger connection. This works, but the current line-based parsers make using it tedious – the line-buffered nature of it makes feedback on what has been typed fiddly, and stdout tends to be buffered, leading to an inability to see print statements and the like. All in-principle fixable, right?

When running two or more test processes, which test process should stdin be connected to? What if two or more drop into a debugger at once? What is being typed to which process is more luck than anything else.

We've added some idioms in testrepository that control test execution via a similar but different format – tests are listed one per line, and runners are expected to support listing tests and selecting tests from such a list. This works well, but the inconsistency with subunit itself is a little annoying – you need two parsers and two output formats.

Good points

The current protocol is extremely easy to implement for emitters, and the arbitrary attachments and tagging features have worked extremely well. There is a comprehensive Python parser which maps everything into Python unittest API calls (an extended version of the standard, with good backwards compatibility).

Pain points

The debugging support was a total failure, and the way the parser throws its toys out of the pram when a test process corrupts an outcome line is extremely frustrating (other tests execute, but the parser sees them as non-subunit chatter and passes the lines through to stdout).

Dealing with concurrency

The original design didn't cater for concurrency. There are two broad concurrency issues – corruption (see below for more detail) and multiplexing. Consider two levels of nested concurrency: a supervisor process such as testrepository starts 2 (or more, but 2 is sufficient to reason about the issue) subsidiary worker processes (I1 and I2), each of which starts 2 subsidiary processes of its own (W1, W2, W3, W4). Each of the 4 leaf processes is outputting subunit which gets multiplexed in the 2 intermediary processes, and then again in the supervisor. Why would there be two layers? A concrete example is using testrepository to coordinate test runs on multiple machines at once, with each machine running a local testrepository to broker tests amongst the local CPUs. This could be done with 4 separate ssh sessions and no intermediaries, but that only removes a fraction of the issues. What issues?

Well, consider some stdout chatter that W1 outputs. That will get passed to I1 and from there to the supervisor and captured. But there is nothing marking the chatter as belonging to W1: there is no way to tell where it came from. If W1 happened to fail, and there was a diagnostic message printed, we’ve lost information. Or at best muddled it all up.

Secondly, buffering – imagine that a test on W1 hangs. I1 will know that W1 is running a test, but has no way to tell the supervisor (and thus the user) that this is the case without writing to stdout [and causing a *lot* of noise if that happens a lot]. We could have I1 write to stdout only if W1's test is taking more than 5 seconds or something – but this is a workaround for a limitation of the protocol. Adding to the confusion, the clocks on W1 and W3 may be badly skewed, so timestamps for everything have to be carefully synchronised by the multiplexer.

Thirdly, scheduling – if W1/W2 are on a faster machine than W3/W4, then a partition of equal-timed tests onto each machine will leave one machine idle before the other finishes. It would be nice to be able to pass tests to run to the faster machine when it goes idle, rather than having to start a new runner each time.

Lastly, what to do when W1 and W2 both wait for user input on stdin (e.g. passphrases, debugger input, $other). Naively connecting stdin to all processes doesn’t work well. A GUI supervisor could connect a separate fd to each of I1 and I2, but that doesn’t help when it is W1 and W2 reading from stdin.

So the additional requirements over baseline subunit are:

  1. make it possible for stdout and stderr output to be captured from W1 and routed through I1 to the supervisor without losing its origin. It might be chatter from a noisy test, or it might be build output. Either way, the user probably will benefit if we can capture it and show it to them later when they review the test run. The supervisor should probably show it immediately as well – the protocol doesn’t need to care about that, just make it possible.
  2. make it possible to pass information about tests that have not completed through one subunit stream while they are still incomplete.
  3. make it possible (but optional) to pass tests to run to a running process that accepts subunit.
  4. make it possible to route stdin to a specific currently-running process like W1. This and point 3 suggest that we need a bidirectional protocol rather than the solely unidirectional protocol we have today. I don't know of a reliable, portable way to tell when a process is seeking such input, so that will be up to the user I think. (e.g. printing (pdb) to stdout might be a sufficiently good indicator.)

Dealing with corruption

Consider the following subunit fragment:

test: foo
starting serversuccess:foo

This is a classic example of corruption: the test 'foo' started a server and helpfully wrote to stdout explaining that it did so, but missed the newline. As a result the success message for the test wasn't printed on a line of its own, and the subunit parser will believe that foo never completed. Every subsequent test is then ignored. This is usually easy to identify and fix, but it's a head-scratcher when it happens. Another way it can happen is when a build tool like 'make' runs tests in parallel and they output subunit onto the same stdout file handle. A third way is when a build tool like make runs two separate test scripts serially, and the first one starts a test but errors hard and doesn't finish it. That looks like:

test: foo
test: bar
success: bar

One way that this sort of corruption can be mitigated is to put subunit on its own file descriptor, but this has several caveats: it is harder to tunnel through things like ssh, and it doesn't solve the failing-test-script case.

I think it is unreasonable to require a protocol where arbitrary interleaving of bytes between different test runner streams will work – so the 'make -j2' case can be ignored at the wire level – though we should create a simple way to safely mux the output from such tests when they execute.

The root of the issue is that a dropped update leaves bad state in the parser and it never recovers. So some way to recover, or less state to carry in the parser, would neatly solve things. I favour reducing parser state as that should shift stateful complexity onto end nodes / complex processors, rather than being carried by every node in the transmission path.

Dependencies

Various suggestions have been made – JSON, Protobufs, etc…

A key design goal of the first subunit was a low barrier to entry. We keep that by being backward compatible, but making the new revision easy to work with is also a worthy goal.

High level proposal

A packetised, length-prefixed binary protocol, with each packet containing a small signature, length, routing code, a binary timestamp in UTC, a set of UTF8 tags (active only, no negative tags), a content tag – one of (estimate + number, stdin, stdout, stderr, test- + test id), a test status (one of exists/inprogress/xfail/xsuccess/success/fail/skip), an attachment name, mime type, a last-block marker and a block of bytes.

The content tags:

  • estimate – the stream is reporting how many tests are expected to run. It affects everything with the same routing code only, and replaces (doesn't adjust) any current estimate for that routing code. An estimate packet of 0 can be used to say that a routing target has shut down and cannot run more tests. Routing codes can be used by a subunit-aware runner to separate out threads in a single process, or even just separate 'TestSuite' objects within a single test run (though doing so means that they will need to process subunit and strip packets on stdin). This supersedes the stack of progress indicators that current subunit has. Estimate packets cannot have test status or attachments.
  • stdin/stdout/stderr: a packet of data for one of these streams. The routing code identifies the test process that the data came from/should go to in the tree of test workers. These packets cannot have test status but should have a non-empty attachment block.
  • test- + testid: a packet of data for a single test. test status may be included, as may attachment name, mime type, last-block and binary data.

Test status values are pretty ordinary. Exists is used to indicate a test that can be run when listing tests, and inprogress is used to report a test that has started but not necessarily completed.

Attachment names must be unique per routing code + testid.
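To make that shape concrete, here is a rough Python sketch of packing such a packet. Everything here – the 0xB3 signature byte, the fixed-width length fields, the field ordering – is illustrative guesswork, not a specification:

import struct, time

def pack_packet(route_code, timestamp, tags, content_tag, status,
                attachment_name, mime_type, last_block, data):
    # Illustrative layout only: every field is length-prefixed UTF8 except
    # the timestamp (a double of UTC seconds) and the last-block flag.
    def varstr(value):
        raw = value.encode('utf8') if isinstance(value, str) else value
        return struct.pack('>H', len(raw)) + raw
    body = b''.join([
        varstr(route_code),                        # e.g. '1/2'
        struct.pack('>dB', timestamp, last_block), # UTC seconds + flag
        varstr(' '.join(tags)),                    # active tags only
        varstr(content_tag),                       # e.g. 'test-' + test id
        varstr(status),                            # exists/inprogress/.../skip
        varstr(attachment_name),
        varstr(mime_type),
        varstr(data),                              # the block of bytes
    ])
    # Signature byte + 4-byte body length, then the body itself.
    return struct.pack('>BI', 0xB3, len(body)) + body

pkt = pack_packet('1', time.time(), ['py34'], 'test-pkg.tests.test_foo',
                  'success', 'traceback', 'text/plain', 1, b'')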

So how does this line up?

Interleaving and recovery

We could dispense with interleaving and say the streams are wholly binary, or we could say that packets can start either after a \n or directly after another packet. If we say that binary-only is the approach to take, it would be possible to write a filter that applies the newline heuristic (or even looks for packet headers at every byte offset). I think mandating that packets start adjacent to another packet or after a \n is a small concession to make, and it will avoid tools like testrepository forcing users to always configure a heuristic filter. Non-subunit content can be wrapped in subunit for forwarding (the I1 in the W1->I1->Supervisor chain would do the wrapping). This won't eliminate corruption, but it will localise it and permit the stream to recover: the test that was corrupted will show up as incomplete, or with incomplete attachment data.

Listing

Test listing would emit many small non-timestamped packets. It may be useful to have a wrapper packet for bulk amounts of fine-grained data like listings, or for multiplexers with many input streams that will often have multiple data packets available to write at once.

Selecting tests to run

Same as for listing – while passing regexes down to the test runner to select groups of tests is a valid use case, that's not something subunit needs to worry about: if the selection is not the result of the supervisor selecting by test id, then it is known at the start of the test run and can just be a command line parameter to the backend. Subunit is relevant for passing instructions to a runner mid-execution. The supervisor cannot just hand out some tests and then wait for the thing it ran to report that it can accept incremental tests on stdin, so supervisor processes will need to be informed about that capability out of band.

Debugging

Debugging is straightforward. The parser can read the first few bytes of a packet one at a time to determine whether it is a packet or a line of stdout/stderr, and then read either to the end of the line or to the binary length of the packet. So we combine a few things: non-subunit output should be wrapped and presented to the user; subunit that is being multiplexed and forwarded should have a routing code prepended to the packet (e.g. I1 would prepend '1' or '2' to identify which of W1/W2 the content came from, and then forward the packet; the supervisor would prepend '1' or '2' to indicate I1/I2 – the routing code is a path through the tree of forwarding processes). The UI the user is using needs to supply some means to switch where stdin is attached, and stdin input should be routed via stdin packets. When there is no routing code left, the packet should be entirely unwrapped and presented as raw bytes to the process in question.
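A sketch of that read loop, reusing the illustrative signature byte and header layout from the packet sketch above (neither is a settled part of the design):

import struct

def read_event(stream, signature=0xB3):
    # Read one byte: the signature byte means a packet follows; anything
    # else is treated as a line of non-subunit chatter to be wrapped.
    first = stream.read(1)
    if not first:
        return None                       # end of stream
    if first[0] == signature:
        raw_len = stream.read(4)
        (length,) = struct.unpack('>I', raw_len)
        return ('packet', first + raw_len + stream.read(length))
    return ('chatter', first + stream.readline())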

Multiplexing

Very straightforward – unwrap the outer layer of the packet, add or adjust the routing code, and serialise a header + adjusted length + rest of packet as-is. No buffering is needed, so the supervisor can show in-progress tests (and how long they have been running for).
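A sketch of that adjustment, again against the illustrative layout used above, and assuming (purely as an example) that route codes are '/'-separated path elements with the element nearest the supervisor first:

import struct

def add_route_prefix(packet, element):
    # Prepend a routing element (e.g. '1' for the first child) to the
    # packet's route code and fix up the body length; everything after
    # the route code is copied through untouched.
    signature, length = packet[0], struct.unpack('>I', packet[1:5])[0]
    body = packet[5:5 + length]
    (route_len,) = struct.unpack('>H', body[:2])
    route = body[2:2 + route_len].decode('utf8')
    new_route = element if not route else element + '/' + route
    encoded = new_route.encode('utf8')
    new_body = struct.pack('>H', len(encoded)) + encoded + body[2 + route_len:]
    return struct.pack('>BI', signature, len(new_body)) + new_body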

Parsing / representation in Python or other languages

The parser should be very simple to write. Once parsed, this will be fundamentally different to the existing Python TestCase->TestResult API that is in use today. However, it should be easy to write adapters in both directions between old-style and new-style. old-style -> new-style is useful for running existing test suites and generating subunit, because that way the subunit generation is transparent. new-style -> old-style is useful for using existing test reporting facilities (such as junitxml or html TestResult objects) with subunit streams.

Importantly though, a new TestResult style that supports the features of this protocol would enable some useful features for regular Python test suites:

  • Concurrent tests (e.g. in multiprocessing) wouldn't need multiplexers and special adapters – a regular single TestResult with a simple mutex around it would be able to handle concurrent execution of tests, and show hung tests etc.
  • The routing of input to a particular debugger instance also applies to a simple python process running tests via multiprocessing, so the routing feature would help there.
  • The listing facility and incrementally running tests would be useful too, I think – we could move to running tests concurrently while test collection is still happening, though this would touch other parts of unittest than just the TestResult.

The API might be something like:

class StreamingResult(object):
    def startTestRun(self):
        # Called once before any events are delivered.
        pass
    def stopTestRun(self):
        # Called once after the final event.
        pass
    def estimate(self, count, route_code=None):
        # An updated estimate of how many tests the given route will run.
        pass
    def stdin(self, bytes, route_code=None):
        # Input to be delivered to the process identified by route_code.
        pass
    def stdout(self, bytes, route_code=None):
        # Captured output from the process identified by route_code.
        pass
    def test(self, test_id, status, attachment_name=None, attachment_mime=None, attachment_eof=None, attachment_bytes=None):
        # A status and/or attachment event for a single test.
        pass

This would support just-in-time debugging by wiring pdb up to the stdin/stdout handlers of the result object, rather than the actual stdin/stdout of the process – a simple matter once written. Alternatively, the test runner could replace sys.stdin/stdout etc. with thunk file-like objects, which might be a good idea anyway to capture spurious output happening during a test run. That would permit pdb to Just Work (even if the test process is internally running concurrent tests… until it has two pdb objects running concurrently 🙂).
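A minimal sketch of such a thunk, assuming the StreamingResult API sketched above:

import sys

class ResultStream(object):
    # File-like thunk: writes are forwarded to the result object rather
    # than the real stdout, so output is captured and routed instead of
    # being interleaved on the terminal.
    def __init__(self, result, route_code=None):
        self.result = result
        self.route_code = route_code
    def write(self, data):
        self.result.stdout(data.encode('utf8'), route_code=self.route_code)
    def flush(self):
        pass

# e.g. while running tests:
# sys.stdout = ResultStream(result)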

Generating new streams

Should be very easy in anything except shell. For shell, we can have a command line tool that when invoked outputs a subunit stream for one instruction. E.g. ‘test foo completed + some attachments’ or ‘test foo starting’.