Maintainable pyunit test suites – fixtures

So a while back I blogged about maintainable test suites. One of the things I’ve been doing since is fiddling with the heart of the fixtures concept.

To refresh your memory, I’m defining fixture as some basic state you want to reach as part of doing a test. For instance, when you’ve mocked out 2 system calls in preparation for some test code – that represents a state you want to reach. When you’ve loaded sample data into a database before running the actual code you want to make assertions about – that also represents a state you want to reach. So does simply combining three or four objects so you can run some code.

Now, there are existing frameworks in python for this sort of thing. testresources and testscenarios both go some way towards this (and I am to blame for them :)), so does the zope testrunner with layers, and the testfixtures project has some lovely stuff as well. And this is without even mentioning py.test!

There are a few things that you need from the point of view of running a test and establishing this state:

  • You need to be able to describe the state (e.g. using python code) that you wish to achieve.
  • The test framework needs to be able to put that state into place when running the test. (And not before because that might interfere with other tests)
  • And the state needs to be able to be cleaned up.
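The three requirements above amount to a small contract. As a minimal sketch (this class is illustrative only, not the actual fixtures API):

```python
import shutil
import tempfile

# Illustrative sketch only: a fixture describes some state (here, a fresh
# temporary directory), knows how to put it in place, and how to undo it.
class TempDirFixture:
    def setUp(self):
        # Establish the state just before the test that needs it.
        self.path = tempfile.mkdtemp()

    def cleanUp(self):
        # Undo everything setUp did.
        shutil.rmtree(self.path)
```

The test framework calls setUp immediately before the test and cleanUp after it, so no state leaks into other tests.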

Large test suites or test suites dealing with various sorts of external facilities will also often want to optimise this process and put the same state into place for many tests. The (and I’m not exaggerating) terrible setUpClass and setUpModule and other similar helpers are often abused for this.

Why are they terrible? They are terrible because they are fragile; there is no way (defined in the contract) to check that the state is valid for the next test, and it’s common to see false passes and false failures in tests using setUpClass and similar.

So we also need some way to reuse such expensive things while still having a way to check that test isolation hasn’t been compromised.
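One way to make that isolation check explicit – a sketch with invented names, not the fixtures API – is to give the shared fixture a reset() that verifies its state between tests instead of tearing it down and rebuilding it:

```python
# Sketch with invented names: an expensive fixture reused across tests,
# which verifies between tests that no test damaged the shared state.
class SharedStateFixture:
    def setUp(self):
        self.rows = ["seed-row"]          # stand-in for expensive setup
        self._baseline = list(self.rows)  # snapshot of the known-good state

    def reset(self):
        # Called between tests instead of a full cleanUp/setUp cycle.
        if self.rows != self._baseline:
            raise AssertionError("test isolation compromised: state damaged")

    def cleanUp(self):
        self.rows = []
```

A reset that fails loudly turns a would-be false pass into a clear isolation error.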

Having looked around, I’ve come to the conclusion we’ll all benefit if there is a single core protocol for doing these things, something that can be used and built on in many different ways for many different purposes. There was nothing (that I found) that actually met all these requirements and was also tasteful enough that folk might really like using it.

I give you ‘fixtures’. Or on Launchpad. This small API is intended to be a common contract that all sorts of different higher level test libraries can build on. As such it has little to no policy or syntactic sugar.

It does have a nice core, integration with pyunit.TestCase, and I’m going to add a library of useful generic fixtures (like temporary directories, environment isolators and so on) to it. I’d be delighted to add more committers to the project, and intend to have it be both Python 2.x and 3.x compatible (if it’s not already – my CI machine isn’t back online after the move yet, and I’m short of round tuits).

Now, if you’re writing some code like:

class MyTest(TestCase):
    def setUp(self):
        foo = Foo()
        bar = Bar()
        self.quux = Quux(foo, bar)
        self.addCleanup(self.quux.done)

You can make it reusable across your code base simply by moving it into a fixture like this:

class QuuxFixture(fixtures.Fixture):
    def setUp(self):
        foo = Foo()
        bar = Bar()
        self.quux = Quux(foo, bar)
        self.addCleanup(self.quux.done)

class MyTest(TestCase, fixtures.TestWithFixtures):
    def setUp(self):
        self.useFixture(QuuxFixture())

I do hope that the major frameworks (nose, py.test, unittest2, twisted) will include the useFixture glue themselves shortly; I will offer it as a patch to the code after giving it some time to settle. Further possibilities include declared fixtures for tests, and we should be able to make setUpClass better by letting fixtures installed during it get reset between tests.

6 thoughts on “Maintainable pyunit test suites – fixtures”

  1. Hi,

    Firstly, yay for fixtures.

    Apropos trial, the other day I was trying to make use of them in a toy twisted
    project, and was disappointed to have to use a sync library in addition to the async
    one that I was already using in order to code the fixtures.

    Do you envisage support for async fixtures?

    Thanks,

    James

  2. You’re saying you want setUp to return a deferred, and __exit__ / cleanUp likewise, IIUC.

    So I think that you’d need the test environment to cooperate (duh, obviously :)). __enter__ / setUp can cleanly return a deferred; __exit__ is harder, because it’s defined synchronously as controlling the raising of exceptions.

    Now, if cleanUp is used rather than __exit__, then that could return a deferred sensibly; and we’re only dependent on the test case knowing how to handle that.

    I’d happily accept patches to make this part of the normal-way-of-use; but we’ll need to do something not-well-defined to allow code sharing between simple fixtures and ones that obey a twisted/asyncore style protocol.

    Specifically, a ‘tempdir’ fixture isn’t very useful as a deferred thing (mkdir is sync). A memcached starting fixture can usefully parallelise with a postgresql starting fixture, though.

    Maybe we just define a sensible protocol and only worry when a patch for something like the actually parallelisable things above turns up.

    But with all that in mind, trial is inherently single-test-at-a-time, so it’s not really a significant issue IMO.

  3. It’s more that I was using a library for talking to couchdb using twisted,
    so async, but then had to make use of python-couchdb (sync) for the fixtures.

    I’m not particularly interested in parallelism, more just fitting in to the project.

    Thanks,

    James

  4. IMO setting up fixtures in setUpClass misses the point again. The good idea of fixtures is that the environment needed for the test (i.e. the fixture) is orthogonal to grouping the tests by what they test (usually in a common TestCase).

    Instead, a fixture meant to keep expensive state around needs to keep it across tearDown (checking that it’s undamaged) and the next setUp, really destroying it in __del__ or via some mechanism invoked when all the tests have finished. That way the tests sharing a fixture don’t have to be in a common TestCase.
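    That scheme could be sketched like this (the caching helper and all the names here are invented for illustration, not part of any library):

```python
import atexit

_shared = {}

# Sketch of the comment's idea: a fixture that survives tearDown, is checked
# for damage before each reuse, and is only destroyed when the run ends.
def acquire_shared(factory):
    fixture = _shared.get(factory)
    if fixture is None:
        fixture = factory()
        fixture.setUp()
        atexit.register(fixture.cleanUp)  # destroy only when everything ends
        _shared[factory] = fixture
    else:
        fixture.reset()  # verify it is undamaged before the next test
    return fixture
```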

    In fact I think TestCase should be hidden away as an implementation detail, similar to how Boost.Test does it in C++. The test function would be declared as a non-member with a decorator taking a list of fixtures. It would create an ad-hoc TestCase, put the function in it, attach all the fixtures and possibly register it in a global suite. So the tests would look roughly like:

    from … import test_case

    @test_case(foo=FooFixture)
    def test_foo(self):
        self.assertTrue(self.foo.something(…))

    Um, thinking about it further, is there any advantage to registering fixtures with the test suite over the ‘with’ statement? It seems to me unit testing could be simplified to the point of simply running all functions called ‘test_*’ in a module. The assert* functions can be class methods or non-members (they don’t use any state), and with the fixtures simply using the with statement it could save quite some object creation.
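    For the one-shot case the ‘with’ version is indeed terse. A sketch using the plain context-manager protocol (again illustrative, not the fixtures API):

```python
import shutil
import tempfile

# A fixture expressed directly as a context manager, as the comment suggests.
# This trades away the reuse/reset machinery for brevity.
class TempDir:
    def __enter__(self):
        self.path = tempfile.mkdtemp()
        return self

    def __exit__(self, exc_type, exc, tb):
        shutil.rmtree(self.path)
        return False  # do not swallow test exceptions

def test_uses_tempdir():
    with TempDir() as d:
        # the state exists only for the duration of the block
        assert d.path
```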

    1. Yes, you can certainly use a single fixture with expensive state kept across setUp/cleanUp, but then you need an extended protocol to say ‘hey, we’re really finished here’. In saying that you could use a fixture there, I’m noting that you can have a *single* fixture class and use it both per-test and per-group-of-tests by delegating to it when doing group-of-tests things.

      I like that example you give with decorators; it is what I’m slowly working towards – not because I don’t like TestCase (I do), but because making things low or zero boilerplate is really beneficial, and having an implicit TestCase created as you suggest will solve the last remaining thing I was mentally whinging about with this style of test writing :).

      With assertThat + Fixture and a decorator that will inject state, I think ‘with’ will be fine for 90% of uses. Some uses will still need reset (the many-tests-one-object case), but that’s ok IMO.

      The internal structure of unittest, though, should still have a clear TestCase object – it provides a good abstraction point for the things-to-run composite.
