Testrepository roadmap 2015/16

Testrepository has been moderately successful – it’s very good at some of the things it aspired to (e.g. debugging sporadic test failures in parallel environments), but other angles have not really been explored.

I’ve set some time aside to correct this, in large part to facilitate some important features for tempest (which has its concurrency currently built on the meta-runner included in testrepository – and I’d like to enable the tempest authors to avoid having to write gnarly concurrency code :))

So my plan is to tackle a few things in the lead up to, and perhaps just after the Tokyo OpenStack summit. I wanted to socialise the proposed changes though, and thus this blog post.

Profiles

Firstly, a long-standing issue is that when one tests several different configurations, testrepository is poor at reporting failures that are configuration-specific. For instance, imagine that your test suite is run with both Python 2.7 and 3.4, and both results are loaded into your repository. If a given test ‘X’ fails in the first run and not the second, then after the second run is loaded it will be reported as ‘passing’.

My proposed fix for this is to call the name of each such run a ‘profile’ and use tags to differentiate between the two runs. So you’d tag the 2.7 run perhaps ‘py27’ and the second ‘py34’, and then tell testrepository that the ‘py27’ and ‘py34’ tags are being used to identify profiles. After that, testrepository will only consider two results to apply to the same test if their profile tags match. Tags that are not specified as being for profiles (e.g. the worker-N tags that the testrepository runner adds to track the backends that tests run in) won’t be considered in that comparison. This will then allow testrepository to track that each run was separate and that the results are not meant to replace each other. The use of tags allows for test matrices too, in principle – consider Python version as one dimension, operating system version as another, and database engine as a third – it would be up to the user. I don’t plan to directly implement a matrix system in the first iteration. A different, more dynamic model is in principle possible: don’t tag things, just log events that give clues and correlate later – that’s not precluded by this tag-based approach, and we can always add such a thing later.
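To make that matching rule concrete, here is a rough sketch (illustrative only – not testrepository’s actual internals) of keying results by test id plus profile tags, so that a py27 failure and a py34 pass of the same test are tracked as distinct results:

    # Illustrative sketch only – not testrepository's actual internals.
    PROFILE_TAGS = {'py27', 'py34'}  # tags the user has declared as profiles

    def result_key(test_id, tags):
        """Key a result by test id plus its profile tags.

        Non-profile tags (e.g. worker-0, worker-1) are ignored, so the same
        test run on different workers still maps to one logical result.
        """
        return (test_id, frozenset(tags) & PROFILE_TAGS)

    # A py27 failure and a py34 pass of the same test no longer collide:
    assert result_key('pkg.tests.test_x', {'py27', 'worker-3'}) != \
           result_key('pkg.tests.test_x', {'py34', 'worker-1'})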

The output for the queries of the datastore needs to be updated though – we don’t currently report tags in e.g. ‘testr failing --list’. This is a little tricky: the listing format is intended to be a mix of nice-for-humans and machine consumption. Another approach we considered was to namespace the tests with the profile. This has a couple of disadvantages: it may break an unknown number of deployments if the chosen separator is already in use by people, and secondly, it mixes structured and free-form data in a lossy way. One example of that is that we’d start interpreting all test ids to see whether they are – or are not – namespaced with a profile: that’s likely to be fragile, at best. On the other hand it would fit very easily into the list format – which is why it was appealing. On balance though, the fragility and conflation would just add technical debt. Instead, we’ll do the following:

  1. Anything that needs to output a flat list of tests will output that for just one profile. An option will be added to allow querying the profiles for which results might be given. The default will be to error (listing the available profiles) if more than one profile has been specified.
  2. We’ll define a minimal JSON schema for reporting multiple profiles in such places. The excellent jq tool can be used to manipulate that in shell command lines. A command line option will opt into receiving this.
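As a purely illustrative sketch of what that JSON might look like (the field names here are hypothetical – the actual schema is still to be defined):

    import json

    # Hypothetical shape only – the real schema has not been designed yet.
    listing = {
        "profiles": ["py27", "py34"],
        "tests": [
            {"id": "pkg.tests.test_x", "profiles": ["py27", "py34"]},
            {"id": "pkg.tests.test_py3_only", "profiles": ["py34"]},
        ],
    }
    print(json.dumps(listing, indent=2))

Something of this shape keeps the existing flat listing for the single-profile case, while giving jq (or any other consumer) a structured view when multiple profiles are in play.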

Testrepository has two very related programs inside itself. There is the data store and the various queries it can do – e.g. ‘testr load’ and ‘testr failing’. Then there is the meta-runner, which knows how to run some test processes to execute tests. While strictly speaking this is optional, it’s been very convenient for working with Python tests to have the meta-runner connected to testr and able to do in-process querying.

The meta-runner will benefit from being updated as well. My intent is to make it capable of running all the tests from all the profiles the user specifies, storing that as one single run in the datastore. Two commands in particular need to change here – `testr list-tests` needs to change in line with the test listing above, and `testr run --load-list` needs to be taught how to deal with multiple profiles. I plan to add a command line option to tell it that JSON is being used, and to select tests across all profiles when a simple list or a test regex is given. Finally, the meta-runner needs an option to select one or more profiles to run.

Scheduling

The meta-runner has a crude scheduler – it balances based on historic performance prior to running any backend. An online scheduler will give much better performance in both the unseeded and the skewed-data cases – e.g. if many long tests fail due to a bug, the run after that will often have some workers finishing well before others, leading to slow overall test times.

The plan here is to finish the implementation of bidirectional channels to test backends, and then dispatch work to them incrementally.
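As a rough illustration of the difference (a minimal sketch, not the meta-runner’s actual code), an online scheduler keeps a shared queue and lets each backend pull its next test as soon as it goes idle, instead of partitioning the whole list up front from historic timings:

    import queue
    import threading

    def online_schedule(test_ids, run_test, num_workers=4):
        """Dispatch tests one at a time to whichever worker is idle.

        A minimal sketch: real backends would be test processes reached
        over the bidirectional channels mentioned above, not threads.
        """
        work = queue.Queue()
        for test_id in test_ids:
            work.put(test_id)

        def worker():
            while True:
                try:
                    test_id = work.get_nowait()
                except queue.Empty:
                    return
                run_test(test_id)

        threads = [threading.Thread(target=worker) for _ in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

With a static partition, a worker stuck on a batch of slow or failing tests leaves the others idle near the end of the run; with the pull model the remaining tests flow to whichever backend frees up first.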

Concurrency plans

Tempest wants to be able to run some tests completely independently, while others can run together arbitrarily. To facilitate this, the online scheduler will be extended to permit describing an overall plan to run through – e.g. a list of segments, where each segment describes one or more tests that can be run together. The UI to supply that to the scheduler will probably start out as a JSON file listing exact test ids, and we can iterate from there based on the tempest authors’ experience.
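Purely for illustration (the file format has not been designed yet, so the field and test names below are hypothetical), such a plan might look roughly like this:

    # Hypothetical plan structure – the real file format is still to be decided.
    plan = {
        "segments": [
            # Tests listed together in a segment may run concurrently.
            {"tests": ["tempest.api.compute.test_servers.TestA",
                       "tempest.api.compute.test_servers.TestB"]},
            # A single-test segment runs entirely on its own.
            {"tests": ["tempest.api.volume.test_exclusive.TestC"]},
        ],
    }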


The merits of (careful) impatience

The Python packaging ecosystem has long desired an overhaul and implementation of designed features, but it often stalls on adoption.

I think it’s time to propose a guiding principle for incremental change.

be carefully impatient

The cautious approach for delivering a new feature in the infrastructure looks like this:

  1. Design the change.
  2. Implement the change(s) needed, in a new major version (e.g. Metadata-2.0).
  3. Wait for the new version to be the default everywhere.
  4. Tell users they can use it.

This is frankly terrible. Firstly, we cannot really identify ‘default everywhere’. We can identify ‘default in known distributions’, but behind-the-firewall setups may lag arbitrarily far behind. Secondly, it makes the cycle time for getting user feedback extraordinarily long: decade-plus time windows. Thirdly, as a consequence, we run a large risk of getting ahead of our users and delivering fixes and improvements that are less good than they would be if users were running our latest things and giving us feedback.

So here is how I think we should deliver things instead:

  1. Design the change with specific care that it fails closed and is opt-in.
  2. Implement the change(s) needed, in a new minor version of the tools.
  3. Tell users they can use it.

So, why do I think we can skip waiting for it to be a default?

pip, wheel and setuptools are just as able to be updated as any other Python component. If someone is installing (say) numpy via pip (or easy-install), then by definition they are willing to use things from PyPI, and pip and setuptools are in that category.

And if they are not installing via pip, then the Python packaging ecosystem does not affect them.

If we have opt-in as a design principle, then the adoption process will be bottom up: projects that are willing to say to their users ‘you need new versions of pip and setuptools’ can do so, and use the feature immediately. Projects that want to support users installing their packages with pip but aren’t willing to ask that they also upgrade their pip can hold off.

If we have fails-closed as a design principle, then when a project has opted in, and the user installing the package hasn’t upgraded their pip, things will at least fail rather than silently doing the wrong thing.

I had experience of this in Mock recently: the 1.1.0 and up releases depended on setuptools 17.1. The minimum setuptools we could have depended on (while publishing wheels) was still newer than that in Ubuntu Precise (not to mention RHEL!), so we were forcing an upgrade regardless.

This worked OK, but we had two significant issues. Firstly, folk with incorrect Python paths can end up shadowing system-installed packages, and for some reason ‘six’ triggered this for multiple users. Secondly, we made a number of different attempts to clearly signal the dependency, as the new features we were using did not fail closed: they were silently ignored by sufficiently old setuptools.

We ended up with a setup_requires="setuptools>17.1" clause in setup.py, which we’re hopeful will fail, or Just Work, consistently.
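For concreteness, a minimal sketch of what such a declaration looks like in a setup.py (illustrative only – mock’s real setup.py carries much more than this):

    from setuptools import setup

    setup(
        name='example-project',  # hypothetical project name
        version='1.1.0',
        # Old setuptools silently ignores keywords it doesn't understand, so
        # declaring the minimum here is intended to either fail loudly or
        # Just Work, rather than producing a silently broken install.
        setup_requires=['setuptools>17.1'],
    )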

Improving dependency handling upstream (for openstack)

This is, in part, a follow-up to my post a few weeks ago.

I want to touch on the things we need to improve to have robust plumbing supporting openstack’s CI and devstack needs.

Extras

We want to be able to use ‘extras’ to declare the dependencies needed for different backends. This is a setuptools feature whereby a project can advertise additional dependencies for different use cases, which users (or other depending projects) can then trigger using '[]'. E.g. 'pip install requests[security]' says ‘install requests and the additional ‘security’ extras’. We don’t know yet whether we will use 'nova[mysql]' or 'nova oslo.db[mysql]', but something like that. To use this we need to:

  1. teach pbr about reflecting requirements into the 'extras_require' keyword to setup() (because while setuptools supports it in setup.py, we want a constant-value setup.py with everything about individual projects declarative). James Polley has a patch for pbr.
  2. Fix pip to handle 'pip install ./nova[mysql]'. This is issue 1236 – which has an open PR that may fix it. We should help review and test it.

Testing different setups may well need a similar facility, but it’s not clear yet how best to express that. We may need to standardise on using an extra called 'test' and just ensure our tox.ini knows to install that. That would be nice anyway, to get away from having to know about 'test_requirements.txt'.
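As an illustration of the sort of declaration this enables (the project name, extras, and dependency lists below are hypothetical – the real nova/oslo.db split is still undecided), the extras_require keyword to setup() looks like this:

    from setuptools import setup

    setup(
        name='nova',  # hypothetical – the extras may live in oslo.db instead
        version='12.0.0',
        install_requires=['oslo.db'],
        extras_require={
            # 'pip install nova[mysql]' additionally pulls in the MySQL driver.
            'mysql': ['PyMySQL'],
            'postgresql': ['psycopg2'],
            # A standardised 'test' extra that tox could install.
            'test': ['testtools', 'testrepository'],
        },
    )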

pip dependency resolution

Currently pip has a very straightforward resolution algorithm: only user-supplied requirements can conflict at all, and the first mention of any distribution causes a version to be selected that matches that mention – all other mentions are simply ignored. This is issue 988, and it’s one of a cluster of issues that affect OpenStack. The impact on OpenStack is that things install OK with pip, and then break in CI, because an incompatible version is installed. I have a patch up for this. Early adopters solicited!
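To make the failure mode concrete, here is a toy sketch of the first-mention-wins behaviour described above (purely illustrative – this is not pip’s code):

    # Toy model of the behaviour described above – not pip's actual resolver.
    # Each requirement is (name, predicate); 'available' maps a name to the
    # candidate versions on the index, newest first.

    def naive_resolve(requirements, available):
        chosen = {}
        for name, is_ok in requirements:
            if name in chosen:
                continue  # later mentions are simply ignored
            chosen[name] = next(v for v in available[name] if is_ok(v))
        return chosen

    available = {'oslo.db': ['1.12', '1.11', '1.10', '1.9']}
    requirements = [
        ('oslo.db', lambda v: True),                  # first mention: anything goes
        ('oslo.db', lambda v: v.startswith('1.10')),  # later, stricter constraint
    ]
    # The first mention wins, so '1.12' is chosen and the second constraint is
    # never even consulted – the conflict only surfaces later, at runtime.
    print(naive_resolve(requirements, available))  # {'oslo.db': '1.12'}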

incremental installations need dependency resolution

Say you’ve installed Neutron, which depends on oslo.db >=1.10, and you then install an older Nova which depends on oslo.db <1.10. What should happen? Ideally an error in this case, because the requirements are disjoint; and if they do overlap, the installed version should be adjusted to be compatible. Right now, no error occurs and oslo.db will be downgraded, breaking Neutron. This is pip issue 2687. Currently no-one is working on this, and since it requires dependency resolution, fixing 988 first makes a lot of sense. It should be possible to at least make things error with a much shallower patch, though, if someone wished to work on it right now – or you could build on top of my resolver branch. This has also been a cause of numerous CI failures when we do releases, typically right around the time the servers branch. One thing that might be nice for us, since we know a full set of working packages, is to be able to say upfront to pip what versions are compatible, and then let only the needed things be brought in – pip issue 2731.
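To see why an error is the right outcome, note that the two constraints on oslo.db have an empty intersection – easy to check with the packaging library (a sketch of the check, not something pip currently does):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    neutron_needs = SpecifierSet(">=1.10")   # from the installed Neutron
    old_nova_needs = SpecifierSet("<1.10")   # from the older Nova being installed

    combined = neutron_needs & old_nova_needs  # equivalent to ">=1.10,<1.10"

    candidates = [Version(v) for v in ("1.9", "1.10", "1.11")]
    print([v for v in candidates if v in combined])
    # [] – no version satisfies both, so an error beats silently downgrading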

PEP-426 environment markers need polish

PEP-426 introduced a micro-language for describing the situations when a particular dependency applies. For instance, to use argparse on Python < 2.7, you can say "python_version<'2.7'" as a marker for the argparse entry in your requirements.
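In plain Python terms, that marker is roughly equivalent to the following check (an illustrative sketch of what the marker expresses, not how the tools evaluate it):

    import sys

    # What "python_version<'2.7'" asks for: only pull in argparse on
    # interpreters that predate its addition to the standard library.
    needs_argparse = sys.version_info < (2, 7)

But there are some rough edges: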

  • Some comparison operators are missing.
  • The documentation and user guidance needs improvement.
  • Environment markers can’t be used inside individual requirements, only as a filter on extras_require. To express the argparse example above today (using a working operator), you need to pass the following to setup().
    extras_require={':python_version=="2.6"': ['argparse']}

    It would be more straightforward to permit the syntax pip supports, where each requirement can be annotated with a marker.

    install_requires=['argparse:python_version=="2.6"']

    This might be setuptools issue 353.

  • pbr doesn’t reflect environment markers from its input files (requirements.txt etc) into the setup() keyword arguments. James Polley has a patch for this (the same one enabling extras support in setup.cfg).

pip handling setup_requires

We run into setup_requires in two places in OpenStack: firstly, we use it ourselves for pbr, but to avoid triggering easy_install we manually pre-install pbr everywhere; secondly, projects in the transitive dependencies of OpenStack use setup_requires, and we end up triggering easy_install for them. easy_install is a concern for us because of its decreased reliability, issues with corporate egress firewalls, and security that is not as robust as pip’s – and there’s no reason it should be, with pip being such a good tool.

However, pip can’t handle setup_requires today. Doing so requires changes to both setuptools and pip.

  • setuptools needs some way to report to pip what the setup_requires are without triggering easy_install. Ronny Pfannschmidt has mentioned he may be working on this, but I’m not sure if there is a patch ready or not. A possible further enhancement would be to put the setup_requires in setup.cfg in a totally declarative fashion, but this may require environment marker support first, since the current procedural approach is very flexible and can take Python version and platform into account.
  • pip needs to be able to temporarily put things that it won’t be installing onto the PYTHONPATH for packages it is building. The current internals are not suited for this (the target, the source, and the needs of requirements being downloaded are all conflated). However, once my resolver patch lands, there will be a nice cache layer that can deliver a ready-to-install directory for any requirement, which should make a simple recursive implementation quite reasonable. The resolver work will probably need further refactoring to decouple the resolver from the user-supplied requirements, but compared to the ground already covered, that should be straightforward. One thing folk tackling this should be aware of is an open question around location requirements. Say someone is installing foo from a git repository, and foo is also a setup requirement of some other package bar being installed at the same time. Should that foo from git be used for the setup of bar? I’m not sure of the answer (what if the version of foo is incompatible with the version bar needs?) – but one is needed :).

So that’s about it – if you’re interested in helping the plumbing that supports OpenStack’s CI and devstack systems, please pick one of these issues and help out. Test patches, review code, write a patch, or just tell me why we don’t need to do something 🙂