The merits of (careful) impatience

The Python packaging ecosystem has long desired an overhaul and the implementation of already-designed features, but adoption of those features often stalls.

I think it’s time to propose a guiding principle for incremental change:

be carefully impatient

The cautious approach for delivering a new feature in the infrastructure looks like this:

  1. Design the change.
  2. Implement the change(s) needed, in a new major version (e.g. Metadata-2.0).
  3. Wait for the new version to be the default everywhere.
  4. Tell users they can use it.

This is frankly terrible. Firstly, we cannot really identify ‘default everywhere’. We can identify ‘default in known distributions’, but behind-the-firewall setups may lag arbitrarily far behind. Secondly, it makes the cycle time for getting user feedback extraordinarily long: decade-plus time windows. Thirdly, as a consequence, we run a large risk of running ahead of our users and delivering worse fixes and improvements than we would if they were using our latest work and giving us feedback.

So here is how I think we should deliver things instead:

  1. Design the change with specific care that it fails closed and is opt-in.
  2. Implement the change(s) needed, in a new minor version of the tools.
  3. Tell users they can use it.

So, why do I think we can skip waiting for it to be a default?

pip, wheel and setuptools are just as easy to update as any other Python component. If someone is installing (say) numpy via pip (or easy_install), then by definition they are willing to use things from PyPI, and pip and setuptools are in that category.

And if they are not installing via pip, then the Python packaging ecosystem does not affect them.

If we have opt-in as a design principle, then the adoption process will be bottom up: projects that are willing to say to their users ‘you need new versions of pip and setuptools’ can do so, and use the feature immediately. Projects that want to support users installing their packages with pip but aren’t willing to ask that they also upgrade their pip can hold off.

If we have fails-closed as a design principle, then when a project has opted in, and the user installing the package hasn’t upgraded their pip, things will at least fail rather than silently doing the wrong thing.

I had experience of this in Mock recently: the 1.1.0 and up releases depended on setuptools 17.1. The minimum setuptools we could have depended on (while publishing wheels) was still newer than that in Ubuntu Precise (not to mention RHEL!), so we were forcing an upgrade regardless.

This worked ok, but we had two significant issues. Firstly, folk with incorrect Python paths can end up shadowing system-installed packages, and for some reason ‘six’ triggered this for multiple users. Secondly, it took a number of different attempts to clearly signal the dependency, as the new features we were using did not fail closed: they were silently ignored by sufficiently old setuptools.

We ended up with a setup_requires="setuptools>17.1" clause in place, which we’re hopeful will either fail or Just Work, consistently.
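
For concreteness, here’s a rough sketch of that arrangement. The setuptools floor is the real one discussed above; everything else, including the conditional ‘six’ dependency, is illustrative rather than Mock’s exact metadata:

    # setup.py -- minimal sketch, not Mock's actual setup.py
    from setuptools import setup

    setup(
        name="mock",
        version="1.1.0",
        # The hope described above: a sufficiently old setuptools either
        # fails here or upgrades itself, rather than carrying on and
        # silently ignoring newer keywords further down.
        setup_requires="setuptools>17.1",
        # The kind of newer feature that old setuptools silently ignored:
        # a conditional dependency expressed via an environment marker.
        extras_require={
            ':python_version<"3.3"': ["six"],  # illustrative only
        },
    )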

Improving dependency handling upstream (for OpenStack)

This is, in part, a follow-up to my post a few weeks ago.

I want to touch on the things we need to improve to have robust plumbing supporting OpenStack’s CI and devstack needs.


We want to be able to use ‘extras’ to declare the dependencies needed for different backends. This is a setuptools requirement syntax where a project can advertise additional dependencies for different use cases, which users (or other depending projects) can then trigger using '[]'. E.g. 'pip install requests[security]' means ‘install requests plus its additional security extras’. We don’t know yet whether we will use 'nova[mysql]' or 'nova oslo.db[mysql]', but something like that; a sketch of the setup() side follows the list below. To use this we need to:

  1. teach pbr about reflecting requirements into the 'extras_require' keyword to setup() (setuptools supports the keyword directly in setup.py, but we want setup.py to stay constant boilerplate, with everything about individual projects declarative). James Polley has a patch for pbr.
  2. Fix pip to handle 'pip install ./nova[mysql]'. This is issue 1236 – which has an open PR that may fix it. We should help review and test it.
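
As promised, a rough sketch of the setup() side; the project name, backend split and dependency names are all illustrative, since we haven’t picked between 'nova[mysql]' and 'nova oslo.db[mysql]' yet:

    # setup.py sketch -- illustrative names only
    from setuptools import setup

    setup(
        name="oslo.db",
        install_requires=["SQLAlchemy"],   # hypothetical baseline dependency
        extras_require={
            # Backend-specific dependencies, pulled in only on request:
            #   pip install 'oslo.db[mysql]'
            # or, once issue 1236 above is fixed:
            #   pip install './oslo.db[mysql]'
            "mysql": ["PyMySQL"],
            "postgresql": ["psycopg2"],
        },
    )

With the pbr patch from item 1, the same information would come from declarative configuration rather than being hard-coded in setup.py.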

Testing different setups may well need a similar facility, but it’s not clear yet how best to express that. We may need to standardise on using an extra called 'test' and just ensure our tox.ini knows to install that. That would be nice anyway, to get away from having to know about 'test_requirements.txt'.

pip dependency resolution

Currently pip has a very straightforward resolution algorithm: only user-supplied requirements can conflict at all, and the first mention of any distribution causes a distribution to be selected that matches that mention – all other mentions are simply ignored. This is issue 988, and it’s one of a cluster of issues that affect OpenStack. The impact on OpenStack is that we have things install ok with pip, and then break in CI, because an incompatible version is installed. I have a patch up for this. Early adopters solicited!

incremental installations need dependency resolution

Say you’ve installed Neutron, which depends on oslo.db >=1.10, and you then install an older Nova which depends on oslo.db <1.10. What should happen? Ideally an error in this case, because the requirements are disjoint. And if they do overlap, the installed version should be adjusted to be compatible. Right now, no error occurs and oslo.db will be downgraded, breaking Neutron. This is pip issue 2687. Currently no-one is working on this, and since it requires dependency resolution, fixing 988 first makes a lot of sense. It should be possible to at least make things error with a much shallower patch though, if someone wished to work on it right now – or you could build on top of my resolver branch. This has also been a cause of numerous CI failures when we do releases, typically right around the time the servers branch. One thing that might be nice for us, since we know a full set of working packages, is to be able to say upfront to pip what versions are compatible, and then let only the needed things be brought in – pip issue 2731.
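
To illustrate what ‘at least make things error’ means, here’s a rough sketch of a post-install check using pkg_resources – an illustration of the idea, not pip’s implementation:

    # Walk the installed distributions and report any whose declared
    # requirements are no longer satisfied -- e.g. Neutron after oslo.db
    # was downgraded by a later install.
    import pkg_resources

    def broken_requirements():
        problems = []
        for dist in pkg_resources.working_set:
            for req in dist.requires():
                try:
                    installed = pkg_resources.get_distribution(req.project_name)
                except pkg_resources.DistributionNotFound:
                    problems.append((dist.project_name, str(req), "missing"))
                    continue
                if installed not in req:  # version outside the declared range
                    problems.append((dist.project_name, str(req), installed.version))
        return problems

    for owner, requirement, found in broken_requirements():
        print("{} requires {} but found {}".format(owner, requirement, found))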

PEP-426 environment markers need polish

PEP-426 introduced a micro-language for describing the situations when a particular dependency applies. For instance, to use argparse on Python < 2.7, you can say "python_version<'2.7'" as a marker for the argparse entry in your requirements. But there are some rough edges.

  • Some comparison operators are missing.
  • The documentation and user guidance needs improvement.
  • Environment markers can’t be used inside individual requirements, only as a filter on extras_require. To express the argparse example above today (using a working operator), you need to pass the following to setup():
    extras_require={':python_version=="2.6"': ['argparse']}

    It would be more straightforward to permit the syntax pip supports, where each requirement can be annotated with a marker; both styles are sketched after this list.


    This might be setuptools issue 353.

  • pbr doesn’t reflect environment markers from its input files (requirements.txt etc) into the corresponding setup() keyword arguments. James Polley has a patch for this (the same one enabling extras support in setup.cfg).
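
As referenced in the list, here are the two styles side by side (the project name is illustrative):

    # setup.py -- the style that works today: a marker as an extras_require key
    from setuptools import setup

    setup(
        name="example",
        extras_require={
            # A ':marker' key means "no extra name, just a condition".
            ':python_version=="2.6"': ["argparse"],
        },
    )

    # The per-requirement style pip already understands in requirements
    # files, and which would be nicer to allow directly:
    #
    #     argparse; python_version == '2.6'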

pip handling setup_requires

We run into setup_requires in two places in OpenStack: firstly, we use it ourselves for pbr, but to avoid triggering easy_install we manually install pbr everywhere ourselves. Secondly, projects that are in the transitive dependencies of OpenStack use setup_requires, and we end up triggering easy_install for them. easy_install is a concern for us because of its decreased reliability, its issues with corporate egress firewalls, and because its security is not as robust as pip’s – and there’s no reason it should be, with pip being such a good tool.

However pip can’t handle setup_requires today. Doing so requires changes to setuptools and to pip.

  • setuptools needs some way to report to pip what the setup_requires are without triggering easy_install. Ronny Pfannschmidt has mentioned he may be working on this, but I’m not sure if there is a patch ready or not. A possible further enhancement would be to put the setup_requires in setup.cfg in a totally declarative fashion, but this may require environment marker support first, since the current procedural approach is very flexible and can take Python version and platform into account (there’s a sketch of that procedural style after this list).
  • pip needs to be able to temporarily put things that it won’t be installing into the PYTHONPATH for packages it is building. The current internals are not suited for this (the target and source and needs of requirements being downloaded are all confounded). However, once my resolver patch lands, there will be a nice cache layer that can deliver a ready-to-install directory for any requirement, which should make a simple recursive implementation quite reasonable. The resolver work will probably need further refactoring to decouple the resolver from the user-supplied requirements, but compared to the ground already covered, that should be straightforward. One thing folk tackling this should be aware of is an open question around location requirements. Say someone is installing foo from a git repository, and foo is also a setup requirement of some other package bar being installed at the same time. Should that foo from git be used for the setup of bar? I’m not sure of the answer (what if the version of foo is incompatible with the version bar needs?) – but one is needed :).
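
For reference, the procedural flexibility mentioned above looks roughly like the following – which is exactly what a declarative setup.cfg entry would need environment markers to replace. The extra build dependencies here are illustrative:

    # setup.py -- computing setup_requires at run time
    import sys

    from setuptools import setup

    build_deps = ["pbr"]
    if sys.version_info < (2, 7):
        build_deps.append("ordereddict")   # illustrative py26-only build dep
    if sys.platform.startswith("win"):
        build_deps.append("colorama")      # illustrative platform-only build dep

    setup(
        setup_requires=build_deps,
        pbr=True,
    )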

So that’s about it – if you’re interested in helping with the plumbing that supports OpenStack’s CI and devstack systems, please pick one of these issues and help out. Test patches, review code, write a patch, or just tell me why we don’t need to do something :)

Dealing with deps in OpenStack

We’ve got a problem in OpenStack: dependency management.

In this post I explore it as input to the design summit session on this in Vancouver.


We have some goals that are broadly agreed:

  1. Guarantee co-installability of a single release of OpenStack
  2. Be able to deliver known-good installs of OpenStack at any point in time – e.g. ‘this is known to work’
  3. Deliver good, clear dependency metadata to redistributors
  4. Support CD deployments of OpenStack from git. Both production and devstack for developers to hack on/with
  5. Avoid firedrills in CI – both internal situations where we run incompatible things we produced, and external situations where some dependency releases a broken version, like the pycparsing one last week
  6. Deployments using the Python dependencies should be up to date and secure
  7. Support doing upgrades in the same Python environment


And we have some baseline assumptions:

  1. We cooperate with the Python ecosystem – publishing our libraries to PyPI for instance
  2. Every commit of server projects is a ‘release’ from the perspective of e.g. schema management
  3. Other things release when they release, not per-commit

The current approach uses a single global list of acceptable install-requires for all our projects, and then merges that into the git trees being tested during the test. Note in particular that this doesn’t take place for things not being tested, which we install from PyPI. We create a branch of that global list for each stable release, and we also create branches of nearly everything when we do the stable release, a system that has evolved in part due to the issues in CI when new releases would break stable releases. These new branches have tightly defined constraints – e.g. “DEP >= version-at-this-release, < next-point-release”. The idea behind this is that if the transitive closure of deps is constrained, we can install such a version from PyPI and it won’t bring in a different version.

One of the reasons we needed that was pip bug 988, where pip takes the first occurrence of a dependency, so servers would depend on oslo.utils, which would depend on an unversioned cliff or some such, and if cliff wasn’t already installed we’d get the next release’s cliff. Now – semver says we’re keeping those things compatible, but mistakes happen, and for stable branches there’s really little reason to upgrade.


We have some practical issues with the current system:

  1. It takes just one uncapped dependency anywhere in the wider ecosystem (including packages outside of OpenStack) depending on something we wanted to stay unchanged: if that dep is encountered first by the pip scanner, game over. Worse, there are components out there that introspect the installed dependencies and fail hard if one is not listed as compatible, which turns a ‘testing with an unexpected version’ situation into a hard error
  2. We have to run stable branches for everything, even things like OpenStackClient which are intended for end users, and are aimed at a semver rather than branched release model
  3. Due to pip bug 2687, each time we call pip we may introduce the skew that breaks the gate
  4. We don’t deliver goal 1:- because we override the requirements at test time, the actual co-installability may be different, and we don’t know
  5. We deliver goal 2 but it’s hard to use:- you have to dig through a specific CI log, and if the CI system has pruned it, you’re toast
  6. We don’t avoid external firedrills:- because most of our external dependencies are broad, external releases break us trivially and frequently
  7. Lastly, our requirements are too tight to support upgrades: if bug 2687 were fixed, installing the first upgraded server component would error because its requirements are declared as being incompatible with the last release.

We do deliver goals 3, 4 and 6 though, which is good.

So what can we do differently? In an ideal world, can we get all seven goals?


I think we can. Here’s one way it could work:

  1. We fix the two pip bugs above (I’m working on that now)
  2. We teach pip about constraints – versions to use *if* something is requested, without actually requesting it
  3. We change our project overrides in CI to use a single constraints file rather than merging into each project’s requirements
  4. The single constraints file would be exactly specified: “DEP == VERSION”, not semver ranges or compatible-release matches.
  5. We make changes to the single constraints file by running tests against a proposed set of constraints
  6. We find out that we should change the constraints file by having a periodic task which compares the constraints file to the published versions on PyPI and proposes changes to the constraints repository automatically (a sketch of such a comparison follows this list)
  7. We loosen up the constraints in all our release branches to permit upgrade co-installability
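
Here is a sketch of the periodic comparison from step 6 – it assumes the exact ‘DEP==VERSION’ format from step 4, uses PyPI’s JSON API, and leaves out the part that actually proposes the change for review:

    # Compare an exact-pin constraints file against the latest releases on
    # PyPI and report pins that could be proposed for update.
    import json
    import urllib.request

    def latest_version(name):
        url = "https://pypi.org/pypi/{}/json".format(name)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["info"]["version"]

    def stale_pins(constraints_path):
        with open(constraints_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                name, _, pinned = line.partition("==")
                latest = latest_version(name)
                if latest != pinned:
                    yield name, pinned, latest

    for name, pinned, latest in stale_pins("constraints.txt"):  # illustrative path
        print("{}: {} -> {}".format(name, pinned, latest))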

And some optional bits…

  1. We could start testing new-library old-servers again
  2. We could potentially change our branching strategy for non-server components, but I don’t think it harms things – it may just be unnecessary
  3. We could add periodic jobs for testing with unreleased versions of dependencies

Working through each point: bug 988 causes compatible requirements to be ignored – if we have one constraint of “X > 1.4” and another of “X > 1.3, !=1.5.1” but the “> 1.4” constraint is encountered first, we can end up with 1.5.1 installed, a version one of the requirements explicitly excludes. Fixing this means that rather than having to have global knowledge of deps at the point where pip is being entered, we can have local knowledge about compatible versions in each package, and as long as the union of requirements is satisfiable, we’ll be ok. Bug 2687 causes the constraints that thing A had when it was installed by pip to be ignored by the requirements checking for thing B. For instance, ‘pip install python-openstackclient’ after ‘pip install nova’ will meet python-openstackclient’s requirements even if that means breaking nova’s requirements.
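
The bug 988 half of that, made concrete with the ‘packaging’ library (a sketch of what combining mentions means, not pip’s code):

    from packaging.specifiers import SpecifierSet

    first = SpecifierSet(">1.4")            # encountered first: wins today
    second = SpecifierSet(">1.3,!=1.5.1")   # silently ignored today

    print("1.5.1" in first)             # True  -- so 1.5.1 can get installed
    print("1.5.1" in (first & second))  # False -- the union excludes it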

The reason we can’t just use a requirements file today is that a requirements file specifies what needs to be installed as well as what versions are acceptable. We don’t want devstack, when configured for nova-network, to install neutron dependencies – but it would today, unless we put a bunch of complex processing logic in place, whereas pip could do this very easily internally.
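
The distinction is small enough to sketch in a few lines (this is the semantics pip would need internally, not an existing pip API):

    # Constraints narrow versions but never add to the set of things
    # installed; requirements do both.
    from packaging.specifiers import SpecifierSet

    def effective_requirements(requested, constraints):
        """requested and constraints map project names to SpecifierSets."""
        return {
            name: spec & constraints.get(name, SpecifierSet())
            for name, spec in requested.items()
            # Anything appearing only in `constraints` (e.g. neutron's deps
            # on a nova-network devstack) is simply never installed.
        }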

Merging each requirement into the things we’re installing from git fails when we install releases – e.g. of client libraries – in particular because of the interactions with bug 988 above. A single constraints file could include all known-good versions of everything we might use, and would apply globally in concert with local project requirements. Best of both worlds, in theory :)

The use of inexact versions is a hard limitation today – we can’t upgrade multiple project trees’ local version needs atomically, and because we’re supplying all the version constraints in one place – the project’s merged install_requirements – they have to be broad enough to co-exist during changes to the requirements, and to remain co-installed during upgrades from release to release of OpenStack. But inexact versions lead to variation in CI – every single run becomes a gamble. The primary goal of CI is to tell us whether a new commit X meets all of our quality criteria – change one thing at a time. Running with every new version of every dependency doesn’t tell us more about X, it tells us about ecosystem things. Using exact constraints will solve this: we’ll decouple ‘update dependencies’ or ‘pycparsing Y is broken’ from testing X – e.g. ‘improve nova cells’.

We need to be able to update those dependencies though, and the existing global requirements mechanisms are pretty much right; they just need to work with a constraints file instead of patching each repo at test time. We will still want to check that the local requirements are compatible with the global constraints file.
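
That check is cheap to sketch with the ‘packaging’ library, assuming one requirement per line and exact ‘DEP==VERSION’ pins as described above (an illustration, not the actual gate job):

    from packaging.requirements import Requirement
    from packaging.version import Version

    def incompatible_pins(project_requirements, global_constraints):
        """Yield (name, pin, range) where a global pin falls outside the
        project's local requirement range; both arguments are lists of
        requirement lines."""
        pins = {}
        for line in global_constraints:
            req = Requirement(line)
            # Exact pins only: "oslo.db==1.10.0" -> Version("1.10.0")
            pins[req.name.lower()] = Version(next(iter(req.specifier)).version)
        for line in project_requirements:
            req = Requirement(line)
            pin = pins.get(req.name.lower())
            if pin is not None and pin not in req.specifier:
                yield req.name, str(pin), str(req.specifier)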

One of the big holes such approaches have is that we may miss out on important improvements – security, performance or just plain old features – if we don’t update our constraints. So we need to be on top of that. A small amount of automation can give us a lot of assistance on that. Just try the new versions and if they work – great. If they don’t, show a failing proposal where we can assess what to do.

As I mentioned earlier, today we can’t actually upgrade: kilo’s version locks exclude liberty versions of our libraries, meaning that trying to upgrade nova/kilo to nova/liberty will bring in library versions that conflict with the version deps neutron expresses. We need to open up the project-local requirements to avoid this – and we also need to make some guarantees about compatibility with our prior release in our library development (otherwise rebooting a server with only one component upgraded will be a gamble).

Making those guarantees will either require testing every commit against the prior server, or, if we can find some way of doing it, testing proposed releases against the prior servers – which would allow more latitude during development of our libraries. The use of constraints files will give us hermetic insulation against bad releases though – we’ll be able to stay productive while we fix the issue and issue a new, better release. The crucial thing is to have a tight feedback loop – so I’m in favour of us either testing each commit against last-stable, or figuring out the ‘tests before releases’ logic (perhaps by removing direct tag access and instead proposing the intent to release as a review).

All this might be enough that we choose to make fewer stable branches of libraries and go back to plain semver – but it’s not a requirement: that’s something we can discuss in detail if people care, or just wait and see what the overheads and benefits of keeping those branches are.

Lastly, this new structure will make it possible, if we want to, to test that unreleased versions of external dependencies work with a given component, by using a periodic job. Why periodic? There are two sides to each dependency, and neither side would want their gate to wedge if an accident breaks the other side. E.g. using two of our own components – oslo.messaging and nova: oslo.messaging releases must not break nova, but an individual oslo.messaging commit isn’t necessarily constrained (if we have the before-release testing described above). External dependencies are exactly the same, except even less closely aligned than intra-OpenStack components. So running tests with a git version of e.g. libvirt in a periodic job might give us (and libvirt) valuable prior warning about issues.

Why platform specific package systems exist and won’t go away

A while back mdz blogged about challenges facing Ubuntu and other Linux distributions. He raised the point that runtime libraries for Python / Ruby etc have a unique set of issues because they tend to have their own packaging systems. Merely a month later he attended DebConf 2010, where a presentation was given on the issues that Java packages have on dpkg-based systems. Since then the conversation seems to have dried up. I’ve been reminded of it recently in discussions within Canonical looking at how we deploy web services.

Matt suggested some ways forward, including:

  • Decouple applications from the core
  • Treat data as a service (rather than packages) – get data live from the web rather than going web -> distro-package -> user machines.
  • Simplify integration between packaging systems (including non-packaged things)

I think it’s time we revisit and expand on those points. Nothing much has changed in how Ubuntu or other distributions approach integration with other packaging systems… but the world has kept evolving. Internet access is growing ever more ubiquitous, more platforms are building packaging systems – Clojure, Scala, node.js, to name but three – and a substantial and ever-growing number of products expect to operate in a hybrid fashion, with an evolving web service plus a local client which is kept up to date via package updates. Twitter, Facebook and Google Plus are three such products. Android has demonstrated a large-scale app store on top of Linux, with its own custom packaging format.

In order to expand on them, we need some background context on the use cases that these different packaging systems need to support.

Platforms such as antivirus scanners, node.js, Python, Clojure and so forth care a great deal about getting their software out to their users. They care about making it extremely easy to get the latest and greatest versions of their libraries. I say this because the evidence is all around us: every successful development community / product has built a targeted package management system which layers on top of Windows, and Mac OSX, and *nux. The only rational explanation I can come up with for this behaviour is that the lower-level operating system package management tools don’t deliver what they need. And this isn’t as shallow as wanting a packaging system written in their own language, which would be easy to write off as parochialism rather than a thoughtful solution to their problems.

In general, packaging systems provide a language for shipping source or binary form, from one or more repositories, to users’ machines. They may support replication, and they may support multiple operating systems. They generally end up as graph traversal engines, pulling in dependencies of various sorts – you can see the DOAP specification for an attempt at generic modelling of this.

One problem that turns up rapidly when dealing with Linux distribution package managers is that the versions upstream packages have, and the versions a package has in e.g. Debian, differ. They differ because at some stage, someone will need to do a new package for the distribution when no upstream change has been made. This might be to apply a local patch, or it might be to correct a defect caused by a broken build server. Whatever the cause, there is a many-to-one relationship between the package versions that end users see via dpkg / rpm etc, and those that upstream ship. It is a near certainty that once this happens to a library package, comparing package versions across different distribution packages becomes hard. You cannot reliably infer whether a given package version is sufficient as a dependency or not when comparing binary packages between Red Hat and Debian. Or Debian and Ubuntu. The result of this is that even when the software (e.g. rpm) is available on multiple distributions (say Ubuntu and RHEL), or even on multiple operating systems (say Ubuntu and Windows), many packages will *have* to be targeted specifically to build and execute properly. (Obviously, compilation has to proceed separately for different architectures; it’s more the dependency metadata that says ‘and build with version X of dependency Y’ that has to be customised.)

The result of this is that there is to the best of my knowledge no distribution of binary packages that targets Debian/Ubuntu and RHEL and Suse and Windows and Mac OS X, although there are vibrant communities building distributions of and for each in isolation. Some of the ports systems come close, but they are still focused on delivering to a small number of platforms. There’s nothing that gives 99% coverage of users. And that means that to reach all their users, they have to write or adopt a new system. For any platform X, there is a strong pressure to have the platform be maintainable by folk that primarily work with X itself, or with the language that X is written in. Consider Python – there is strong pressure to use C, or Python, and nothing else, for any tools – that is somewhat parochial, but also just good engineering – reducing variables and making the system more likely to be well maintained. The closest system I know of – Steam – is just now porting to Ubuntu (and perhaps Linux in general), and has reached its massive popularity by focusing entirely on applications for Windows, with Mac OSX a recent addition.

Systems like PyPI which have multi-platform eggs do target the wide range of platforms I listed above, but they do so both narrowly and haphazardly: whether a binary or source package is available for a given platform is up to the maintainer of the package, and the packages themselves are dealing with a very narrow subset of the platform’s complexity: Python provides compilation logic, but they don’t create generic C libraries with stable ABIs for use by other programs, and they don’t have Turing-complete scripts for dealing with configuration file management and so forth. Anti-virus updaters similarly narrow the problem they deal with, and add constraints on latency – updates of anti-virus signatures are time-sensitive when a new rapidly spreading threat is detected.

A minor point, but one that adds to the friction of considering a single packaging tool for all needs, is the difference between the use cases of low-level package management tools like dpkg or rpm and the use cases that e.g. PyPI has. A primary use case for packages on PyPI is for them to be used by people that are not machine administrators. They don’t have root, and don’t want it. Contrast that with dpkg or rpm, where the primary use case (to date) is the installation of system-wide libraries and tools. Things like man page installation don’t make any sense for non-system-wide package systems, whereas they are a primary feature for e.g. dpkg.

In short, the per-platform/language tools are (generally):

  1. Written in languages that are familiar to the consumers of the tools.
  2. Targeted at use on top of existing platforms, by non-privileged users, and where temporary breakage is fine.
  3. Intended to get the software packaged in them onto widely disparate operating systems.
  4. Very narrow – they make huge assumptions about how things can fit together, which their specific language/toolchain permits, and don’t generalise beyond that.
  5. Don’t provide for security updates in any specific form: that is left up to folk that ship individual things within the manager.

Operating system package managers:

  1. Are written in languages which are very easy to bootstrap onto an architecture, and to deploy onto bare metal (as part of installation).
  2. Designed for delivering system components, and to be able to upgrade the toolchain itself safely.
  3. Originally built to install onto one operating system; ports to other operating systems are usually fragile and only adopted in niches.
  4. Are hugely broad – they install data, scripts, binaries, and need to know about late binding, system caches etc for every binary and runtime format the operating system supports
  5. Make special provision to allow security updates to be installed in a low latency fashion, without requiring anything consuming the package that is updated to change [but usually force-uninstalling anything that is super-tightly coupled to a library version].

Anti-virus package managers:

  1. Exist to update daemons that run with system wide escalated privileges, or even file system layer drivers.
  2. Update datasets in realtime.
  3. Do not permit updates that are produced by third parties.

Given that, let’s look at the routes Matt suggested…

Decoupling applications from the core as a strategy makes an assumption – that the core and applications are partitionable. If they are not, then applications and the core will share common elements that need to be updated together. Consider, for instance, a Python application. If you run with a system-installed Python, and it is built without zlib for some reason, but the Python application requires zlib, you have a problem. A classic example of this problem is facing Ubuntu today, with all the system-provided tools moving to Python 3, but vast swathes of Python applications still not ported to Python 3 at all. Currently, the Python packaging system – virtualenv/buildout + distribute – doesn’t provide a way to install the Python runtime itself, but will happily install its own components for everything up the stack from the runtime. Ubuntu makes extensive use of Python for its own tools, so the system Python has a lot of packages installed which buildout etc cannot ignore – this often leads to issues with e.g. buildout, when the bootstrap environment has (say) zope.interfaces, but it’s then not accessible from the built-out environment that disables the standard sys.path (to achieve more robust separation).

If we want to pursue decoupling, whether we build a new package manager or use e.g. virtualenv (or gem or npm or …), we’ll need to be aware of this issue – and perhaps offer, for an extended time, a dedicated no-frills, no-distro-packages install to avoid it, and to allow an extended support period for application authors without committing to a massive, distro-sponsored porting effort. While it’s tempting to say we should install pip/npm/lein/maven and other external package systems, this is actually risky: they often evolve sufficiently fast that Ubuntu will be delivering an old, incompatible version of the tool to users well before Ubuntu goes out of support, or even before the next release of Ubuntu.

Treating data as a service. All the cases I’ve seen so far of applications grabbing datasets from the web have depended on web infrastructure for validating the dataset. E.g. SSL certificates, or SSL + content checksums. Basically, small self-rolled distribution systems. I’m likely ignorant of details here, and I depend on you, dear reader, to edumacate me. There is potential value in having data repackaged, when our packaging system has behind-firewall support and the ad hoc system that (for instance) a virus scanner has does not. In this case, I specifically mean the problem of updating a machine which has no internet access, not even via a proxy. The challenge as I see it is again the cross-platform issue: the vendor will be supporting Ubuntu + Debian + RHEL + Suse, and from their perspective it’s probably cheaper to roll their own solution than to directly support dpkg + rpm + whatever Apple offer + Windows – the skills to roll an ad hoc distribution tool are more common than the skills to integrate closely with dpkg or rpm…

What about creating a set of interfaces for talking to dpkg / rpm / the system packagers on Windows and Mac OS X? Here I think there is some promise, but it needs – as Matt said – careful thought. PackageKit isn’t sufficient, at least today.

There are, I think, two specific cases to cater to:

  1. The anti-virus / fresh data set case.
  2. The egg/gem/npm specific case.

For the egg/gem/npm case, we would need to support a pretty large set of common functionality, on Windows / Mac OS X / *nux (because otherwise upstream won’t adopt what we create: losing 90% of their users (Windows) or 5% (Mac) isn’t going to be well accepted :) ). We’d need to support multiple installations (because of mutually incompatible dependencies between applications), and we’d need to support multiple language bindings in some fashion – some approachable fashion where the upstream will feel capable of fixing and tweaking what we offer. We’re going to need to support offline updates, replication, local builds, local repositories, and various signing strategies – to match the various tradeoffs made by the upstream tools.

For the anti-virus / fresh data case, we’d need to support a similar set of operating systems, though I strongly suspect that there would be more tolerance for limited support – in that most things in that space either have very platform specific code, or they are just a large-scale form of the egg/gem/npm problem, which also wants easy updates.

What next?

We should validate this discussion with at least two or three upstreams. Find out what’s missing – I suspect a lot – and what’s wrong – I hope not much :). Then we’ll be in a position to decide if there is a tractable, widespread solution *possible*.

Separately, we should stop fighting with upstreams that have their own packaging systems. They are satisfying different use cases than our core distro packaging systems are designed to solve. We should stop mindlessly repackaging things from e.g. eggs to debs, unless we need that specific thing as part of the transitive runtime or buildtime dependencies for the distribution itself. In particular, if we folk that build system packaging tools adopt and use the upstream application packaging tools, we can learn in a deep way the (real) advantages they have, and become more able to reason about how to unify the various engineering efforts going into them – and perhaps even eventually satisfy them using dpkg/rpm on our machines.

Debianising with bzr-builddeb

bzr-builddeb is very nice, but it can be very tricky to get started. I recently did a fresh debianisation of a project that is in bzr upstream, and I thought I’d record the recipe to make it work (at least until the various bugs making it hard are fixed).

Assuming that the upstream uses bzr, it goes like this:

  1. Start with a branch that is close to the code you want to Debianise. E.g. if the release was off trunk, 3 commits back: bzr branch trunk -r -3 debian
  2. Debianise as normal: put the tarball with the right name in the parent dir, add a debian directory and fiddle until you build a package you’re happy with. Don’t commit while doing this.
  3. Build a source package – debuild -S, or bzr builddeb -S
  4. Revert your changes – bzr revert.
  5. Import the dsc – bzr import-dsc ../*.dsc
  6. Now, you may find that some dot files, such as .bzrignore, have been discarded inappropriately (there is a bug open on this). If that happened, keep going. Otherwise, you’re done: you can now use merge-upstream on future upstream releases, and debcommit etc.
  7. bzr uncommit
  8. bzr revert .bzrignore (and any other files that you want to get back)
  9. debcommit
  10. All done, see point 6 for details.


Why upstreams should do distribution packaging

Software comes in many shapes and styles. One of the problems the author of software faces is distributing it to their users.

As distributors we should not discourage upstreams that wish to generate binary packages themselves; rather, we should cooperate with them, and ideally they will end up maintaining their stable release packages in our distributions. Currently the Debian and Ubuntu communities have a tendency to actively discourage this by objecting when an upstream software author includes a debian/ directory in their shipped code. I don’t know if Red Hat or Suse have similar concerns, but for the dpkg toolchain, the presence of an upstream debian directory can cause toolchain issues.

In this blog post, I hope to make a case that we should consider the toolchain issues bugs rather than just-the-way-it-is, or even features.

To start at the beginning, consider the difficulty of installing software: the harder a piece of software is to install, the more important it has to be before a user will jump through the hoops to install it.

Thus projects which care about users will make it easy to install – and there is a spectrum of ease. At one end,

checkout from version control, install various build dependencies like autoconf, gcc and so on

through to

download and run this installer

Now, some software authors get lucky: someone else makes it easy to install their software by making binary packages, so that users can simply do

apt-get install product

Now, some platforms like Mac OS X and Microsoft Windows really do need an installer, but in the Unix world we generally have packaging systems that can track interdependencies between libraries, download needed dependencies automatically, perform uninstalls and so on. Binary packaging in a Linux distribution has numerous benefits, including better management of security updates (because a binary package can sensibly use shared libraries that are not part of the LSB).

So given the above, it’s no surprise to me to see the following sort of discussion on #ubuntu-motu:

  1. upstream> Hi, I want to package product.
  2. developer> Hi, you should start by reading the packaging guide
  3. (upstream is understandably daunted – the packaging guide is a substantial amount of information, but only a small fraction is needed to package any one product.)

or (less usefully)

  1. upstream> Hi, I want to package product.
  2. developer> If you want to contribute, you should start with existing bugs
  3. upstream> But I want to package product.

Another conversation, which I think is very closely related is

  1. developer> Argh, product has a debian dir, why do they do this to me?!

The reasons for this should be pretty obvious at this point:

  • Folk want to make their product easy to install and are not themselves DDs, DMs or MOTUs.
  • So they package it privately – such as in a PPA, or their own archive.
  • When they package it, they naturally put the packaging rules in their source tree.

Now, why should we encourage this, rather than ask the upstream to delete their debian directory?

Because it lets us, distributors, share the packaging effort with the upstream.

Upstreams that are making packages will likely be doing this for betas, or even daily builds. As such they will find issues related to new binaries, libraries and so on well in advance of their actual release. And if we are building on their efforts, rather than discarding them, we can spend less time repeating what they did and more time packaging other things.

We can also encourage the upstream to become a maintainer in the distro and do their own uploads: many upstreams will come to this on their own, but by working with them as they take their early steps we can make this both more likely and easier.