Is OpenStack’s mission broken?

tl;dr:

  1. Betteridge’s law applies.
  2. Ease of development is self-inflicted and not mission creep.
  3. Ease of use is self-inflicted and not mission creep.
  4. Ease of operations is self-inflicted and not mission creep.
  5. I have concrete suggestions for 2/3/4, but to avoid writing a whole book I’m just going to tackle (2) today.

Warning: this is a little ranty. It’s not aimed at any individual; it just crystallised out after a couple of years focused on different things, and was seeded by Jay Pipes when he recently put up a strawman about two related discussions that we haven’t really had as a community:

  1. What should the scope of OpenStack’s mission be?
  2. A technical proposal for ‘mulligan’, a narrowly defined new mission.

And yes, I know that OpenStack has incredible velocity. Just imagine what it could be if the issues I describe didn’t exist.

So is it the mission?

I think OpenStack has lots of “issues”, to use the technical term, across, well, everything. I don’t think the mission is even slightly related to the problems though.

The mission has ultimately just brought a huge number of folk together with the idea that they might produce a thing that can act like a cloud.

This has been done before, by organisations like AWS, Microsoft and Google, and by smaller players like Digital Ocean and Rackspace (before OpenStack).

I reject the idea that having such a big, hairy, inclusive mission is a problem.

We can be more rigorous about that though: if a smaller mission would structurally prevent a given issue, then it’s the mission that is the problem. Otherwise, it’s not.

I do think the mission is somewhat ridiculous, but there’s a phrase in some companies: a company’s mission defines what it doesn’t do, not what it does.

And I think the current OpenStack mission does that quite well: there are two basic filters that can be applied, and unless at least one matches, it’s out of scope for OpenStack.

  • Can you get $thing from a Public Cloud?
  • Do you uniquely need $thing to run a Cloud?

And yes, there are a billion things in the grey cloud around the edge.

Know what else has this problem? Linux. Well over 3/5 of its code is in that grey edge: roughly 170MB of core code, 130MB of architecture support, and 530MB of drivers. x86 + ARM account for 50MB of that 130MB of architecture support.

Linux’s response has been dramatically different to ours though. They have a single conceptual project being built, with enormous configurability in how it’s deployed. We’ve decided that we’re building a billion different things under the same umbrella, and that comes down to a cultural norm.

Cultural norms and silos

Concretely, Swift and Nova, the two original projects, have never conceptually regarded themselves as one project.

Should they?

I honestly don’t know :). But by not being one project (with enormous configurability in how it’s deployed), we set a cultural expectation in OpenStack that variation in workload implied a new project and a new codebase.

Every split-out takes years to accomplish – both the literal ones like Glance, and the moral ones like Neutron.

The lines for the split-outs are drawn inconsistently.

To illustrate this, ask yourself: what manages a node in an OpenStack cloud? What’s the component responsible for working with the machine’s actual resources, reporting usage, registering with service discovery, performing health and liveness checks, and so on?

In a clean slate architecture you might design a single agent, and then make it extensible/modular. OpenStack has many separate agents, one per siloed team.
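
To make that concrete, here’s a minimal sketch of what such a single agent could look like. Everything in it is hypothetical – none of these interfaces exist in OpenStack today – it just illustrates one process owning heartbeats and resource reporting, with per-domain logic plugged in as modules:

    import time
    from abc import ABC, abstractmethod


    class NodeResourcePlugin(ABC):
        """One plugin per resource domain, replacing a whole per-project agent."""

        name: str  # e.g. "compute", "network", "storage"

        @abstractmethod
        def inventory(self) -> dict:
            """Report this domain's resources (vCPUs, ports, volumes, ...)."""

        @abstractmethod
        def healthy(self) -> bool:
            """Domain-specific health check."""


    class NodeAgent:
        """Hypothetical unified node agent: one heartbeat for the whole node."""

        def __init__(self, registry_url, plugins):
            self.registry_url = registry_url  # imagined service-discovery endpoint
            self.plugins = plugins

        def heartbeat(self):
            # A single liveness/usage report covering every domain, instead of
            # one agent (and one config, one package, one team) per silo.
            return {
                "liveness": time.time(),
                "domains": {
                    p.name: {"inventory": p.inventory(), "healthy": p.healthy()}
                    for p in self.plugins
                },
            }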

Similarly, the scheduling problem for network/disk/compute: there is an enormous vertical stack of cloud APIs that can be built on a solid base, many of which OpenStack has in its portfolio. But that stack is not being built on a common scheduler – and it can’t be, because the cultural norm is to split things out rather than to figure out how to maintain things more effectively without moving the code around.

Some things really are better off as separate projects – and I’m not talking about monorepo vs repo-per-project; that’s really only about the ability to make some changes atomically. A reusable library like oslo.config is only reusable by being a separate project. oslo.db, though, exists solely because we have many separate projects that all look like ‘REST API on one side, database on the other’. That is a concrete problem: high deployment overheads, redundant information in some places, inappropriate transaction boundaries in others. The objects work – passing structured objects around and centralising the DB access – makes things a lot better, but it’s broken into vertical silos much too early.
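
For contrast, oslo.config is genuinely reusable precisely because it knows nothing about any particular service. A sketch of typical consumption (the option names and project name here are made up for illustration):

    from oslo_config import cfg

    # Each service declares its own options; the library stays service-agnostic.
    opts = [
        cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='Address the API server listens on.'),
        cfg.PortOpt('bind_port', default=9999,
                    help='Port the API server listens on.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts, group='api')
    CONF(args=[], project='exampleservice')  # parse CLI args and config files

    print(CONF.api.bind_host, CONF.api.bind_port)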

Our domain-specific services include huge amounts of generic, common-problem-space code: persistence, placement, access control…

Cultural norms and agility

Back in the dawn of OpenStack, there were some very very strong personalities. Codebases got totally overhauled and replaced without code review. Distrust got baked in as another cultural norm. Code review became a control point. It’s extraordinarily common to spend weeks or months getting patches through.

In some of the most effective teams I’ve worked in, code review is optional. Trust and iterate is the norm there: bypassing code review is a thing that needs to be justified, but code review is not how quality is delivered. Quality is delivered by continual improvement, rather than by the quality of any one individual commit.

A related thing is being super risk-averse about what lands in master (more on that below). Some very, very clever folk have written very clever code to facilitate this combination of siloed projects + trying super hard not to let regressions into master. This is very hard to deliver – and in fact we stepped back from an absolutist approach there about 4 years ago, to a model where we try very hard to prevent regressions just within a small set of connected projects.

OpenStack has a deeply split personality. Many folk want to build a downloadable cloud construction kit (e.g. Ubuntu). Many more want to build a downloadable cloud product (direct release users). And many wanted (are there still public clouds running master?) to be able to use master directly with confidence. This last use case is a major driver for wanting master to be regression free…

Agility requires the ability to react to new information in a short timeframe. Doing CD (continuous deployment) requires a pipeline that starts with code review and ends with deployed code. OpenStack doesn’t do that. There’s a huge discontinuity between upstream and actual deployments, and effectively none of the developers of any part of upstream OpenStack are doing operations day to day. Those that do – at Rackspace, previously at HP (where I was working when I was full time on OpenStack), and I’m going to presume at OVH and other public clouds – have to separate their operations work from their upstream changes.

Every initiative in a project will miss some details that have to be figured out later – that’s the nature of all but the most exacting software development processes, and those processes are hugely expensive (formal methods, just for a start). OpenStack copes with that by running huge planning cycles – 3-6 months apart.

Commits-as-control-points + long planning cycles + many developers not operating what they build => reaction to new information happens at a glacial pace.

To illustrate this, consider request tracing. 8 years ago Google released the Dapper whitepaper; Twitter wrote Zipkin and open sourced it, and we’re now at the point where distributed tracing is de rigueur – it’s one of the standard things a service operator will expect of any system. We spent years dealing with pushback from developers in service teams who didn’t understand the benefits of the proposed analogous system for OpenStack. Rackspace wrote their own and patched it in as part of their productionisation of master. Then we also got to have a debate about whether OpenStack should have one such system, or a plugin interface that would allow Rackspace not to change. [Sidebar: Rackers, I love you and :heart: your company, but that drove me up the wall! I wish we’d managed to just join forces and get everyone to at least bring a damn tracing interface in for everything.]
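
For a sense of how small such a tracing interface could be, here’s a toy sketch (all names are hypothetical; a real system would propagate Dapper/Zipkin-style trace and span IDs across RPC and HTTP boundaries and report spans asynchronously to a collector):

    import contextlib
    import time
    import uuid


    @contextlib.contextmanager
    def span(name, trace_id=None, parent_id=None):
        trace_id = trace_id or uuid.uuid4().hex  # start a new trace if none given
        span_id = uuid.uuid4().hex
        start = time.time()
        try:
            yield {"trace_id": trace_id, "span_id": span_id}
        finally:
            # Stand-in for reporting to a Zipkin-like collector.
            print(f"{name} trace={trace_id} span={span_id} "
                  f"parent={parent_id} took={time.time() - start:.3f}s")


    # Usage: each service wraps its work and passes the IDs along with requests.
    with span("create_server") as ctx:
        with span("schedule", trace_id=ctx["trace_id"], parent_id=ctx["span_id"]):
            time.sleep(0.01)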

Test reliability

With TripleO we had the idea that we’d run a cloud based on master, provide feedback on what didn’t work, and create a virtuous circle. I think that was ultimately flawed, because the existing silos (e.g. of Nova, or Glance) were not extended into owning those components within TripleO: TripleO was just another deployer, rather than part of the core feedback cycle.

More generally, we had a team of people (TripleO) running other people’s code (all of OpenStack – and commit rights were hard to get in other projects) with no SLA around that code.

I didn’t think of it that way at the time, for all that we understood that that was what we were doing, but that arrangement is structurally fragile: it’s the very antithesis of agile. When something broke it could stay broken for weeks, simply because the folk responsible for the break were not accountable for the non-brokenness of the system. (I’m not whinging about the teams we worked with – people did care, but caring and being accountable are fundamentally different things.)

There is another place with that pattern: devstack. Devstack is a codebase that exists to deploy all the other OpenStack components. It’s the purest essence of ‘run other people’s code with no SLA’, and devstack is the engine for pre-merge and pre-review testing in OpenStack.

I now believe that to be a key problem for OpenStack. Monty loves to talk about how many clouds OpenStack deploys daily in testing. Every one of those tests brings up, from scratch, some number of components (typically the dependency graph of the service under test) which have not changed and were not written by the author – and then, of course, the actual service being tested.

That’s structurally fragile: it’s running 5 or 10 times as much code as is relevant to the test being conducted. And the people able to fix any problems in those dependencies don’t feel the friction at the same time, or in the same way, as their users do. (This isn’t a critique of the people; it’s just maths.)

I’ll probably write more about this in detail later, as it ties into a larger discussion about testing and deployment of microservices, and testing in production. But imagine if we got rid of devstack for review and merge testing. It has several other use cases of course – ‘give me an OpenStack to hack on’ is an important, discrete use case, and folk probably care that it works. For simplicity I’m going to ignore that for now.

So, if we don’t use devstack, how do we deploy a cloud for pre-merge testing?

We don’t. We don’t need to. What we need to do is deploy the changed code into a cloud whose other components are expected to be compatible with that code. Devstack did this by taking given branches of a bunch of components and bringing them up from scratch. Instead, we run a production-grade, monitored and alerted deployment of all the components. Possibly we run many such deployments, for configurations that cannot coexist (e.g. different federation modes in keystone?). The people answering the pages for those alerts could be the service developers, or it could be an operations team with escalation back to the developers as needed (to filter noise like ‘oh, cloud $X has just had an outage’). But ultimately the developers would be directly accountable in some realtime fashion.

Then the test workflow becomes:

  1. Build the code under test. (e.g. clean VM, pip install, whatever)
  2. Deploy that code into the existing cluster as a new shard
  3. Exercise it as desired
  4. Tear it down

Let’s use nova-compute as an example (a rough sketch of steps 3 and 4 follows the list):

  1. pip install the code under test
  2. Run nova-compute reporting to an existing API server, with some custom label on the hypervisor to allow targeting workloads to it
  3. Deploy a VM targeted at it
  4. Tear it down
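
As a rough sketch of what steps 3 and 4 might look like, assuming openstacksdk and Nova’s admin-only ‘zone:host’ availability-zone syntax as the targeting mechanism (the cloud, image, flavor and host names here are hypothetical, and the real labelling mechanism – host aggregates, custom traits, whatever – is a design detail):

    import openstack

    conn = openstack.connect(cloud='testcloud')

    # 3. Exercise the freshly deployed nova-compute by forcing a VM onto it.
    server = conn.create_server(
        'shard-smoke-test',
        image='cirros',
        flavor='m1.tiny',
        availability_zone='nova:testshard-node1',  # target the new shard only
        wait=True,
    )

    # 4. Tear it down again; the long-lived cloud is otherwise untouched.
    conn.delete_server(server.id, wait=True)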

I’m sure this raises lots of omg-we-can’t-do-that-because-technical-reason-X-about-what-we-do-today.

That’s fine, but for the purposes of this discussion, consider the destination – not the path.

If we did this:

  • Individual test runs could use substantially fewer resources
  • And perform substantially less work
  • Which implies better performance
  • Failures due to components other than the service under test would be a thing of the past (when you’re on the hook for your service running reliably, you engineer it to do that)

I think this post is long enough, so let me recap briefly. If there is interest out there I can drill into what sort of changes would be needed to transition to such a system, the suggestions I have for ease of use and ease of operations, and I think I’m also ready to provide some discussion about what the architecture of OpenStack should be.

Recap: why development is hard

Cultural problem #1: silos rather than collaboration in place. Moving the code rather than working with others.

Cultural problem #2: excessive entry controls. Making each commit right rather than trending upwards with a low-latency, high change rate.

Cultural problem #3: developer feedback cycle is measured in weeks (optimistically), or years (realistically).

Technical problem #1: excessive code executed in tests: 80% of test activity is not testing the code under test.

Technical problem #2: our testing is optimised for new-cloud deployments: as our userbase grows upgrades become the common use case and testing should match that.

Money doesn’t matter

Well, obviously it does. But the whole ‘government cannot pay for healthcare’ (or land, or education) thing: that’s nonsense.

And any politician that claims that is either ignorant, or has an agenda that involves deliberate repression of the population.

These are strong claims, so let me break it down. Also, I’m not an economist; if I’ve gotten the wrong end of the stick economics-wise, I’ll happily update this or at least add errata to it…

Money isn’t wealth. It’s a thing you can exchange for other things, but it itself is not wealth. Easy example: when countries have had runaway inflation, and the price of e.g. potatoes has been going up 100% a day, it doesn’t matter how much money you have: you will eventually be unable to buy potatoes. But a potato farmer with tens of thousands of potatoes won’t run out and go hungry.

We use money to scale our society. Without money, we have some problems. Firstly, if I want something you have, but I don’t have anything you want, I have to find someone who wants something I have and has something you want, do that trade, then come back to you to trade the thing you wanted for what I wanted. This quickly becomes a bottleneck on actually getting stuff done. Secondly, once someone – say a potato farmer :) – has what they want right now, they will be very hard to trade with: if they trade potatoes for things they don’t want, they are gambling that other folk will want those things in the future. That requires everyone to become a good gambler on the future value of things.

But just like money isn’t wealth, money also isn’t work. We work to exchange our time for wealth; except money isn’t wealth, so really we’re exchanging our time for this thing we can exchange for the actual things we want. Governments *literally* create money any time they want, and they destroy it at will too. If there’s too much money floating around, then (at whatever prices folk are used to) everything will be purchasable, and it’s very likely folk selling stuff will run out and raise prices as a result. Then it becomes harder to buy stuff, although everyone that received those raised prices has more money to buy with, so this continues for a while: this is inflation.

Too little money, and things that could be sold won’t sell, because there isn’t enough money at the prices folk are used to, and the folk selling don’t want to “lose money” (which is odd, because money is a promise, not a thing, so in a deflationary situation selling *right now* may well be better than holding on and selling later :)). So they will be slow to lower prices, will receive less either way, and, just as with increased prices, the decrease gets spread amongst the participants – vendors, owners, employees.

But these things don’t happen instantly: there’s slack in the system.

So what does matter? What actually matters is a combination of resources and productivity: those are the things that determine whether we, as a society, can produce enough things for our people to have what they want and need. For instance, building a house needs the following resources: land, building materials, labour, power, as well as ongoing supplies of power, water and sewage processing.

If, given the people currently in our country and what they are being paid to do today, we have both enough resources and enough labour-and-productivity to house, feed, heat, transport and entertain everyone, then the failure to do so is not one of money but one of choice. That builder friend you know who doesn’t have work right now could be building a house for that other friend you’ve got whose family is sleeping in a garage. The builder isn’t working because the family in question can’t afford to pay for the land or the resources: the builder has nowhere to do the building, and no materials to make the building out of.

The core choice is: do we as a society think it’s reasonable that anyone should have to sleep rough, or miss out on school, or any of a thousand examples of poverty, when we’ve got the resources and production capability to fix it? Do we think that? Really? And what are we willing to do to fix it? Right now, a lot of the production capability of our society is owned by 1% of our society. So less than 1% of people are deciding what is made and how it’s made.

Now, there’s a bunch of curly questions like, what about the foreign account deficit? What about the fact that lots of land is already owned by someone? How do we fairly get that family the house they deserve? Won’t some people just ride on the coat-tails of others? Isn’t this going to require taking things other people have already earnt?

These are all fair questions. My answers to those are:

  • If everyone had their needs met we’d have many more people contributing to creative things we can sell to foreign countries, more than enough to address any changes in the foreign account deficit from sorting things out here.
  • Our current system has huge wealth inequality; it doesn’t matter whether that inequality is in the form of money or ownership of things. Either we leave that 1% controlling 99%, or we redistribute things on some equitable, ongoing basis. Wealth taxes, CGTs, estate taxes. Lots of options.
  • I’m not sure. I think ultimately it means capping the maximum wealth ratio between our richest and poorest people. E.g. the more wealth you have, the more you’re taxed, until eventually – at say 500K/year (gross) wealth growth – your marginal tax rate becomes 90%, and at some higher figure, say 1M/year (gross) wealth growth, your marginal tax rate exceeds 95%. That way wealthy folk get to choose which things they keep: there’s no central planning department or other bureaucracy involved. (A toy sketch of such a schedule follows this list.)
  • Folk already ride on the coat-tails of other people. But it’s nowhere near as simple as ‘those dole bludgers’. Folk on the pension don’t work. Folk with ‘passive income’ (read: investments whose growth is high enough that those folk don’t need to work). School kids. And yes, folk on the dole. For some folk on the dole, the marginal tax rate already exceeds 100% – there are some steps in our tax system that make part-time work while receiving the dole very, very hard. Home makers are also something we support as a society, though less directly. But let’s assume fully 10% of the country simply don’t want to work. Consider this in productivity terms: we get 10% less done. Big deal. We’ve enough resources and people to deliver the essentials – food, shelter, power, education – with waaay less than 90% of our workforce. And as automation improves, expect that 90% to drop down towards 10%. At that point we’d want 90% of folk not working, I suspect.
  • Yes, folk will have to get taxed on what they have, not just on what they are gaining. This makes sense though: we want the system to slowly drive equity for everyone. (Not equality, and not sameness; just equity.) Taxing what you have is actually a lot fairer than taxing what you earn. If you have nothing but start earning a lot, you’re starting way behind everyone else, so not taxing you much is pretty reasonable. And if you have a lot but aren’t earning any more, not taxing you is really just giving you a free pass: supporting you through every single shared resource and piece of infrastructure.
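
To make the bracket arithmetic behind that wealth-ratio suggestion concrete, here’s a toy sketch using the hypothetical thresholds above (90% from 500K/year of gross wealth growth, 95% beyond 1M/year); the real rates and brackets would be a policy decision, and the 33% base rate is just an assumed placeholder:

    BRACKETS = [
        (0, 0.33),          # assumed ordinary rate below the first threshold
        (500_000, 0.90),    # marginal rate from 500K/year of wealth growth
        (1_000_000, 0.95),  # marginal rate beyond 1M/year of wealth growth
    ]


    def wealth_growth_tax(growth):
        """Tax a year's gross wealth growth, bracket by bracket."""
        uppers = [lower for lower, _ in BRACKETS[1:]] + [float('inf')]
        tax = 0.0
        for (lower, rate), upper in zip(BRACKETS, uppers):
            if growth > lower:
                tax += (min(growth, upper) - lower) * rate
        return tax


    # 2M of growth: 0.33*500K + 0.90*500K + 0.95*1M = 1,565,000
    print(wealth_growth_tax(2_000_000))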