Learning is hard

I feel like I’m taking a big personal risk writing this, even though I know the internet is large and probably no-one will read this :-).

So, dear reader, please be gentle.

As we grow – as people, as developers, as professionals – some lessons are hard to learn (e.g. you have to keep trying and trying to learn the task), and some are hard to experience (they might still be hard to learn, but just being there is hard in itself…). I want to talk about a particular lesson I started learning in late 2008/early 2009 – while I was at Canonical – sadly one of those that was hard to experience.

At the time I was one of the core developers on Bazaar, and I was feeling pretty happy about our progress, how bzr was developing, features, community etc. There was a bunch of pressure on us to succeed in the marketplace, but that was ok – challenges bring out the stubborn in me :). There was one glitch though – we’d been having a bunch of contentious code reviews, and my manager (Martin Pool) was chatting to me about them.

I was – as far as I could tell – doing precisely the right thing from a peer review perspective: I was safeguarding the project, preventing changes that didn’t fit properly, or that degraded key aspects – performance, usability – from landing until they were fixed.

However, the folk on the other side of the review were frustrated, feeling that nothing they could do would fix it, and were generally very unhappy. Reviews and design discussions would grind to a halt, and they felt I was the cause. [They were right].

And here was the thing – I simply couldn’t understand the issue. I was doing my job; I wasn’t angry at the people submitting code; I wasn’t hostile; I wasn’t attacking them (but I was being, shall we say, frank about the work being submitted). I remember saying to Martin one day ‘look, I just don’t get it – can you show me what I said wrong?’ … and he couldn’t.

Canonical has a 360° review system – every 6 months / year (it changed over time) you review your peers, subordinate(s) and manager(s), and they review you. Imagine my surprise – I was used to getting very positive reports with some constructive suggestions – when I scored low on a bunch of the inter-personal metrics in the review. Martin explained that it was the reviews thing – folk were genuinely unhappy, even as they commended me on my technical merits. Further to that, he said that I really needed to stop worrying about technical improvement and focus on this inter-personal stuff.

Two really important things happened around this time. Firstly, Steve Alexander, who was one of my managers-once-removed at the time, reached out to me and suggested I read a book – Getting Out of the Box – and that we might have a chat about the issue after I had read it. I did so, and we chatted. That book gave me a language and viewpoint for thinking about the problem. It didn’t solve it, but it meant that I ‘got it’, which I hadn’t before.

So then the second thing happened – we had a company all-hands and I got to chat with Claire Davis (head of HR at Canonical at the time) about what was going on. To this day I remember the sheer embarrassment I felt when she told me that the broad perception of me amongst other teams’ managers was – and I paraphrase a longer, more nuanced conversation here – “technically fantastic but very scary to have on the team – will disrupt and cause trouble”.

So, at this point about 6 months had passed, and I knew what I wanted – I wanted folk to want to work with me, to find my presence beneficial and positive on both technical and team aspects. I already knew then that what I seek is technical challenges: I crave novelty, new challenges, new problems. Once things become easy, they can all too easily slip into tedium. So at that time my reasoning was somewhat selfish: how was I to get challenges if no-one wanted to work with me except in extremis?

I spent the next year working on myself as much as specific projects: learning more and more about how to play well with others.

In June 2010 I got a performance review I could be proud of again – I was in no way perfect, but I’d made massive strides. This journey had also made huge improvements to my personal life – a lot of stress between Lynne and me had gone away. Shortly after that I was invited to apply for a new role within Canonical as Technical Architect for Launchpad – and Francis Lacoste told me that it was only due to my improved ability to play well with others that I was even considered. I literally could not have done the job 18 months before. I got the job, and I think I did pretty well – in fact I was awarded an internal ‘Spotlight on Success’ award for what we (it was a whole Launchpad team effort) achieved while I was in that role.

So, what did I change/learn? There are just a couple of key changes I needed to make in myself, but a) they aren’t sticky: if I get overly tired, ye olde terrible Robert can leak out, and b) there’s actually a /lot/ of learnable skill in this area, much of which only comes with practice – lots of practice and critical self-review is a good thing. The main thing I learnt was that I was Selfish. Yes – capital S. For instance, in a discussion about adding working tree filters to bzr, I would focus on the impact/risk on me-and-things-I-directly-care-about: would it make my life harder, would it make bzr slower, was there anything that could go wrong? And I would spend only a little time thinking about what the proposer needed: they needed support and assistance making their idea reach the standards the bzr community had agreed on. The net effect of my behaviours was that I was a class A asshole when it came to getting proposals into a code base.

The key things I had to change were:

  1. I needed to think about the needs of the person I’m speaking to *and not my own*. [That’s not to say you should ignore your needs, but you shouldn’t dwell on them: if they are critical, your brain will prompt you.]
  2. There’s always a reason people do things: if it doesn’t make sense, ask them!  [The Crucial Conversations books have some useful modelling here on how and why people do things, and on how-and-why conversations and confrontations go bad and how to fix them.]

Ok so this is all interesting and so forth, but why the blog post?

Firstly, I want to thank four folk who were particularly instrumental in helping me learn this lesson: Martin, Steve, Claire and of course my wife Lynne – I owe you all an immeasurable debt for your support and assistance.

Secondly, I realised today that while I’ve apologised one on one to particular folk who I knew I’d made life hard for, I’d never really made a widespread apology. So here it is: I spent many years as an ass, and while I didn’t mean to be one, intent doesn’t actually count here – actions do. I’m sorry for making your life hell in the past, and I hope I’m doing better now.

Lastly, if I’m an ass to you now, I’m sorry, I’m probably regressing to old habits because I’m too tired – something I try to avoid, but it’s not always possible. Please tell me, and I will go get some sleep then come and apologise to you, and try to do better in future.

Less-assily-yrs,
Rob

Launchpadlib without gnome-keyring

Recently I’ve been doing my personal development work SSH’d into my personal laptop. I found that launchpadlib (which various projects use for release automation) was failing – the gnome-keyring API threw an error because the keyring was locked, and python-keyring didn’t try to unlock it.

I needed a workaround to be able to release stuff, and with a bit of digging and help from #launchpad, I came up with this:

# Create a file-based store for the keyring, and point python-keyring at it.
mkdir -p ~/.cache/keyring
mkdir -p ~/.local/share/python_keyring
cat > ~/.local/share/python_keyring/keyringrc.cfg << EOF
[backend]
default-keyring=keyring.backend.UncryptedFileKeyring
keyring-path=/home/robertc/.cache/keyring/
EOF

(There is already encryption in place, so I chose an uncrypted store – read the keyring source to find other alternatives).
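To double-check which backend python-keyring ends up selecting after this, a quick check from Python does the trick (keyring.get_keyring() is part of the keyring module’s public API):

import keyring
print(keyring.get_keyring())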

With this done, I can now use lp-shell etc over SSH, for when I’m not physically at my machine.

Minimising downtime for schema changes with PostgreSQL

Edits: Corrected the description of the slony bug, and noted that there is a typo on the lazr_postgresql PYPI page.

Two years ago Launchpad did schema changes once a month. Everyone would cross their fingers and hope while the system administrators took all the application servers offline, patched the database with a month’s worth of work and brought the servers up again running the new QA’d codebase.

This had two problems:

  1. Due to the complexity of the system – something like 300 processes have to be stopped or inhibited to take everything offline – the downtime was often about 90 minutes, irrespective of how long the schema patch itself took. [Some of the processes don’t like being interrupted at all.]
  2. We simply could not deliver any change in less than a week, and even something that jumped all the queues still took two weeks on average.

About a year ago we wanted to increase the rate at which schema changes could be carried out – the efforts to speed Launchpad up had consumed most of the low-hanging fruit, and more and more schema patches were required. We didn’t want to introduce additional 90 minute downtime windows though. Adopting incremental migrations – the sort of change process described in various places on the internet – seemed like a good way to make it possible to apply the schema changes without this slow shutdown-and-restart step, which was required because the pre-patch codebase couldn’t speak to the new schema. We could optimise each patch to be very fast by avoiding anything that causes a full table scan or table rewrite (such as adding indices, or adding columns with a non-NULL default value). That would let us avoid the 90 minutes of downtime caused by stopping and restarting everything.

However, that wasn’t sufficient – the reason Launchpad ended up doing monthly downtime is that previous attempts to do more frequent schema changes had too high a failure rate. A key reason for patch deployment time blowing out when everything wasn’t shut down was that Launchpad is a very busy system – with Slony, schema changes require an exclusive lock on all tables. [More recent versions of Slony only lock some tables, but it still requires very widespread locks for most DDL operations.] We’re doing nearly 10 thousand transactions per minute, so at any point in time there are locks open on some table in the system: it was highly improbable and effectively impossible for slonik to get an exclusive lock on all tables in a reasonable timeframe. Background tasks that take many minutes to complete exacerbate this – we can’t just block new transactions long enough to deliver all the in-flight web pages and let locks clear that way.

PGBouncer turns out to be an ideal tool here. If you route all your connections through PGBouncer, you have a single point you can deliberately interrupt to clear all database locks in a second or so (it takes time for backends to all notice that their clients have gone).
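For illustration only (this is not the FDT tool described below, which simply shuts pgbouncer down and restarts it), pgbouncer also exposes an admin console on its special ‘pgbouncer’ database with commands such as PAUSE and RESUME, and you can drive that from Python. The host, port and user here are assumptions for a typical setup:

import psycopg2

# The admin console speaks the normal PostgreSQL protocol; console commands
# must run outside a transaction, hence autocommit.
admin = psycopg2.connect(host="127.0.0.1", port=6432,
                         dbname="pgbouncer", user="pgbouncer")
admin.autocommit = True
cur = admin.cursor()

cur.execute("PAUSE")    # wait for in-flight queries, then drop server connections
# ... apply the schema patch here, with no clients holding locks ...
cur.execute("RESUME")   # let client traffic flow again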

So we combined these things to get what we called ‘Fast Down Time’ or FDT.  We set the following rules for developers:

  1. Any schema patch had to complete in <= 15 seconds in our schema staging environment (which has a full copy of the production DB), or we’d roll it back and redesign.
  2. Any patch could change either code or schema, never both. Schema patches were to land on a separate branch and would be promoted to trunk only after deployment. That branch also received automated merges from trunk after every commit to trunk, so it was always running the latest code.

This meant that we could be confident in QA: we would QA the new schema and the application process with the current live code (we deploy trunk multiple times a day). We published some documentation about how to write fast schema patches to help socialise the approach.
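As a purely hypothetical illustration of the kind of thing that documentation encourages (the table and column names below are made up, not Launchpad’s schema): adding a nullable column with no default is a catalog-only change and completes almost instantly, while adding a column with a non-NULL default forced a full table rewrite on the PostgreSQL versions of that era, and an inline CREATE INDEX scans the table while blocking writes – either of which would blow the 15 second budget on a large table.

# Fast: a catalog-only change, fine inside the downtime window.
FAST_PATCH = "ALTER TABLE ticket ADD COLUMN priority_v2 integer;"

# Slow: the non-NULL default rewrites the whole table, and the inline
# CREATE INDEX scans it while holding a lock that blocks writes.
SLOW_PATCH = """
ALTER TABLE ticket ADD COLUMN priority_v2 integer NOT NULL DEFAULT 0;
CREATE INDEX ticket__priority_v2__idx ON ticket (priority_v2);
"""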

Then we wrote an automated tool that would:

  1. Check for known fragile processes and abort if any were found.
  2. Check for very long transactions and abort if any were found (see the sketch just after this list).
  3. Shut down pgbouncer, disconnecting all clients instantly.
  4. Use slonik to apply one or more schema patches.
  5. Start pgbouncer back up again.
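A rough sketch of what the long-transaction check in step 2 might look like – this is not the FDT tool itself, the DSN is hypothetical, and it assumes the pg_stat_activity column names of PostgreSQL 9.2 and later:

import sys
import psycopg2

MAX_TX_AGE = '15 seconds'

conn = psycopg2.connect("dbname=launchpad_prod")   # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT pid, now() - xact_start AS age, query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
       AND now() - xact_start > %s::interval
""", (MAX_TX_AGE,))
long_transactions = cur.fetchall()
if long_transactions:
    for pid, age, query in long_transactions:
        print("long transaction %s (%s): %s" % (pid, age, query))
    sys.exit(1)   # abort rather than queue behind someone else's locks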

The code for this (call it FDTv1) is in the Launchpad source code history – it’s pretty entangled, but it’s there for grabbing if you need it. Read on to see why it’s only available in the history :)

The result was wonderful – we were immediately able to deploy schema changes with <= 90 seconds of downtime, which was significantly less than the 5 minutes our stakeholders had agreed to as a benchmark – if we were under 5 minutes, we could schedule downtime once a day rather than once a month. We had to fix some API client code to retry more reliably, and likewise fix a few minor bugs in the database connection handling logic in the appservers, but all in all it was a pretty smooth project. Along the way we spun off a small Python helper to run and control pgbouncer, which let us write effective tests for the connection handling code paths.

This gave us the following workflow for making schema changes:

  1. Land and deploy an incremental schema change.
  2. Land and deploy any indices that need to be added – these are deployed live using CREATE INDEX CONCURRENTLY.
  3. Land and deploy code changes to populate any additional fields/tables from both the application servers and from cron – we do a bulk backfill that does many small transactions while walking over the entire dataset that needs to be updated/populated (see the sketch just after this list).
  4. Land and deploy code changes to drop references to the old schema, whatever it was.
  5. Land and deploy an incremental schema change to finalise the change – such as making a new column NOT NULL once the backfill is complete.
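For the backfill in step 3, the important property is lots of small transactions rather than one huge one, so no long-lived locks build up. A minimal sketch of that pattern, reusing the hypothetical table and column from earlier and a made-up DSN – this is not Launchpad’s actual backfill code:

import psycopg2

BATCH = 1000
conn = psycopg2.connect("dbname=example")   # hypothetical DSN

while True:
    with conn:                    # one short transaction per batch
        with conn.cursor() as cur:
            cur.execute("""
                UPDATE ticket
                   SET priority_v2 = priority
                 WHERE id IN (SELECT id FROM ticket
                               WHERE priority_v2 IS NULL
                               LIMIT %s)
            """, (BATCH,))
            done = cur.rowcount == 0
    if done:
        break
conn.close()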

This looks long and unwieldy, but it’s worth noting that it’s actually just repeated applications of a smaller primitive:

  1. Make a schema change that is fast and compatible with existing code.
  2. Change code to take advantage of the changed schema.

Pretty much any change that is desired can be done using this single primitive.

We wanted to go further though – the multiple stages required for complex migrations became a burden at one change a day. Fortunately PostgreSQL now includes its own replication engine, which replicates the WAL rather than installing triggers on all tables the way Slony does.

Stuart, our intrepid DBA, migrated Launchpad to PostgreSQL 9.1, updated the FDT tool to work with native replication, and migrated Launchpad off Slony. The result is again wonderful – the overhead in doing a schema patch, with all the protection I described above, is now ~5 seconds. We can do incremental changes in less time than it takes your browser to figure out that a given server is offline. We’re now negotiating with the Launchpad stakeholders to get multiple downtime windows each day, with this almost unnoticeable, super reliable process in place.

Reliability-wise, FDT has been superb. We’ve had two failures: one where we believe we encountered a bug in Slony when we dropped the id column from two tables in one patch (we replaced the autoincrementing PK column with a naturally unique column), and one where we landed a patch that worked on staging but led to lock contention in production – the patch applied, but the system was very unhealthy after that until we fixed it. That’s after doing approximately 60 patches over a one-year period.

We’re partway through extracting the patching logic from Launchpad’s code base into a reusable tool, but the basic principles will apply to any PostgreSQL environment. Note that there is a typo on the PyPI page – the actual Launchpad project is at https://launchpad.net/lazr.postgresql.

Public service announcement: signals imply reentrant code even in Python

This is a tiny PSA prompted by my digging into a deadlock condition in the Launchpad application servers.

We were observing a small number of servers stopping cold when we did log rotation, with no particular rhyme or reason.

tl;dr: do not call any non-reentrant code from a Python signal handler. This includes the signal handler itself, queueing tools, multiprocessing, anything with locks (including RLock).

Tracking this down I found we were using an RLock from within the signal handler (via a library…) – so I filed a bug upstream: http://bugs.python.org/issue13697

Some quick background: when a signal is received by Python, the VM sets a status flag saying that signal X has been received and returns. The next chance that thread 0 gets to run bytecode (and it’s always thread 0), the signal handler in Python itself runs. For builtin handlers this is pretty safe – e.g. for SIGINT a KeyboardInterrupt is raised. For custom signal handlers, the current frame is pushed and a new stack frame created, which is used to execute the signal handler.
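A tiny demonstration of that behaviour (Unix-only; SIGUSR1 is just an arbitrary choice) – no matter which thread the OS delivers the signal to, the Python-level handler runs on the main thread:

import os
import signal
import threading
import time

def handler(signum, frame):
    print("handler running in:", threading.current_thread().name)
    print("interrupted frame:", frame.f_code.co_name)

signal.signal(signal.SIGUSR1, handler)

def worker():
    os.kill(os.getpid(), signal.SIGUSR1)   # raise the signal from a worker thread

threading.Thread(target=worker, name="worker").start()
time.sleep(0.1)   # give the main thread a chance to run the handler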

Now this means that the previous frame has been interrupted without regard for your code: it might be part way through evaluating a multi-condition if statement, or between receiving the result of a function and storing it in a variable. It’s just suspended.

If the code you call somehow ends up calling that suspended function (or other methods on the same object, or variations on this theme), there is no guarantee about the state of the object; it becomes very hard to reason about.

Consider, for instance, a writelines() call, which you might think is safe. If the internal implementation is ‘for line in lines: foo.write(line)’, then a signal handler which also calls writelines could have its output appear between any two of the lines written by the original call.
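Here is a contrived sketch of that interleaving, with a pure-Python loop standing in for writelines() and an alarm signal playing the part of the reentrant writer (Unix-only, and the sleep is only there to make the race easy to hit):

import signal
import sys
import time

def handler(signum, frame):
    # Runs on the main thread, in between two bytecodes of the loop below.
    sys.stdout.write("*** handler output lands here ***\n")

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # deliver SIGALRM in 50ms

# A pure-Python stand-in for 'for line in lines: foo.write(line)'.
for i in range(10):
    sys.stdout.write("line %d of the supposedly atomic write\n" % i)
    time.sleep(0.02)   # slow the loop down so the signal lands mid-way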

True reentrancy is a step up from multithreading in terms of nastiness, primarily because guarding against it is very hard: a non-reentrant lock around the area needing guarding will force either a deadlock or an exception from your reentered code; a reentrant lock around it will provide no protection. Both of these things apply because the reentering occurs within the same thread – kind of like a generator, but without any control or influence on what happens.

Safe things to do are:

  • Calling code which is threadsafe and which only other threads will be calling concurrently.
  • Performing ‘atomic’ operations (any single C function is atomic as far as signal handling in Python is concerned) such as list.append, or ‘foo = 1’. (Note the use of a constant: anything obtained by reading other state can be subject to reentrancy races [unless you take care :)].)

In Launchpad’s case, we will be setting a flag variable unconditionally from the signal handler, and the next log write that occurs will lock out other writers, consult the flag, and if needed do a rotation, resetting the flag. Writes that start after the rotation signal but don’t yet see the new flag value would be ok. That is the only possible race: a write to the flag variable not being seen by an in-progress or other-thread log write.
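A minimal sketch of that pattern – not Launchpad’s actual code, and the signal and file names are made up – looks something like this:

import signal
import threading

_rotate_requested = False
_log_lock = threading.Lock()
_log_file = open("app.log", "a")

def _request_rotation(signum, frame):
    global _rotate_requested
    _rotate_requested = True          # a plain assignment: safe inside a handler

signal.signal(signal.SIGUSR1, _request_rotation)

def write_log(line):
    global _rotate_requested, _log_file
    with _log_lock:                   # ordinary thread-safety; never taken in the handler
        if _rotate_requested:
            _rotate_requested = False
            _log_file.close()
            _log_file = open("app.log", "a")   # stand-in for the real rotation
        _log_file.write(line + "\n")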

That is all.