Learning is hard

I feel like I’m taking a big personal risk writing this, even though I know the internet is large and probably no-one will read this :-).

So, dear reader, please be gentle.

As we grow – as people, as developers, as professionals – some lessons are hard to learn (e.g. you have to keep trying and trying before the lesson sticks), and some are hard to experience (they might still be hard to learn, but just being there is hard in itself…). I want to talk about a particular lesson I started learning in late 2008/early 2009 – while I was at Canonical – sadly, one of those that was hard to experience.

At the time I was one of the core developers on Bazaar, and I was feeling pretty happy about our progress: how bzr was developing, features, community and so on. There was a bunch of pressure on us to succeed in the marketplace, but that was ok – challenges bring out the stubborn in me :). There was one glitch though: we’d been having a bunch of contentious code reviews, and my manager (Martin Pool) was chatting to me about them.

I was – as far as I could tell – doing precisely the right thing from a peer review perspective: I was safeguarding the project, preventing changes that didn’t fit properly, or that degraded key aspects – performance, usability – from landing until they were fixed.

However, the folk on the other side of the review were feeling frustrated: nothing they did seemed able to fix it, and they were generally very unhappy. Reviews and design discussions would grind to a halt, and they felt I was the cause. [They were right.]

And here was the thing – I simply couldn’t understand the issue. I was doing my job; I wasn’t angry at the people submitting code; I wasn’t hostile; I wasn’t attacking them (though I was being, shall we say, frank about the work being submitted). I remember saying to Martin one day ‘look, I just don’t get it – can you show me what I said wrong?’ … and he couldn’t.

Canonical has a 360° review system – every 6 months or year (it changed over time) you review your peers, subordinate(s) and manager(s), and they review you. Imagine my surprise – I was used to getting very positive reports with some constructive suggestions – when I scored low on a bunch of the inter-personal metrics in the review. Martin explained that it was the reviews thing: folk were genuinely unhappy, even as they commended me on my technical merits. Further, he said that I really needed to stop worrying about technical improvement and focus on this inter-personal stuff.

Two really important things happened around this time. Firstly, Steve Alexander, who was one of my managers-once-removed at the time, reached out to me and suggested I read a book – Getting out of the box – and that we might have a chat about the issue after I had read it. I did so, and we chatted. That book gave me a language and viewpoint for thinking about the problem. It didn’t solve it, but it meant that I ‘got it’, which I hadn’t before.

So then the second thing happened – we had a company all-hands and I got to chat with Claire Davis (head of HR at Canonical at the time) about what was going on. To this day I remember the sheer embarrassment I felt when she told me that the broad perception of me amongst other teams’ managers was – and I paraphrase a longer, more nuanced conversation here – “technically fantastic but very scary to have on the team – will disrupt and cause trouble”.

So, at this point about 6 months had passed, and I knew what I wanted: I wanted folk to want to work with me, to find my presence beneficial and positive on both technical and team aspects. I already knew then that what I seek is technical challenge: I crave novelty, new challenges, new problems. Once things become easy, it can all too easily slip into tedium. So at that time my reasoning was somewhat selfish: how was I to get challenges if no-one wanted to work with me except in extremis?

I spent the next year working on myself as much as specific projects: learning more and more about how to play well with others.

In June 2010 I got a performance review I could be proud of again – I was in no way perfect, but I’d made massive strides. This journey had also made huge improvements to my personal life – a lot of stress between Lynne and me had gone away. Shortly after that I was invited to apply for a new role within Canonical as Technical Architect for Launchpad – and Francis Lacoste told me that it was only due to my improved ability to play well with others that I was even considered. I literally could not have done the job 18 months before. I got the job, and I think I did pretty well – in fact I was awarded an internal ‘Spotlight on Success’ award for what we (it was a whole-Launchpad-team effort) achieved while I was in that role.

So, what did I change/learn? There were just a couple of key changes I needed to make in myself, but a) they aren’t sticky: if I get overly tired, ye olde terrible Robert can leak out, and b) there is actually a /lot/ of learnable skill in this area, much of which has to be derived through doing – lots of practice and critical self-review are a good thing. The main thing I learnt was that I was Selfish. Yes – capital S. For instance, in a discussion about adding working tree filters to bzr, I would focus on the impact/risk on me-and-things-I-directly-care-about: would it make my life harder, would it make bzr slower, was there anything that could go wrong? And I would spend only a little time thinking about what the proposer needed: support and assistance in making their idea reach the standards the bzr community had agreed on. The net effect of my behaviours was that I was a class A asshole when it came to getting proposals into the code base.

The key things I had to change were:

  1. I need to think about the needs of the person I’m speaking to *and not my own*. [That’s not to say you should ignore your needs, but you shouldn’t dwell on them: if they are critical, your brain will prompt you.]
  2. There’s always a reason people do things: if it doesn’t make sense, ask them!  [The Crucial Conversations books have some useful modelling here on how and why people do things, on how and why conversations and confrontations go bad, and on how to fix them.]

Ok so this is all interesting and so forth, but why the blog post?

Firstly, I want to thank four folk who were particularly instrumental in helping me learn this lesson: Martin, Steve, Claire and of course my wife Lynne – I owe you all an unmeasurable debt for your support and assistance.

Secondly, I realised today that while I’ve apologised one on one to particular folk who I knew I’d made life hard for, I’d never really made a widespread apology. So here it is: I spent many years as an ass, and while I didn’t mean to be one, intent doesn’t actually count here – actions do. I’m sorry for making your life hell in the past, and I hope I’m doing better now.

Lastly, if I’m an ass to you now, I’m sorry, I’m probably regressing to old habits because I’m too tired – something I try to avoid, but it’s not always possible. Please tell me, and I will go get some sleep then come and apologise to you, and try to do better in future.


bzr selftest uses testtools

the emperor has new clothes

bzr has just changed the base class for its test suite from ‘unittest.TestCase’ to ‘testtools.TestCase’. This change has cleaned up a bunch of test logic, deleted a significant amount of code (much of which was redundant with Python unittest) and added some useful and important features.
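The shape of the change itself is tiny, even though the consequences are not. Roughly (a simplified sketch – the real bzrlib.tests.TestCase carries a great deal more machinery):

    import testtools

    class TestCase(testtools.TestCase):
        # Previously: class TestCase(unittest.TestCase).  Every bzr
        # test inherits from this one class, so swapping the base
        # changes the behaviour of the whole suite in one place.
        pass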

bzr has only been able to make this change because testtools expanded its mission from a simple ‘aggregation of proven unittest extensions’ into one that includes new extensions which *make unittest more extensible*. My deepest thanks to Jonathan for permitting me to use testtools as the vehicle for these extension-enabling extensions (and for his patience in reviewing said changes!).

The change was pretty easy: the bulk of the changes were in bzrlib.tests and bzrlib.tests.test_selftest. I chose to clean up an ugly API at the same time, which scattered a few small changes across a number of tests. And there are more changes that can be done to take better advantage of testtools – the deleting and cleaning up of code isn’t complete. Even so, it’s a pretty clear win:

18 files changed, 228 insertions(+), 496 deletions(-)

What went?

bzr had its own implementation of TestCase.run. This function is the main workhorse of Python’s unittest module, and yet sadly it has to be replaced wholesale to change the exceptions that can be raised (to signal new outcomes), or to improve on test cleanup. Testtools provides an API for registering new exception types and handlers for them. Like Python 2.7, testtools also provides the TestCase.addCleanup API, and these two things combined mean that bzr no longer needs to reimplement the run method.
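As a minimal sketch (not bzr code) of why addCleanup removes the need for a custom run: cleanups registered this way run after the test in every outcome, which is most of what hand-rolled run implementations were providing.

    import shutil
    import tempfile

    from testtools import TestCase

    class TestUsesScratchArea(TestCase):

        def test_makes_scratch_dir(self):
            scratch = tempfile.mkdtemp()
            # Cleanups run in LIFO order once the test finishes -
            # pass, fail or error - so no tearDown or run() override
            # is needed to guarantee the directory is removed.
            self.addCleanup(shutil.rmtree, scratch)
            self.assertTrue(scratch)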

For expected failures, bzr uses a helper method, TestCase.expectFailure, to perform an existing assertion and convert the test into an expected failure if that assertion fails. This was another feature testtools already provides, and thus the bzr copy got deleted.
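For example (the failing assertion here is made up for illustration): expectFailure runs the given assertion, records the test as an expected failure if it fails, and flags an unexpected success if it passes.

    from testtools import TestCase

    class TestKnownBug(TestCase):

        def test_unicode_normalisation(self):
            # A deliberately failing assertion standing in for a known
            # bug: the NFC and NFD spellings of a name compare unequal.
            self.expectFailure(
                "file names are not unicode-normalised yet",
                self.assertEqual, u'caf\xe9', u'cafe\u0301')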

All the custom code for skipping and expected failures got deleted, and the other outcomes bzr uses were turned into extensions (as per the run discussion above).

In bzr, test cases generate a log (because bzr generates a log), and previously the TestResult in bzrlib inspected each test that had been executed to extract that log. This was made simpler by using the details API that testtools provides (see testtools.TestCase.addDetail), which permits tests to attach arbitrary data in a semi-structured fashion. This is supported by subunit, and a long-standing bug with bzr selftest --parallel was fixed as a result: logs from tests run in other processes are now carried across the process barrier intact and are presented cleanly.
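A small sketch of the details API (the detail name and content here are illustrative rather than bzr’s actual usage):

    from testtools import TestCase
    from testtools.content import text_content

    class TestWithDetails(TestCase):

        def test_attaches_log(self):
            # A detail is a named, MIME-typed blob attached to the
            # test; subunit serialises details, which is how logs now
            # survive the process barrier under selftest --parallel.
            self.addDetail('log', text_content('12:00:01 INFO opened branch'))
            self.assertEqual(4, 2 + 2)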

Some other minor cleanups were in the unittest compatibility code, where bzr would degrade gracefully when run by plain unittest runners; testtools provides such logic comprehensively, so all of that got deleted too.

What’s new?

I think the most significant new facility that testtools offers bzrlib is assertThat. This assertion is inspired by the very nice assertThat in JUnit (which has changed substantially since Python’s unittest was written based on it). This assertion separates the two concerns of ‘raise an exception’ and ‘decide if an exception should be raised’. The separation allows for better reuse of custom checking code, because it permits composition in a cleaner way than extra assertion methods do. Testtools does not include many matchers as yet, but matchers are easy to write, and if one were to write a small adapter to the hamcrest library, there are a bunch of ready-made matchers there (though they have a very Java feel – their ‘is’ does not mean Python’s ‘is’, for instance – which is why testtools did not use that library).
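To make that separation concrete, here is a small hypothetical matcher: the checking logic lives in the matcher and reports failures as a Mismatch description, while assertThat is the only piece that raises.

    from testtools import TestCase
    from testtools.matchers import Matcher, Mismatch

    class IsDivisibleBy(Matcher):
        """Matches integers that divide evenly by the given divisor."""

        def __init__(self, divisor):
            self.divisor = divisor

        def __str__(self):
            return 'IsDivisibleBy(%r)' % (self.divisor,)

        def match(self, actual):
            # Return None on success, or a Mismatch describing the
            # failure; no exception is raised here, so the check is
            # reusable and composable.
            remainder = actual % self.divisor
            if remainder:
                return Mismatch('%r has remainder %r' % (actual, remainder))
            return None

    class TestDivisibility(TestCase):

        def test_ten_by_five(self):
            self.assertThat(10, IsDivisibleBy(5))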

Secondly, the addDetail API referenced above, in combination with testtools.TestCase.addOnException, will permit capturing the entire working area when a test fails – something developers currently have to fiddle about with breakpoints to achieve. This hasn’t been done yet, but it is a straightforward patch I hope to do in the new year.
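A hedged sketch of what that patch might look like (the detail name is made up; exception handlers receive the exc_info of the failure):

    import os

    from testtools import TestCase
    from testtools.content import text_content

    class TestWithWorkingAreaCapture(TestCase):

        def setUp(self):
            super(TestWithWorkingAreaCapture, self).setUp()
            # On any exception, attach a listing of the working area
            # as a detail instead of requiring a breakpoint session.
            self.addOnException(self._capture_working_area)

        def _capture_working_area(self, exc_info):
            listing = '\n'.join(sorted(os.listdir('.')))
            self.addDetail('working-area', text_content(listing))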

Lastly, testtools offers testtools.TestCase.getUniqueInteger and testtools.TestCase.getUniqueString, which are not yet used in bzr tests, but which we may start using soon.
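A tiny example of the sort of use they might get – minting throwaway identifiers without inventing them by hand:

    from testtools import TestCase

    class TestUniqueValues(TestCase):

        def test_distinct_names(self):
            # Each call returns a value unique within the test case.
            self.assertNotEqual(
                self.getUniqueString(), self.getUniqueString())
            self.assertNotEqual(
                self.getUniqueInteger(), self.getUniqueInteger())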

Beyond that, the other features of testtools are already present in bzrlib, and we simply need to find and delete more duplicated code.