27 Jun 2008

Dear lazyweb number 3.

So far, I’ve asked:

high latency net simulations – great answers.

python friendly back-end accessible search engines – many answers, none that fit the bill. So I wrote my own :).

Today, I shall ask – is there a python-accessible persistent B+tree (or hashtable, or …) module? Key considerations:

– scaling: millions of nodes are needed, with low-latency access to a node’s value and low-latency determination of a node’s absence

– indices are write-once (e.g. a group of indices is queried, and data is expired/altered by some generational tactic such as combining existing indices into one larger one and discarding the old ones)

– reading and writing are suitable for sharply memory-constrained environments. ideally only a few hundred KB of memory are needed to write a 100K-node index, or to read those same 100K nodes back out of a million-node index. temporary files during writing are fine

– backend access must either be via a well-defined minimal API (e.g. ‘needs read, readv, write, rename, delete’) or be customisable in python

– easy installation – if C libraries etc. are needed, they must already be pervasively available to windows users and Ubuntu/Suse/Redhat/*BSD systems

– ideally sorted iteration is available as well, though it could be layered on top

– fast, did I mention fast?

– stable formats – these indices may last for years unaltered after being written, so any libraries involved need to ensure that the format will be accessible for a long time. (e.g. python’s dump/marshal facility fails)
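For concreteness, here is a sketch of the minimal backend API described above. The class and method names are my own invention for illustration, not any existing library’s interface:

```python
class StorageBackend:
    """Minimal API an index implementation would code against."""

    def read(self, name):
        """Return the full contents of the file `name`."""
        raise NotImplementedError

    def readv(self, name, offsets):
        """Yield (offset, bytes) for each (offset, length) requested."""
        raise NotImplementedError

    def write(self, name, data):
        raise NotImplementedError

    def rename(self, old, new):
        raise NotImplementedError

    def delete(self, name):
        raise NotImplementedError


class MemoryBackend(StorageBackend):
    """Trivial in-memory implementation, e.g. for tests."""

    def __init__(self):
        self._files = {}

    def read(self, name):
        return self._files[name]

    def readv(self, name, offsets):
        data = self._files[name]
        for offset, length in offsets:
            yield offset, data[offset:offset + length]

    def write(self, name, data):
        self._files[name] = data

    def rename(self, old, new):
        self._files[new] = self._files.pop(old)

    def delete(self, name):
        del self._files[name]
```

readv is what matters for the latency goal: one request can fetch several index pages without a round trip per page.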

sqlite and bdb already fail this requirements list.

snakesql, gadfly, buzhug and rbtree fail too.
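The generational tactic I have in mind – combine several sorted, write-once indices into one and drop the old ones – is cheap to sketch in python with heapq.merge, which streams its inputs rather than loading them. The function names here are illustrative only:

```python
import heapq


def tagged(index, generation):
    # Decorate each (key, value) pair with its generation number, so
    # heapq.merge keeps pairs for the same key in generation order.
    for key, value in index:
        yield key, generation, value


def combine_indices(*indices):
    """Merge sorted, write-once (key, value) indices into one.

    Inputs are streamed, so memory use is proportional to the number
    of indices, not the number of nodes. On duplicate keys the later
    (newer-generation) index wins, which is how stale data expires.
    """
    result = []
    for key, generation, value in heapq.merge(
            *(tagged(index, g) for g, index in enumerate(indices))):
        if result and result[-1][0] == key:
            result[-1] = (key, value)   # later generation supersedes
        else:
            result.append((key, value))
    return result
```

In a real implementation the merged pairs would be streamed straight into the new index file rather than accumulated in a list, keeping within the memory budget above.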

14 Jun 2008

Rethinking annotate: I was recently reminded of Bonsai for querying vcs history. GNOME runs a bonsai instance. This got me thinking about ‘bzr annotate’, and more generally about the problem of figuring out code.

It seems to me that ‘bzr annotate’ is, like all annotate implementations I’ve seen, pretty poor at really conveying how things came to be – you have to annotate several versions, cross-reference revision history, and so on. ‘bzr gannotate’ is helpful, but still not awesome.

I wondered whether searching might be a better metaphor for getting some sort of handle on what is going on. Of course, we don’t have a fast enough search for bzr to make this plausible.

So I wrote one: bzr-search, in my hobby time (my work time is entirely devoted to landing shallow branches for bzr, which will make a huge difference when pushing new branches to hosting sites like Launchpad). bzr-search is alpha quality at the moment (though there are no bugs that I’m aware of). It’s mainly missing optimisation, plus features and capabilities that would be useful, like meaningful phrase searching, stemming, and optional case insensitivity on individual searches.

That said, I’ve tried it on some fairly big projects – like my copy of python here:

 time bzr search socket inet_pton   (about 30 hits, first one up in 1 second)

 real    0m2.957s
 user    0m2.768s
 sys     0m0.180s

The index run takes some time (as you might expect, though like I noted – it hasn’t been optimised as such). Once indexed, a branch will be kept up to date automatically on push/pull/commit operations.

I realise search is a long slope to get good results on, but hey – I’m not trying to compete with Google :). I wanted something that had the following key characteristics:

– worked when offline

– simple to use

– easy to install

Which I’ve achieved – I’m extremely happy with this plugin.

What’s really cool though, is that other developers have picked it up and already integrated it into loggerhead and bzr-eclipse. I don’t have a screenshot for loggerhead yet, but here’s an old one. This old one does not show the path of a hit, nor the content summaries, which current bzr-search versions create.

10 Jun 2008

Recently I read about a cool bugfix for gdb in the Novell bugtracker on planet.gnome.org. I ported the fix to the ubuntu gdb package, and Martin Pitt promptly extended it to have an amd64 fix as well.

I thought I would provide the enhanced patch back to the Novell bugtracker. This required creating a new Novell login, as my old CNE details are so far back I can’t remember them at all.

However, I hit a hard stop when I saw this at the bottom of the form:

“By completing this form, I am giving Novell and/or Novell’s partners permission to contact me regarding Novell products and services.”

No thank you, I don’t want to be contacted. WTF.

09 Jun 2008

So, the last lazyweb question I asked had good results. Time to try again:

What’s a good python-accessible, cross-platform-and-trivially-installable (windows users), flexible (we have plain text, structured data, etc., and a back-end storage area which is only accessible via the bzr VFS in the general case), fast (upwards of 10^6 documents) text index system?

pylucene fails the trivially-installable test (apt-cache search lucene -> no python bindings), and the bindings are reputed to be SWIG :(. xapian might be a candidate, though I have a suspicion from the reading I have done so far that SWIG is there as well – and we’ll have to implement our own BackEndManager subclass back in python. That might be tricky – my experience with python bindings is that folk tend to think of trivial consumers only, not of python providing core parts of the system :(.

So I’m hoping there is a Better Answer just lurking out there…

Updates: sphinx looks possible, but about the same as xapian – it will need a custom storage backend. google desktop is out (apart from anything else, there is no way to change the location where documents are stored, nor any indication of a python API to control what is indexed).

It looks like I need to be considerably more clear :). I’m looking for something to index historical bzr content, such that indices can be reused in a broad manner (e.g. index a branch on your webserver), are specific to a branch/repository (so you don’t get hits for e.g. the working tree of a branch), with a programmatic API (so that the bzr client can manage all of this), and with no requirement for a daemon (low barrier to entry for non-admin users).
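To make that shape concrete, here is a toy sketch of what I mean: a daemon-free inverted index keyed per revision, driven entirely from python. All names here are mine for illustration – this is not bzr-search’s actual API:

```python
class RevisionIndex:
    """Toy inverted index over (revision, path, text) documents.

    No daemon and no external storage: everything is plain python
    data that could be serialised into a branch's own storage area.
    """

    def __init__(self):
        self._postings = {}   # term -> set of (revision_id, path)

    def index_document(self, revision_id, path, text):
        # Naive tokenisation: lowercase whitespace-separated terms.
        for term in text.lower().split():
            self._postings.setdefault(term, set()).add((revision_id, path))

    def search(self, *terms):
        """Return sorted hits containing every term (AND semantics)."""
        hits = None
        for term in terms:
            found = self._postings.get(term.lower(), set())
            hits = found if hits is None else hits & found
        return sorted(hits or set())
```

A real index would of course need stable on-disk serialisation and the generational combining discussed above, but the programmatic, serverless shape is the point.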

04 Jun 2008

So I’ve been playing with Mnemosyne recently, using it to help brush up on my woeful Latin vocabulary. I thought it would be a good idea to get some of that data out of my head and into Ubuntu (which has a Latin translation).

Imagine my surprise when, after installing the Latin language pack (through the gui), I could not log into Ubuntu in Latin?!

It turns out that there is no Latin locale in Ubuntu, or indeed in glibc. This is kind of strange (there is an Esperanto locale). Remember that locales combine language and location – they describe how to format money, numbers, telephone details and so on.

So clearly, I needed to add a Latin locale. I could add one for just me (e.g. la_AU), or I could add a generic one (helpfully using AU values) on the betting chance that at this point there are not enough folk wishing to log in in Latin (after all, you can’t!) for us to need one per country. And even more so, doing la_AU doesn’t make a lot of sense – there isn’t a pt_AU locale even though there are Portuguese speakers living in Australia. (The root issue here is that location and language are conflated. POSIX, I hate thee.) So, a quick crash course in locales and some copy and paste later, and there is a Latin locale.
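The language/location conflation is visible right in the POSIX locale name format, language_TERRITORY.codeset. A small illustrative parser (the function name is mine, and @modifier suffixes like ca_ES@euro are ignored for brevity):

```python
def parse_locale_name(name):
    """Split a POSIX locale name like 'pt_BR.UTF-8' into its parts.

    Language and territory are fused into one identifier, which is
    exactly why a language-only locale like Latin is awkward: there
    is no natural territory to pick.
    """
    codeset = None
    if "." in name:
        name, codeset = name.split(".", 1)
    territory = None
    if "_" in name:
        name, territory = name.split("_", 1)
    return name, territory, codeset
```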

Installing that on my system got me a latin locale, but gdm still wouldn’t let me select it. It turns out that gdm feels the urge to maintain its own list of what locales exist, and what to call them. I thought duplication in software was a bad idea, but perhaps I don’t understand the problem space enough. Anyhow, time to fix it.

And because this is something other people may be interested in, and the patches are not yet in Ubuntu because upstream glibc may choose a different locale code (e.g. la_AU), I’ve finally had a reason to activate my PPA on Launchpad, so there are now binary packages for hardy for anyone who wants to play with this!