2022-11-17 – Faster DNS name decompression

Earlier this year, I rewrote BIND’s DNS name compression algorithm. That work is in the November release, BIND 9.19.7.

Last week I rewrote the reverse algorithm, DNS name decompression, and made it about 2x faster. I enjoyed deleting this comment that dates back to January 1999:

 * Note:  The following code is not optimized for speed, but
 * rather for correctness.  Speed will be addressed in the future.

So here is a collection of folklore and tricks related to DNS name decompression.

read more ...

2022-10-12 – hg64 multithreaded histograms

Since my blog post about hg64 histograms in July I have continued occasional hacking on the code, so it’s time to post an update.

As before, my proof-of-concept histogram data structure hg64 is available from:

read more ...

2022-09-23 – How big is this integer type?

This week I am on “escalation engineer” duty, which means triaging new bug reports. One of them was found by ubsan (-fsanitize=undefined), which reported an overflow in a signed integer shift. This was fun to fix because it needed a bit of lateral thinking.


read more ...

2022-07-15 – hg64: a 64-bit histogram data structure

See 2022-10-12 for an update

I have reached the point in my main project (qp-trie for BIND) where I want to collect performance statistics. Things like:

In a wider context, those using BIND for large authoritative server systems would like better statistics on things like:

For my own purposes, I’m no longer satisfied with summarizing performance with just the mean and standard deviation. And in an operational context, it’s useful to know that outliers exist (rare outliers can be hidden by simple summary statistics), and the size of an outlier can be a useful clue.

So, we need histograms!


I have written a proof-of-concept histogram data structure called hg64, which you can get from:

It can load 1 million data items in about 5ms (5ns per item), and uses a few KiB of memory.

Here I will write about how hg64 came to be.

read more ...

2022-07-01 – A DNS name compression algorithm

In our previous episode, I optimized tolower() using SWAR tricks. That was in fact a side-quest to a side-quest to a side-quest. The first side-quest was trying out a new DNS name compression algorithm; in BIND, compression is performance-critical and a heavy user of tolower(), which is what set me off on the second and third side-quests.

Now let’s pop the stack, and talk about DNS names.

read more ...

2022-06-27 – tolower() in bulk at speed

Here’s a fun bit of Monday optimization.

DNS servers often have to convert DNS names to their canonical lower case form. BIND has to do so a bit more than most because it tries to preserve the case of DNS names rather than canonicalizing them.

I was recently distracted by a cleanup opportunity: there were multiple maptolower lookup tables in BIND, so I decided to make a home for them so that we would only need one copy. Then I thought, surely there are faster ways to implement tolower() than a lookup table.
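
The full post has the details, but to give the flavour, here is a sketch of the kind of SWAR (SIMD within a register) trick involved - my reconstruction for illustration, not necessarily the exact code that landed in BIND:

```c
#include <stdint.h>

/* Case-fold 8 bytes at a time. Byte-parallel comparisons find the
 * bytes in 'A'..'Z', then their 0x20 case bit is ORed in. */
static inline uint64_t tolower_8x8(uint64_t octets) {
    uint64_t all_bytes = 0x0101010101010101;
    /* clear the top bit of each byte so the adds below cannot
     * carry from one byte into the next */
    uint64_t heptets = octets & (0x7F * all_bytes);
    /* top bit of each byte set iff the byte is greater than 'Z' */
    uint64_t is_gt_Z = heptets + (0x7F - 'Z') * all_bytes;
    /* top bit of each byte set iff the byte is at least 'A' */
    uint64_t is_ge_A = heptets + (0x80 - 'A') * all_bytes;
    /* top bit set iff the byte is in 'A'..'Z' and was ASCII */
    uint64_t is_upper = (is_ge_A ^ is_gt_Z) & ~octets & (0x80 * all_bytes);
    /* move each flag down to the 0x20 case bit and OR it in */
    return octets | (is_upper >> 2);
}
```

Because the top bits are masked off before the additions, nothing carries between bytes, so eight bytes are case-folded in a handful of ALU operations.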

read more ...

2022-06-22 – Compacting a qp-trie

My new job is working on BIND for ISC, and my main project is to replace BIND’s core red-black tree data structure with my qp-trie.


In the summer of 2021 I wrote some notes on page-based GC for qp-trie RCU which I then went on to implement in my fork of NSD.

Since the start of May 2022 I have ported the NSD version of my qp-trie to BIND, with several improvements:

The notes I wrote last summer turned into code very nicely: NSD proved to be a good place to try out the ideas. And more recently, I am pleased with how the code adapted to the more complicated demands of BIND.

But there’s one area that has been problematic: compaction.

read more ...

2022-06-04 – Choice on Units of Measurement: Markings and Sales

My response to the UK government consultation on using imperial units of measure for retail.

Regarding the foreword to the consultation description, my children were weighed in kilos at birth, so whoever wrote the foreword must be very old, and talking about babies who are now grown adults.

And for metrological purposes, British weights and measures have been derived from the metric system since the international convention on the metre in 1875 and the production of the new metric prototypes in London in 1898, because the metric standards were made to a higher quality with greater reliability than the old imperial standards. So it’s incorrect to say that we started adopting the metric system in 1995; that was actually when we started abolishing imperial (and that was at the initiative of the UK government, not the EU).

Oops, I should have mentioned that the Tories were in power in 1995.

Q. 1 for all,

1 a) Are there any specific areas of consumer transactions that should be a priority for allowing a choice in units of measurement, and why?

None, because there is no need to make it more confusing to compare sizes and prices.

1 b) Are there any specific areas that you think should be excluded from a choice in units of measurement, and why?

There should be a single set of standard units of measure so that it is as easy as possible to compare the sizes and prices of goods, especially (e.g.) price per kilo or price per litre.

1 c) If an item is sold in imperial measures, should there be a requirement for a metric equivalent alongside it?

Metric measures should always be required, and there should always be clear labelling showing the price per some power of 10 (e.g. 100g or 1kg)

Q. 2 for businesses,

n/a (I am replying as an individual)

Q. 3 for consumers,

3 a) If you had a choice, would you want to purchase items:

I always want to purchase items in metric (preferably a round number when measured in metric units), and I want it to be easy to find the quantity labelled in metric.

3 b) Are you more likely to shop from businesses that sell in imperial units?

I am likely to avoid shops that use imperial units, because the measurements will be confusing and unfamiliar.

3 c) Do you foresee any costs or benefits to you from businesses being permitted to sell:

I expect it will be detrimental to allow businesses to sell in imperial units only, because it will make it easier for them to confuse customers, misprice goods, and hide their malpractice.

3 d) Do you have experience of buying solely in imperial units?

I am 47 years old and I cannot remember when imperial measures were in wide use for anything except milk and beer.

Milk, for example, is often confusing when 2 litre bottles are sold alongside 4 pint bottles, and I have to carefully examine the labels to find out which is the larger volume and what is the price per litre of the 4 pint bottle.

For beer, how can I compare a pint bottle and a 33cl bottle? It’s relatively easy for 50cl vs 33cl because the ratio is 3/2. In practice I think of a pint as about half a litre, so I can compare in metric. And, when I am served beer in a pub in a pint-to-brim glass, what I usually get is closer to half a litre than a pint.

Q. 4 for trading standards,

n/a (I am replying as an individual)

2022-04-20 – really divisionless random numbers

I previously wrote about Daniel Lemire’s algorithm for nearly divisionless unbiased bounded random numbers. Recently I found out there is a way to get really divisionless random numbers, as explained by Steve Canon and demonstrated by Kendall Willets.

I have written my own version of really divisionless random numbers so I could compare it with Lemire’s algorithm. Here’s how it works.

biased numbers

As before, I’m going to use W == 1<<32 as an abbreviation for two to the power of the word size. (Except I am using 32 bit words instead of 64 bits to make the worst cases more visible in the benchmark later on.)

We have a raw random number generator rng() that produces numbers in the range [0,W), but we are going to treat it as if it produces fixed-point fractions in the range [0,1), as if it were rng() / W. Then we can get a value in our desired range [0,limit) by multiplying rng() * limit. This gives us a 32.32 fixed-point result, represented as a 64 bit number.

    uint64_t num = (uint64_t)rng() * (uint64_t)limit;
    uint32_t value = (uint32_t)(num >> 32);

This random value is biased. The bias is due to the limited precision of the fraction we started with. Lemire’s nearly-divisionless algorithm uses rejection sampling to debias its result. But what if we could calculate it with unlimited precision instead?

multiprecision fractions

Let’s make a multiprecision fraction in [0,1) from a series of raw random numbers. We started with rng() / W, which we can extend W bits at a time:

    rng()/W + rng()/(W*W) + rng()/(W*W*W) + ...

When we multiply by the limit we can distribute the multiplication through the sum:

    limit*rng()/W + limit*rng()/(W*W) + ...

Each term of the sum is represented as a 32.32 fixed-point number, same as before. To add them up, the upper half of the term on the right of a + is added to the lower half of the term on the left. But this can overflow, in which case we need to propagate a carry.

It’s possible for a carry to propagate all the way to the left and affect the integer part of our result. Ignoring these carries is where the bias comes from in the limited-precision multiply. Instead, we are going to handle carries properly.

unbounded but not infinite

We will add up our multiprecision sum from left to right, at each step adding the high half of the next term to the low half of the current term. There are three interesting cases:

- the addition overflows: a carry propagates left into the integer part, so the result is value + 1;
- the sum is exactly W-1: a carry from further right could still propagate all the way through it, so we have to keep going;
- the sum is less than W-1: no later carry can reach this far, so the result is value.

In the middle case where we need to keep going, we don’t actually have to keep the result of the sum around (because we know it is W-1) so we can just discard it, forget about dividing by W, and re-do the loop exactly as if we were adding the second term again.

The middle case is also very unlikely, probability 1 / W, so the loop will almost never have more than one iteration.

early escape

There is a fourth case that allows us to skip getting the next term’s random number: each term’s high half hi is always less than limit, so if adding limit to our fractional part cannot overflow, then adding hi cannot either, and we can return immediately.

bitwise hacks

OK, we are nearly there! But these additions can overflow our word size, and we would prefer to be able to test for a carry within our word size and without overflow. The following tests are all equivalent:

    lo + hi >= W       // the addition overflows a 32-bit word
    hi > W - 1 - lo    // rearranged so that nothing overflows
    hi > ~lo           // because ~lo == W - 1 - lo

All of our carry propagation checks test if something added to our fractional part (what I called lo above) would overflow, so we can keep the fraction part inverted all the time.


So what I ended up with is:

    uint32_t really_divisionless(uint32_t limit) {
        uint64_t num = (uint64_t)rng() * (uint64_t)limit;
        uint32_t value = (uint32_t)(num >> 32);
        uint32_t frac = ~(uint32_t)(num);
        while(limit > frac) {
            num = (uint64_t)rng() * (uint64_t)limit;
            uint32_t more = (uint32_t)(num >> 32);
            if(more < frac) return(value + 0);
            if(more > frac) return(value + 1);
            frac = ~(uint32_t)(num);
        }
        return(value);
    }

It is not quite as brief as Lemire’s algorithm:

    uint32_t nearly_divisionless(uint32_t limit) {
        uint64_t num = (uint64_t)rng() * (uint64_t)limit;
        if((uint32_t)(num) < limit) {
            uint32_t residue = -limit % limit;
            while((uint32_t)(num) < residue)
                num = (uint64_t)rng() * (uint64_t)limit;
        }
        return((uint32_t)(num >> 32));
    }


I’ve written a simple benchmark harness to compare nearly-divisionless and really-divisionless random numbers.

On my Apple M1 Pro, I get:

                           nearly             really
         limit     rng()     time     rng()     time
            10 100000000 0.288708 100000000 0.272645 (+0.199934)
           100 100000000 0.270472 100000002 0.271755 (+0.019925)
          1000 100000004 0.270880 100000018 0.271528 (+0.001843)
         10000 100000195 0.271386 100000226 0.272936 (+0.000239)
        100000 100001588 0.273159 100002420 0.271645 (+0.000011)
       1000000 100022422 0.270636 100023153 0.273379 (+0.000001)
      10000000 100115390 0.274848 100232254 0.275551 (-0.000170)
     100000000 102259813 0.295489 102329850 0.308394 (+0.000040)
    1000000000 107371660 0.506537 123283286 0.554149 (+0.000011)

On an old Intel i5-2520M (2.50GHz) I get:

                           nearly             really
         limit     rng()     time     rng()     time
            10 100000000 0.527669 100000000 0.539901 (+0.199986)
           100 100000002 0.517446 100000004 0.539832 (+0.019943)
          1000 100000003 0.517961 100000029 0.540178 (+0.002025)
         10000 100000164 0.517207 100000203 0.540129 (+0.000231)
        100000 100001551 0.517465 100002365 0.541679 (+0.000082)
       1000000 100022311 0.518833 100023428 0.542076 (-0.000006)
      10000000 100115835 0.521978 100232914 0.546125 (-0.000093)
     100000000 102260934 0.560111 102328804 0.580683 (-0.000099)
    1000000000 107370473 0.925359 123280839 0.922405 (-0.000007)

The most notable thing is that the nearly-divisionless algorithm makes fewer calls to the random number generator. On further investigation (not shown in the results above) I found that both algorithms take the early-exit with about the same probability, but nearly-divisionless calls rng() much less often in the slow path.

The old Intel box is less able to handle the greater complexity of the really-divisionless code, so it is significantly slower even when it mostly takes the early exit.

The differences build up when the limit gets near W, when the algorithms have to hit their slow path more frequently. On the old Intel box, with relatively slow division, nearly-divisionless slows down more than really-divisionless, but the M1 Pro clearly has very fast divisions so nearly-divisionless stays ahead.


It was fun to investigate this really-divisionless algorithm. It’s easier to understand than Lemire’s nearly-divisionless algorithm, but the really-divisionless code ends up being longer and more branchy, so it has a hard time keeping up.

If you need unbiased numbers with a large limit, you are better off using an RNG that produces 64 bits at a time, so that these algorithms almost never need to hit their slow path. Then the relative performance of your rng() and your hardware’s division matters much less, and Lemire’s algorithm wins on brevity.
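
For example, here is a sketch of Lemire's algorithm with 64-bit words. This is my illustration: rng64() is splitmix64 as a stand-in generator, and unsigned __int128 (a GCC/Clang extension) provides the double-width multiply; with 64-bit words the slow path runs with probability at most limit / 2^64.

```c
#include <stdint.h>

/* stand-in 64-bit generator (splitmix64), not part of the post */
static uint64_t rng64_state = 0x9E3779B97F4A7C15;

static uint64_t rng64(void) {
    uint64_t z = (rng64_state += 0x9E3779B97F4A7C15);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EB;
    return z ^ (z >> 31);
}

/* Lemire's nearly-divisionless algorithm, widened to 64-bit words */
static uint64_t nearly_divisionless64(uint64_t limit) {
    unsigned __int128 num = (unsigned __int128)rng64() * limit;
    if ((uint64_t)num < limit) {
        /* slow path: almost never taken when limit << 2^64 */
        uint64_t residue = -limit % limit;
        while ((uint64_t)num < residue)
            num = (unsigned __int128)rng64() * limit;
    }
    return (uint64_t)(num >> 64);
}
```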

2022-03-25 – IETF 113, Vienna, second half

IETF 113 in Vienna is now nearly finished. In between meetings I found some time to look around the Uhrenmuseum and catch up with old friends.

As I mentioned in my previous post I have not been keeping up with IETF activity in the last year or two, so a lot of what I have been doing this week is getting an overview of current activity in the working groups that are most relevant to me. Here I am going to note a few things that caught my attention.


Wednesday morning was the measurement and analysis for protocols research group meeting, with several DNS-related presentations which provoked some interesting discussions.

I liked the performance measurements of DNS over different transport protocols, mainly because it was unsurprising: it is reassuring when measurements confirm predictions. They only measured the latency for the first query on a connection, so it will be interesting to see some follow-up work on multi-query connections.


The DNS privacy working group meeting was moved from Friday to a joint session with the ADD working group on Thursday.

The current topic is (still) the difficult question of how to encrypt DNS traffic between recursive resolvers and authoritative servers. I think I am less grumpy about the working group’s problems with authenticated encryption than I used to be, but it is still frustrating.


The adaptive DNS discovery working group aims to give devices more information about a network’s DNS resolvers, such as support for encrypted transports.

There was a presentation about “split horizon DNS configuration” which seemed problematic to me in a number of ways. I am not sure I understood the mechanism properly - it’s to do with putting information about private domains in the public DNS - so I need to read the draft and come up with some thoughtful questions and comments.


On the mailing list, the brokenness of SHA-1 raised its ugly head again. I posted a message to remind everyone of what the SHA-1 break means for DNSSEC and that they should not be complacent about it.


I had a chat with my friend Reese about the TCP maintenance and minor extensions meeting. Reese works on TCP performance for Netflix, so the presentations about congestion control tweaks were particularly relevant to them. But the most entertaining talk was on tcpls, which is basically “what if QUIC, but TCP?” to which most people here answer, “why?!?!”


My discussion about data structures with folks from NLnet Labs continued over email. Jeroen Koekkoek gave me an idea for reducing locking and copy-on-write problems by getting rid of prev/next pointers in DNS record objects.


Now it is Friday afternoon, the meeting is over, and soon it will be time to leave for the journey home. It has been really nice to see people in person again, and get to know some of my new colleagues better.

And it isn’t long until the next trip, which will be UKNOF in Manchester next month.

2022-03-21 – IETF 113, Vienna, first half

This week I am at an IETF meeting in Vienna. Yesterday afternoon I took a few pictures while I walked around the city; this blog item is about my non-touristy activity.

isc mfld

I started work for isc.org at the beginning of this month. We dealt with a lot of the new employee logistics before my official start date, like getting a new work laptop and sorting out logins for email and other accounts. And Jeff Osborne (president of ISC) encouraged me to come to IETF 113 in Vienna to meet him and some of the other ISC staff in person soon after my start.

And so, the MFLD track (many fine lunches and dinners) has mostly been chatting with new colleagues.


One of our aims with BIND is to revamp its more difficult parts, trying to make them simpler, and maybe also improving performance. I come to ISC with my qp-trie data structure, which I hope can replace BIND’s rbtdb, aka red-black tree database, BIND’s in-memory data structure for looking up DNS records.

Last summer I did some work on NSD to see how my DNS-trie variant would work in the context of a real (but relatively simple) DNS server, and further experiments to try out a new copy-on-write concurrency strategy. During the IETF hackathon on Saturday and Sunday I revisited my NSD fork to remind myself which branch does what. I discussed it with Benno Overeinder and Willem Toorop from NLnet Labs, and wrote up some notes for them and myself. They are planning to revisit NSD’s zone lookup data structures, and my qp-trie is one of the candidates.


I had a discussion with my new colleague Petr Špaček about testing BIND’s rbtdb refactoring, especially how to exercise things like locking that unit tests would not, and where we need better performance metrics. It’s early days yet, but it will be helpful to get benchmark baselines in place soon.

radix trees

Petr introduced me to Jan Včelák; they both worked at cz.nic in the past, and know of my qp-trie work in Knot DNS. We had a chat about fast lookup for IP address prefixes, and Jan pointed me to a paper describing a “Tree Bitmap” that he had implemented.

There has been a lot of research into fast IP address prefix lookup, because it is in the fast path of network routers, and they often use the same popcount packed array trick as my qp-trie. There is a radix tree in BIND which is used for access control list matching, and which could definitely be made a lot faster.
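
The popcount trick, roughly sketched (illustrative code of mine, using GCC/Clang's __builtin_popcountll, not BIND's or Knot's actual data structures): a node keeps a bitmap saying which child slots are occupied plus a packed array holding only the occupied children, and a child's position in the packed array is the count of set bits below its slot.

```c
#include <stdint.h>
#include <stdbool.h>

/* a node has up to 64 possible child slots, but only allocates
 * space in the packed array for the ones that exist */
typedef struct node {
    uint64_t bitmap;        /* bit i set iff child i exists */
    struct node **children; /* packed array, no gaps */
} node;

static bool has_child(const node *n, unsigned i) {
    return (n->bitmap >> i) & 1;
}

/* position of child i in the packed array: the number of
 * existing children in slots below i */
static unsigned child_index(const node *n, unsigned i) {
    uint64_t below = n->bitmap & ((1ULL << i) - 1);
    return (unsigned)__builtin_popcountll(below);
}
```

The payoff is that a sparse 64-way node costs only one word of bitmap plus pointers for the children that actually exist, and lookup is one mask and one popcount instruction.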


On Tuesday morning was the DNSOP working group meeting. I have not been paying much attention to DNSOP in the last couple of years, so my aim was mostly to get an idea of the current state of play. It turns out that the working group has been clearing its backlog, and is now considering new work for adoption. I came away with a couple of drafts that I should look at more closely.

First, the “DNS referral glue requirements” draft, where there are some questions on terminology that I have opinions about.

And second, “using service bindings with DANE” which gets into awkward questions about the semantics of aliases in the DNS, with particular reference to DANE SRV RFC 7673 which has my name at the top. I am not filled with joy at the prospect of arguing about this again, but I feel I ought to…

2022-02-23 – Addenbrooke's cataract clinic

Following the cataract clinic referral I got in September, I spent this afternoon at Addenbrooke’s having my eyes examined. It was about as useful and informative as I hoped, though it took a long time. (I left the house at 13:00 and got back at 17:00.)

the cataract

I have a large dense cataract, which means the whole lens must be replaced. This is a more difficult operation than usual: cataract surgery commonly deals with softer or smaller cataracts, where only the inner part of the lens needs to be replaced, a simpler and less invasive procedure.


The new lens will be fixed focus.

It is difficult to say how much vision I will have. I am going to get a copy of my optician’s records, which should give the surgeons a better idea of how my vision might compare to my benchmark, i.e. the way it was before the lens clouded over.

(To be honest I was hoping for better than that! But I’ll be happy if I get some of my left-side peripheral vision back.)

There is a small risk that my visual cortex might have problems integrating the vision from both eyes; I got the impression that this was explained to prepare me for the possible range of outcomes.

There was (curiously) less discussion about surgical complications, though they are covered in the patient handouts.


In many cases cataract surgery is done under local anaesthetic, but because I am younger (so a general is less risky) and because it will be a longer procedure, we decided to book me in for a general.

Afterwards I will need someone (Rachel!) to escort me home.

And there is a course of (I think antibiotic?) eye drops 4x each day for (IIRC) 28 days, and a protective patch for the first few nights.

waiting time

probably about six months

machine that goes ping

There were some curious optometry devices that I haven’t seen before.

To get the prescription approximately right for the replacement lens, they measure the shape of the eye. (Longer eyes are more short-sighted.) The current technology for this is optical, but it didn’t work on my eye (cataract too dense) so they used an older ultrasound thing that reminded me of a CRT light pen. It did, indeed, go ping when making a measurement.

Later on one of the surgeons used a more typical ultrasound device: I closed my eye and he probed it through my eyelid and a generous smear of KY jelly.

There were various kinds of eye drops; my vision is still a bit blurry from dilated pupils!

2022-01-25 – nsnotifyd-2.0 released

I have made a new release of nsnotifyd, a tiny DNS server that just listens for NOTIFY messages and runs a script when one of your zones changes.

This nsnotifyd-2.0 release adds support for TCP. The original nsnotifyd only supported UDP, which works fine with BIND, but Knot DNS sends NOTIFY messages over TCP.

As well as its new TCP support, the nsnotify client program that sends NOTIFY messages can now send bulk notifications for lots of zones, as well as being able to send notifications to lots of recipient servers.

Many thanks to Niels Haarbo and DK Hostmaster for requesting TCP support and for sponsoring my work to implement it.

I like nsnotifyd because I wrote it in 2015, and I haven’t touched it since then (until this month). I usually hear nothing about nsnotifyd, but occasionally someone mentions they are using it. For example the Guardian tech blog said of nsnotifyd, “like all good *nix tools it does one thing well”, and JP Mens called it “a gem of a utility”.

Happy users, no bug reports, software bliss.

2021-12-20 – Mac Mini Linux frustration

I have an old Mac Mini which I am repurposing as a Linux box. Last month I upgraded its hardware and posted pictures on Twitter. This weekend I upgraded the software.

However there is a compatibility problem that I have not managed to solve: Linux isn’t able to drive the display hardware as well as it should.

read more ...

2021-11-30 – On the move

I have sent my formal resignation letter: I am leaving the University. From March, I will be working full-time for isc.org on BIND.

I said in my letter that it has been a privilege and a pleasure working for the University. That isn’t a polite fiction; I like the people I have worked with, and I have had interesting and rewarding things to do. But this is an exciting opportunity which came at the right time, so I allowed myself to be dragged away.

I have been a contributor to BIND for several years now. When I met Jeff Osborne (the president of ISC) in Amsterdam 3 years ago, he joked that I worked for them but he didn’t pay me :-) And I have met many of the ISC folks at various conferences. So in some ways it’s known territory.

But some things are going to be very different. The university has more than 1000 times as many people as ISC, whereas ISC is almost entirely remote, so it spans over 1000 times the area. From my own point of view, I am looking forward to working in a bigger team than I have been used to.

This all happened fairly fast: Ondřej Surý (head of BIND development) suggested the new job to me only a few weeks ago, and I still find it hard to believe it’s really happening! We have talked over email about what Ondřej would like me to work on, and I’m looking forward to getting stuck in.

2021-09-22 – My cataract

As well as being very short-sighted in my right eye, I have a cataract in my left eye.

This post is going to discuss medical stuff that you might prefer to skip.

how it was

When I was very small, my grandmother saw me squinting and told my parents that they should get my eyes looked at; and that’s how we found out I had a congenital cataract.

It is (or was) weird enough to make opticians curious: it was like a little crystal in the middle of the lens. The effect on my vision was curious too.

In bright light, when my pupil was small, I could see almost nothing with my left eye, so I had no depth perception. This was problematic for school games involving flying balls, like cricket or tennis.

In low light I could see blurry shapes, but for a long time I thought my left eye was basically useless. But when I was about 20 we went on a trip to a theme park. I went into a 3D cinema, not expecting to get much out of it, but I didn’t want to be separated from my friends. I was surprised that I did, in fact, see the exaggerated 3D effects of a dog putting its nose in my face and things like that. Fun!

(lack of) treatment

When I was still growing, I regularly (once or twice a year) went to Moorfields Eye Hospital in London to get the cataract examined by an expert. It never really changed much, and it wasn’t troublesome enough to justify surgery, especially since I was still growing and surgical techniques were improving, so it made sense to leave it.

I became short-sighted around puberty, and since my teenage years my eyes have just been checked by normal opticians, and my right eye messed around a lot more than my left. We continued to leave the cataract alone.

middle age

Now I am in my late 40s, and in the last few years I have started getting presbyopia. Many years ago I chose to get glasses with small frames so that the edges of the lenses were not too thick; now I peer under the lenses when I need to look closely at something.

At about the same time as I noticed I was getting long-sighted, my cataract also changed. Basically, the whole lens clouded over. This has made it obvious that I had a useful amount of peripheral vision in my left eye, because now I am much more frequently surprised by people or things that sneak up on my left.


I had a long-delayed eye test earlier this month during which we discussed my cataract. Cataract surgery is a lot better now than it was, and my cataract is a lot more annoying than it was, so I think it’s worth getting a specialist opinion on whether surgery will help more than it hurts.

To be honest the idea of it is freaky and scary, but rationally I know a lot of people have cataract surgery each year, and I hear less horrible things about it than I do about laser surgery for myopia.

Today I got a letter from Addenbrooke’s to say their triage team had rejected my referral, because the referral form was incomplete or unclear or sent to the wrong place or something. Vexing. So I emailed my optician and my GP with a list of things that I think need to be mentioned in the referral, with reference to some useful documents about the clinical criteria needed to justify it.

Hopefully the second try will actually get a specialist to agree to eyeball my eyeball…


A few weeks later some combination of my GP and optometrist managed to get the referral un-rejected, so I have an appointment with the Addenbrooke’s cataract clinic on 23rd February 2022. A bit of a wait, but I was told to expect it…

2021-06-23 – Page-based GC for qp-trie RCU

Here are some ideas for how a special-purpose allocator might improve a qp-trie implementation:

The downsides are:

Let’s dig in…

read more ...

2021-06-11 – Two highlights of 2021

video: Tim Hunkin

I first encountered Tim Hunkin via his 1980s TV series, The Secret Life Of Machines, in which he and Rex Garrod explained how various household electrical gadgets work. One of the remarkable experiments I remember from back then was their demo of how audio tape works: they rubbed rust on sticky tape, and used it to record and play back sound. Not very hi-fi, but it worked!

Tim Hunkin’s main occupation since then seems to have been as a maker of 3D mechanical cartoons. I call his machines “cartoons” because for a while he had a regular cartoon in the Observer, The Rudiments Of Wisdom, or, Almost Everything There Is To Know, and his 2D people and 3D people look very similar. He has an exhibit in the basement of the Science Museum in London, inspired by The Secret Life Of Machines, and amusement arcades in Southwold (the Under the Pier Show) and Holborn (Novelty Automation). His machines are surprising and funny!

So, this year Tim Hunkin has made a YouTube series called The Secret Life of Components in which he talks about the parts that he has used when making his machines - a different kind of component in each of the 8 episodes. It’s fascinating and informative.

And, as a bonus, he has also been releasing remastered versions of The Secret Life of Machines: 11 episodes so far, with a new one added each week, plus extra commentary with Tim Hunkin’s memories of filming them.

text: 50 Years of Text Games

I was lucky to find out about this newsletter/blog at about the time of its first article, and I have been looking forward to its weekly entries ever since.

It tends to focus on the people creating the games and the context in which they worked, with enough about the games to give you an idea of what they were like, and less about the technical details. Each episode has an epilogue saying how you can play the game today.

What I love about it is how varied the creators are: the husband-and-wife team in Florida, the lesbian house in Cork, the Czech satirists - and the games too: history, romance, politics, horror.

I confess I’m not a keen player of adventure games, but the intersection of literature, technology, and play is so cool, and this series of articles showed me that interactive fiction is much broader and deeper than I previously realized.

2021-04-10 – Miles and metres

Nautical miles and metres were both originally defined in the same way: each of them was the distance on the surface of the earth subtended by a particular angle. For nautical miles, the angle is a minute of arc; for the metre, it is 1/40,000,000 of a turn. The two definitions are closely related: a turn is 360 × 60 = 21,600 minutes of arc, so a minute of arc subtends about 40,000,000 m / 21,600 ≈ 1852 m, which is the modern nautical mile.

I was idly wondering about the history of this way of defining a unit of length, and how it led to these two particular units, and what (if any) influence they had on each other. The In Our Time episode on Pierre-Simon Laplace briefly discussed his involvement in the definition of the metric system in revolutionary France, which nudged me to actually do some reading.

read more ...

2021-03-22 – Preparing DNS names for faster radix tree lookups

The key™ ingredient of my DNS-trie is a way of re-encoding DNS names so that each byte has less than 48 possible values (so that the fan-out of each node is not too big), and normal hostname characters are encoded in one byte.

But this idea is also useful for speeding up DNS name lookups in general-purpose radix trees.
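
The full post has the real encoding; this sketch of mine just shows the shape of the idea: fold case, give each common hostname byte a single code below 48, and escape everything else as two bytes.

```c
#include <stdint.h>
#include <stddef.h>

enum { ESCAPE = 47 }; /* all codes are < 48, so node fan-out stays small */

/* one byte of a DNS name -> a small code; illustrative values only */
static uint8_t byte_to_code(uint8_t b) {
    if (b >= 'A' && b <= 'Z') b += 'a' - 'A';        /* fold case */
    if (b >= 'a' && b <= 'z') return 13 + (b - 'a'); /* 13..38 */
    if (b >= '0' && b <= '9') return 3 + (b - '0');  /* 3..12 */
    if (b == '-') return 39;
    if (b == '_') return 40;
    return ESCAPE; /* rare byte: emit ESCAPE then the raw byte */
}

/* encode len bytes of name into out (which must have room for
 * 2 * len bytes in the worst case); returns the encoded length */
static size_t encode_name(const uint8_t *name, size_t len, uint8_t *out) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        uint8_t code = byte_to_code(name[i]);
        out[n++] = code;
        if (code == ESCAPE)
            out[n++] = name[i]; /* second byte carries the raw value */
    }
    return n;
}
```

Ordinary hostnames stay one encoded byte per input byte, two names that differ only in case encode identically, and every branching byte fits in a node with fewer than 48 children.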

read more ...