The Dreamwidth feed has not caught this afternoon’s newly added
links, so I am not sure if it is actually working…
There is a lengthy backlog of links to be shared, which will be added
to the public log a few times each day.
The backlog will be drained in a random order, but the link log’s web
page and atom feed are sorted in date order, so the most-recently
shared link will usually not be the top link on the web page.
I might have to revise how the atom feed is implemented to avoid
confusing feed readers, but this will do for now.
The code has been rewritten from scratch in Rust, alongside the static
site generator that runs my blog. It’s less of a disgusting hack than
the ancient Perl link log that grew like some kind of fungus, but it
still lacks a sensible database and the code is still substantially
stringly typed. But, it works, which is the important thing.
edited to add …
I’ve changed the atom feed so that newly-added entries have both a
“published” date (which is the date displayed in the HTML, reflecting
when I saved the link) plus an “updated” date indicating when it was
later added to the public log.
I think this should make it a little more friendly to feed readers.
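For concreteness, here is a sketch of what such an entry might look like (the Atom element names are from RFC 4287, but the dates, id, and link here are made up):

```xml
<entry>
  <title>An interesting link</title>
  <link href="https://example.com/"/>
  <id>tag:example.org,2024:link-1234</id>
  <!-- when I saved the link -->
  <published>2024-05-01T14:00:00Z</published>
  <!-- when it was added to the public log -->
  <updated>2024-05-03T09:30:00Z</updated>
</entry>
```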
Back in December, George Michaelson posted an item on the APNIC blog titled
“That OSI model refuses to die”,
in reaction to Robert Graham’s “OSI Deprogrammer”
published in September. I had discussed the OSI Deprogrammer on Lobsters,
and George’s blog post prompted me to write an email. He and I agreed
that I should put it on my blog, but I did not get a round tuit until
now…
The main reason that OSI remains relevant is that Cisco certifications
require network engineers to learn it. This makes OSI part of the
common technical vocabulary and basically unavoidable, even though (as
Rob Graham correctly argues) it’s deeply unsuitable.
It would be a lot better if the OSI reference model were treated as a
model of OSI in particular, not a model of networking in general. OSI
ought to be taught as an example alongside similar reference models of
protocol stacks that are actually in use.
One of OSI’s big problems is how it enshrines layering as the
architectural pattern, but there are other patterns that are at least
as important:
The hourglass narrow waist pattern, where a protocol stack provides a
simple abstraction and only really cares about how things work on one
side of the waist.
For instance, IP is a narrow waist and the Internet protocol stack
only really cares about the layers above it. And Ethernet’s addressing
and framing are another narrow waist, where IEEE 802 only really cares
about the layers below.
Recursive layering of entire protocol stacks. This occurs when
tunnelling, e.g. MPLS or IPSEC. It works in concert with narrow waists
that allow protocol stacks to be plugged together.
Tunneling starkly highlights what nonsense OSI’s fixed layers are,
leading to things like network engineers talking about “layer 2.5”
when talking about tunneling protocols that present Ethernet’s narrow
waist at their endpoints.
Speaking of Ethernet, it’s very poorly served by the OSI model. Ethernet
actually has three layers:
Then there’s WiFi which looks like Ethernet from IP’s point of view,
but is even more complicated. And almost everything non-Ethernet has gone
away or been adapted to look more like Ethernet…
Whereas OSI has too few lower layers, it has too many upper layers:
its session and presentation layers don’t correspond to anything in
the Internet stack. I think Rob Graham said that they came from IBM
SNA, and concerned terminal features such as simplex or block-mode
operation, and character set translation. The closest the ARPANET /
Internet protocols get is Telnet’s feature negotiation; a lot of the
problem solved by the presentation layer is defined away by the
ASCII-only network virtual terminal. Graham also said that when people
assign Internet functions to layers 5 and 6, they do so based on the
layers’ names, not on how the OSI model describes what they do.
One of the things that struck me when reading
Mike Padlipsky’s
Elements of Networking Style
is the amount of argumentation that was directed at terminal handling
back then. I guess in that light it’s not entirely surprising that OSI
would dedicate two entire layers to the problem.
Padlipsky also drew the
ARPANET layering as a fan
instead of a skyscraper, with intermediate layers shared by some but
not all higher-level protocols, e.g. the NVT used by Telnet, FTP,
SMTP. I expect if he were drawing the diagram later there might be
layers for 822 headers, MIME, SASL – though they are more like design
patterns than layers since they get used rather differently by SMTP,
NNTP, HTTP. The notion of pick-and-mix protocol modules seems more
useful than fixed layering.
Anyway, if I could magically fix the terminology, I would prefer
network engineers to talk about specific protocols (e.g. ethernet,
MPLS) instead of bogusly labelling them as layers (e.g. 2, 2.5). If
they happen to be working with a more diverse environment than usual
(hello DOCSIS) then it would be better to talk about sub-IP
protocol stacks. But that’s awkwardly sesquipedalian so I can’t see it
catching on.
In my previous entry I wrote about constructing a four-point egg,
using circular arcs that join where their tangents are at 45°.
I wondered if I could do something similar with ellipses.
As before, I made an interactive ellipse workbench to
experiment with the problem. I got something working, but I have questions…
I think the instigation was a YouTube food video which led me to try
making popcorn at home from scratch with Nico. It was enormous fun!
And several weeks later it’s still really entertaining to make
(especially when a stray kernel pops after I take the lid off the pan,
catapulting a few pieces in random directions!)
The Novelkeys Kailh Big Switch is a working MX-style
mechanical keyboard switch, but 4x larger in every dimension.
I realised at the weekend that the Big Switch should fit nicely in a
simple Lego enclosure. Because an MX-style switch is usually mounted
in a 14x14 mm square plate cutout, at 4x larger the Big Switch would
need a 56x56 mm mounting hole. Lego aficionados know that studs are
arranged on an 8x8 mm grid; this means the Big Switch hole is exactly
7x7 studs. A plan was hatched and a prototype was made.
In recent weeks I have been obsessed with carbonara: I have probably
been eating it far too frequently. Here’s my recipe. It works well for
1 - 3 people but gets unwieldy at larger quantities.
ingredients
Rough quantities per person:
100g pasta
Spaghetti is traditional but I’ll use any shape.
50g streaky bacon
The traditional ingredient is guanciale; maybe I’ll try that one
day for a special occasion. I use 4 rashers of the thin-sliced
bacon that we get, which is 60g.
one large egg
Typically about 60g
40g grated parmesan
Again, for a special occasion I might try the traditional pecorino
romano. My rule of thumb is that the cheese should weigh half as much
as the egg, but I usually round it up so there’s 100g of mixture.
lots and lots of ground black pepper
method
Get the kettle on the boil and measure out the pasta.
While waiting for the kettle, shred the bacon into a pan.
I use kitchen scissors. (I ought to get our knives sharpened.)
The pan needs to be big enough to stir everything together at the end.
Get the pasta cooking in another pan.
Don’t salt the water, there’s plenty in the bacon and cheese.
Use relatively little water so that it becomes starchy while
cooking. The pasta water will loosen and stabilize the sauce.
Fry the bacon until it has taken on some nice colour.
I bash it about with a wooden spoon to make sure the bits have
separated. It will probably be done before the pasta, which is
fine. Turn off the heat and let it rest.
When the cooking is under control, break the egg(s) into a bowl,
and grate the cheese into the eggs.
I do this on top of the weighing scales.
Grind lots of pepper onto the cheese and egg and mix them all
together. It will make a thick sludge.
When the pasta is done to your liking, use a slotted spoon to
transfer it to the pan with the bacon.
I find a slotted spoon carries a nice quantity of water with the
pasta. Many of the recipes I have seen say that the pasta should
be slightly under-done at this point, because it will finish
cooking in the sauce, but that doesn’t work for me.
Mix the bacon and pasta and deglaze the pan.
It should be cool enough after this point that the egg will not
curdle immediately when you add it.
Add the egg and cheese and mix over a gentle heat.
As you stir, the cheese will melt and the sauce will become smooth
and creamy. If it’s too thick, add a tablespoon of pasta water.
If it’s too runny, boost the heat to help the egg thicken up.
Dish up and serve.
Best eaten immediately: it’s nicest hot but it cools relatively fast.
I’m a beginner at PCB design, or rather, I haven’t made a PCB since I
was at school 30 years ago, and a lot has changed since then! So my
aim for Keybird69’s PCB was to learn my way around the design,
manufacturing, and assembly process.
My Keybird69 uses LEGO in its enclosure, in an unconventional way.
story time
Two years ago I planned to make a typical
acrylic sandwich case
for HHKBeeb, in the style of the BBC Micro’s black and yellowish beige
case. But that never happened because it was too hard to choose a
place to get the acrylic cut to my spec.
However, I could not work out how to make a case that is nice and
slender and into which the parts would fit. It is possible – the
KBDcraft Adam solves the problem nicely, and by all reports it’s
pretty good as a keyboard, not just a gimmick.
To make the PCB design easier, I am using a Waveshare RP2040-Tiny.
It’s more flexible than the usual dev boards used in custom keyboards
because it has a separate daughterboard for the USB socket, but I had
the devil of a time working out how to make it fit with LEGO.
brainwaves
Instead of using LEGO for the base, use FR-4, same as the
switch mounting plate;
There isn’t enough space for SNOT (studs not on top) so I can’t
use LEGO studs to attach both the top and bottom of the case; why
not use non-LEGO fasteners instead?
That will need through-holes, so maybe LEGO Technic beams will work?
Maybe the fasteners I got for the HHKBeeb case will work?
There’s plenty of material online about the bewildering variety of keycaps,
but I learned a few things that surprised me when working on Keybird69.
A proper Unix keyboard layout must have escape next to 1 and control
next to A.
Compared to the usual ANSI layout, backquote is displaced from its
common position next to 1. But a proper Unix keyboard should cover
the entire ASCII repertoire, 94 printing characters on 47 keys, plus
space, in the main block of keys.
To make a place for backquote, we can move delete down a row so it
is above return, and put backslash and backquote where delete
was.
(Aside: the delete key emits the delete character, ASCII 127, and the
return key emits the carriage return character, ASCII 13. That is why
I don’t call them backspace and enter.)
Personally, I prefer compact keyboards so I don’t have to reach too
far for the mouse, but I can’t do without arrow keys. So a
65% keyboard size
(5 rows, 16 keys wide) is ideal.
If you apply the Unix layout requirements to a typical ANSI 68-key 65%
layout, you get a 69-key layout. I call it unix69. (1969 was also the
year Unix started.)
I have arranged the bottom row modifiers for Emacs: there are left and
right meta keys and a right ctrl key for one-handed navigation.
Meta is what the USB HID spec calls the “GUI” key; it sometimes has
a diamond icon legend. Like the HHKB, and like Unix
workstations made by
Apple
and Sun, the meta keys are
either side of the space bar.
There are left and right fn keys for things that don’t have
dedicated keys, e.g. fn+arrows for page up/page down, home,
end. The rightmost column has user-programmable macro keys, which I
use for window management.
ANSI 65% keyboards have caps lock where control should be.
They have an ugly oversized backslash and lack a good place for backquote.
The right column is usually wasted on fixed-function keys.
It’s common for 65% keyboards to have 67 or 68 keys, the missing key
making a gap between the modifiers and arrow keys on the bottom row.
I prefer to have more rather than fewer modifier keys.
Unfortunately the True Fox has caps lock where control should be. Its right
column is wasted on fixed-function keys (though the keyboards are
reprogrammable so it’s mainly a keycap problem).
On the bottom row, True Fox has two modifiers and a gap between space
and arrows, whereas unix69 has three modifiers and no gap.
The Happy Hacking Keyboard layout is OK for a 60% Unix layout.
However it lacks a left fn key, and lacks space for full-size arrow
keys, so I prefer a 65% layout.
Owing to the difficulty of getting keycaps with exactly the legends I
would like, the meta keys on my keybird69 are labelled super
and the delete key is labelled backspace. I used F1 to F4
keycaps for the macro keys, tho they are programmed to generate F13
to F16 which are set up as
Hammerspoon hot keys.
But otherwise keybird69 is a proper unix69 keyboard.
Another keyboard!
HHKBeeb
A couple of years ago I made a BBC Micro tribute keyboard in the runup
to the beeb’s 40th anniversary. I called it HHKBeeb:
I planned to make a beeb-style acrylic sandwich case, but it was too
hard to choose a place to get the acrylic cut, so that never happened.
In practice I find 60% keyboards (like the
Happy Hacking Keyboard) too small –
I need an arrow cluster. So I used the HHKBeeb with a
Keybow 2040 macro pad
to give me arrows and a few function keys for moving windows around.
Keybird69
My new keyboard is for a Finch and it has 69 keys, so it’s called Keybird69.
(I was surprised that this feeble pun has not already been used by any
of the keyboards known to
QMK or
VIA!)
The HHKBeeb and Keybow 2040 never stayed put, so I would often get
my fingers on the wrong keys when moving my right hand some
varying distance between them;
Although I like the HHKBeeb’s
ECMA-23 bit-paired layout
in theory, in practice it’s super annoying to switch between it
and my laptop’s more normal layout;
I had a cunning idea for using LEGO in the enclosure, which avoids
the problem of getting acrylic cut to spec;
I have been mildly obsessed with compact keyboards practically
forever, but back in the 1990s there were no good options available to
buy, so I made do without.
The first small keyboard I liked was the (now discontinued) HHKB Lite
2,
which has an arrow cluster unlike the pure HHKB. I have a couple of
these lurking in the Boxes Of Stuff in the corner. But I’m not a huge
fan of the limited modifiers, or the Topre HHKB Lite 2 key
switches (they’re a bit mushy), or the styling of the HHKB case.
Correction: the HHKB Lite 2 did not actually use Topre switches.
But then Apple lost the plot with its input devices, so I thought I
should plan to wean myself off. And in the mean time, the custom
keyboard scene had flourished into a vibrant ecosystem of open source
hardware and software.
So instead of relying on someone else to make a keyboard I like, I
could make one myself! My own PCB and switch plate, designed for just
the layout I want.
And with QMK
open source firmware, I can make good use of the fn key that was so
disappointingly unconfigurable on the HHKB and Apple keyboards.
what’s next
I’m planning to write some more notes about various details of the design:
Here’s an addendum about an alternative model of uniformity.
There are 2^62 double precision floats between 0.0 and 1.0, but as I
described before under “the problem”, they are not distributed
uniformly:
the smaller ones are much denser. Because of this, there are two ways
to model a uniform distribution using floating point numbers.
Both algorithms in my previous note use a discrete model: the
functions return one of 2^52 or 2^53 evenly spaced numbers.
You can also use a continuous model, where you imagine a uniformly
random real number with unbounded precision, and return the closest
floating point result. This can have better behaviour if you go on to
transform the result to model different distributions (normal, Poisson,
exponential, etc.)
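The continuous model can be sketched in C, following Taylor R. Campbell’s note on uniform random floats. This is my own sketch, not code from the original posts: rand64 is an assumed source of unbiased random 64-bit words, and __builtin_clzll is a GCC/Clang builtin.

```c
#include <math.h>
#include <stdint.h>

// Imagine an infinite stream of random bits after the binary point,
// and return the closest double. Smaller values get denser spacing,
// so far more than 2^53 distinct results are possible.
double uniform_continuous(uint64_t (*rand64)(void)) {
    int exponent = -64;
    uint64_t significand;
    // each all-zero word pushes the value 64 binary places smaller
    while ((significand = rand64()) == 0) {
        exponent -= 64;
        if (exponent < -1074) // smaller than any subnormal: underflow
            return 0.0;
    }
    // normalize: move the first 1 bit to the top,
    // topping up the low bits with fresh random bits
    int shift = __builtin_clzll(significand);
    if (shift != 0) {
        exponent -= shift;
        significand <<= shift;
        significand |= rand64() >> (64 - shift);
    }
    // sticky bit: makes the 64-to-53-bit conversion round as if the
    // infinite tail of random bits were present
    significand |= 1;
    return ldexp((double)significand, exponent);
}

// deterministic stub used in the usage note below
static uint64_t top_bit(void) { return UINT64_C(1) << 63; }
```

With a stub generator that always returns a word with only the top bit set, the significand becomes 2^63 + 1, which rounds back to 2^63, so the result is exactly 0.5.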
Here are a couple of algorithms for generating uniformly distributed
floating point numbers 0.0 <= n < 1.0 using an unbiased
random bit generator and IEEE 754 double precision arithmetic. Both of
them depend on details of how floating point numbers work, so before
getting into the algorithms I’ll review IEEE 754.
The first algorithm uses bit hacking and type punning. The second uses
a hexadecimal floating point literal. They are both fun and elegant in
their own ways, but these days the second one is better.
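As a sketch (my own rendering of the two techniques, not necessarily the post’s exact code), both algorithms might look like this in C, assuming the 64 random bits come from an unbiased generator elsewhere:

```c
#include <stdint.h>
#include <string.h>

// 1: bit hacking and type punning. Set the exponent field so the
// value lies in [1.0, 2.0), fill the 52-bit mantissa with random
// bits, then subtract 1.0 to get one of 2^52 results in [0.0, 1.0).
double uniform_bithack(uint64_t bits) {
    uint64_t u = (UINT64_C(0x3FF) << 52) | (bits >> 12);
    double d;
    memcpy(&d, &u, sizeof(d)); // type punning without aliasing UB
    return d - 1.0;
}

// 2: hexadecimal floating point literal. Take the top 53 bits and
// scale by 2^-53, giving one of 2^53 evenly spaced results.
double uniform_hexfloat(uint64_t bits) {
    return (double)(bits >> 11) * 0x1.0p-53;
}
```

Both stay strictly below 1.0: the largest possible inputs yield 1 - 2^-52 and 1 - 2^-53 respectively.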
Last week I was interested to read about the proposed math/rand/v2
for Golang’s standard library. It mentioned a new-ish flavour
of PCG random number generator which I had not previously encountered,
called PCG64 DXSM. This blog post collects what I have learned about
it. (I have not found a good summary elsewhere.)
At the end there is source code for PCG64 DXSM that you can freely
copy and use.
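As a preview of the generator’s shape (my reading of the published descriptions, so treat it as a sketch rather than the definitive code), the state update is a 128-bit LCG with a “cheap” 64-bit multiplier, and the output is the DXSM (“double xorshift multiply”) permutation of the pre-update state. __uint128_t is a GCC/Clang extension, and inc must be odd.

```c
#include <stdint.h>

#define PCG_CHEAP_MULTIPLIER UINT64_C(0xda942042e4dd58b5)

typedef struct { __uint128_t state, inc; } pcg64_dxsm_t;

uint64_t pcg64_dxsm_next(pcg64_dxsm_t *rng) {
    uint64_t hi = (uint64_t)(rng->state >> 64);
    uint64_t lo = (uint64_t)rng->state | 1; // force lo odd
    // advance the 128-bit LCG state
    rng->state = rng->state * PCG_CHEAP_MULTIPLIER + rng->inc;
    // DXSM output permutation of the previous state
    hi ^= hi >> 32;
    hi *= PCG_CHEAP_MULTIPLIER;
    hi ^= hi >> 48;
    hi *= lo;
    return hi;
}
```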
This week I was in Rotterdam for a RIPE meeting.
On Friday morning I gave a lightning talk called where does my
computer get the time from?
The RIPE meeting website has a copy of my slides and a video of the
talk; this is a blogified low-res version of the slides with a rough
and inexact transcript.
About 50 people gathered with several ideas for potential projects: things
like easier DNSSEC provisioning, monitoring DNS activity in the network,
what is the environmental cost of the DNS, …
At the start of the weekend we were asked to introduce ourselves and
say what our goals were. My goal was to do something different from my
day job working on BIND. I was successful, tho I did help some others
out with advice on a few of BIND’s obscurities.
The team I joined was very successful at producing a working prototype
and a cool demo.
Since I started work at ISC my main project has been to adapt the
NSD prototype into a qp-trie for use in BIND. The ultimate aim is to
replace BIND’s red-black tree database, its in-memory store of DNS
records.
The core of the design is still close to what I sketched in
2021 and implemented in NSD, so these notes are mostly about
what’s different, and the mistakes I made along the way…
Chris Wellons posted a good review of why large chunks of the C
library are terrible,
especially if you are coding on Windows - good fun if you like staring
into the abyss. He followed up with let’s write a
setjmp which is fun in a
more positive way. I was also pleased to learn about
__builtin_longjmp! There’s a small aside in this article about the
signal mask, which skates past another horrible abyss - which might
even make it sensible to DIY longjmp.
Some of the nastiness can be seen in the POSIX rationale for
sigsetjmp which says that on BSD-like systems, setjmp and
_setjmp correspond to sigsetjmp and setjmp on System V Unixes.
The effect is that setjmp might or might not involve a system call
to adjust the signal mask. The syscall overhead might be OK for
exceptional error recovery, such as Chris’s arena out of memory
example, but it’s likely to be more troublesome if you are
implementing coroutines.
But why would they need to mess with the signal mask? Well, if you are
using BSD-style signals or you are using sigaction correctly, a
signal handler will run with its signal masked. If you decide to
longjmp out of the handler, you also need to take care to unmask the
signal. On BSD-like systems, longjmp does that for you.
The problem is that longjmp out of a signal handler is basically
impossible to do correctly. (There’s a whole flamewar in the wg14
committee documents on this subject.) So this is another example of
libc being optimized for the unusual, broken case at the cost of the
typical case.