In my previous entry I wrote about constructing a four-point egg,
using circular arcs that join where their tangents are at 45°.
I wondered if I could do something similar with ellipses.
As before, I made an interactive ellipse workbench to
experiment with the problem. I got something working, but I have questions…
I think the instigation was a YouTube food video which led me to try
making popcorn at home from scratch with Nico. It was enormous fun!
And several weeks later it’s still really entertaining to make
(especially when a stray kernel pops after I take the lid off the pan,
catapulting a few pieces in random directions!)
The Novelkeys Kailh Big Switch is a working MX-style
mechanical keyboard switch, but 4x larger in every dimension.
I realised at the weekend that the Big Switch should fit nicely in a
simple Lego enclosure. Because an MX-style switch is usually mounted
in a 14x14 mm square plate cutout, at 4x larger the Big Switch would
need a 56x56 mm mounting hole. Lego aficionados know that studs are
arranged on an 8x8 mm grid; this means the Big Switch hole is exactly
7x7 studs. A plan was hatched and a prototype was made.
In recent weeks I have been obsessed with carbonara: I have probably
been eating it far too frequently. Here’s my recipe. It works well for
1 - 3 people but gets unwieldy at larger quantities.
ingredients
Rough quantities per person:
100g pasta
Spaghetti is traditional but I’ll use any shape.
50g streaky bacon
The traditional ingredient is guanciale; maybe I’ll try that one
day for a special occasion. I use 4 rashers of the thin-sliced
bacon that we get, which is 60g.
one large egg
Typically about 60g
40g grated parmesan
Again, for a special occasion I might try the traditional pecorino
romano. My rule of thumb is that the cheese should weigh half as much
as the egg, but I usually round it up so there’s 100g of mixture
overall.
lots and lots of ground black pepper
method
Get the kettle on the boil and measure out the pasta.
While waiting for the kettle, shred the bacon into a pan.
I use kitchen scissors. (I ought to get our knives sharpened.)
The pan needs to be big enough to stir everything together at the end.
Get the pasta cooking in another pan.
Don’t salt the water: there’s plenty of salt in the bacon and cheese.
Use relatively little water so that it becomes starchy while
cooking. The pasta water will loosen and stabilize the sauce.
Fry the bacon until it has taken on some nice colour.
I bash it about with a wooden spoon to make sure the bits have
separated. It will probably be done before the pasta, which is
fine. Turn off the heat and let it rest.
When the cooking is under control, break the egg(s) into a bowl,
and grate the cheese into the eggs.
I do this on top of the weighing scales.
Grind lots of pepper onto the cheese and egg and mix them all
together. It will make a thick sludge.
When the pasta is done to your liking, use a slotted spoon to
transfer it to the pan with the bacon.
I find a slotted spoon carries a nice quantity of water with the
pasta. Many of the recipes I have seen say that the pasta should
be slightly under-done at this point, because it will finish
cooking in the sauce, but that doesn’t work for me.
Mix the bacon and pasta and deglaze the pan.
It should be cool enough after this point that the egg will not
curdle immediately when you add it.
Add the egg and cheese and mix over a gentle heat.
As you stir, the cheese will melt and the sauce will become smooth
and creamy. If it’s too thick, add a tablespoon of pasta water.
If it’s too runny, boost the heat to help the egg thicken up.
Dish up and serve.
Best eaten immediately: it’s nicest hot but it cools relatively fast.
I’m a beginner at PCB design, or rather, I haven’t made a PCB since I
was at school 30 years ago, and a lot has changed since then! So my
aim for Keybird69’s PCB was to learn my way around the design,
manufacturing, and assembly process.
My Keybird69 uses LEGO in its enclosure, in an unconventional way.
story time
Two years ago I planned to make a typical
acrylic sandwich case
for HHKBeeb, in the style of the BBC Micro’s black and yellowish beige
case. But that never happened because it was too hard to choose a
place to get the acrylic cut to my spec.
However, I could not work out how to make a case that is nice and
slender and into which the parts would fit. It is possible – the
KBDcraft Adam solves the problem nicely, and by all reports it’s
pretty good as a keyboard, not just a gimmick.
To make the PCB design easier, I am using a Waveshare RP2040-Tiny.
It’s more flexible than the usual dev boards used in custom keyboards
because it has a separate daughterboard for the USB socket, but I had
the devil of a time working out how to make it fit with LEGO.
brainwaves
Instead of using LEGO for the base, use FR-4, same as the
switch mounting plate;
There isn’t enough space for SNOT (studs not on top) so I can’t use
LEGO studs to attach both the top and bottom of the case; why not use
non-LEGO fasteners instead?
That will need through-holes, so maybe LEGO Technic beams will work?
Maybe the fasteners I got for the HHKBeeb case will work?
There’s plenty of material online about the bewildering variety of keycaps,
but I learned a few things that surprised me when working on Keybird69.
A proper Unix keyboard layout must have escape next to 1 and control
next to A.
Compared to the usual ANSI layout, backquote is displaced from its
common position next to 1. But a proper Unix keyboard should cover
the entire ASCII repertoire, 94 printing characters on 47 keys, plus
space, in the main block of keys.
To make a place for backquote, we can move delete down a row so it
is above return, and put backslash and backquote where delete
was.
(Aside: the delete key emits the delete character, ASCII 127, and the
return key emits the carriage return character, ASCII 13. That is why
I don’t call them backspace and enter.)
Personally, I prefer compact keyboards so I don’t have to reach too
far for the mouse, but I can’t do without arrow keys. So a
65% keyboard size
(5 rows, 16 keys wide) is ideal.
If you apply the Unix layout requirements to a typical ANSI 68-key 65%
layout, you get a 69-key layout. I call it unix69. (1969 was also the
year Unix started.)
I have arranged the bottom row modifiers for Emacs: there are left and
right meta keys and a right ctrl key for one-handed navigation.
Meta is what the USB HID spec calls the “GUI” key; it sometimes has
a diamond icon legend. Like the HHKB, and like Unix workstations made
by Apple and Sun, the meta keys are either side of the space bar.
There are left and right fn keys for things that don’t have
dedicated keys, e.g. fn+arrows for page up/page down, home,
end. The rightmost column has user-programmable macro keys, which I
use for window management.
ANSI 65% keyboards have caps lock where control should be.
They have an ugly oversized backslash and lack a good place for backquote.
The right column is usually wasted on fixed-function keys.
It’s common for 65% keyboards to have 67 or 68 keys, the missing key
making a gap between the modifiers and arrow keys on the bottom row.
I prefer to have more rather than fewer modifier keys.
Unfortunately the True Fox has caps lock where control should be. Its right
column is wasted on fixed-function keys (though the keyboards are
reprogrammable so it’s mainly a keycap problem).
On the bottom row, True Fox has two modifiers and a gap between space
and arrows, whereas unix69 has three modifiers and no gap.
The Happy Hacking Keyboard layout is OK for a 60% Unix layout.
However it lacks a left fn key, and lacks space for full-size arrow
keys, so I prefer a 65% layout.
Owing to the difficulty of getting keycaps with exactly the legends I
would like, the meta keys on my Keybird69 are labelled super
and the delete key is labelled backspace. I used F1 to F4
keycaps for the macro keys, tho they are programmed to generate F13
to F16 which are set up as
Hammerspoon hot keys.
But otherwise Keybird69 is a proper unix69 keyboard.
Another keyboard!
HHKBeeb
A couple of years ago I made a BBC Micro tribute keyboard in the runup
to the beeb’s 40th anniversary. I called it HHKBeeb:
I planned to make a beeb-style acrylic sandwich case, but it was too
hard to choose a place to get the acrylic cut, so that never happened.
In practice I find 60% keyboards (like the
Happy Hacking Keyboard) too small –
I need an arrow cluster. So I used the HHKBeeb with a
Keybow 2040 macro pad
to give me arrows and a few function keys for moving windows around.
Keybird69
My new keyboard is for a Finch and it has 69 keys, so it’s called Keybird69.
(I was surprised that this feeble pun has not already been used by any
of the keyboards known to
QMK or
VIA!)
The HHKBeeb and Keybow 2040 never stayed put, so I would often get
my fingers on the wrong keys when moving my right hand some
varying distance between them;
Although I like the HHKBeeb’s
ECMA-23 bit-paired layout
in theory, in practice it’s super annoying to switch between it
and my laptop’s more normal layout;
I had a cunning idea for using LEGO in the enclosure, which avoids
the problem of getting acrylic cut to spec;
I have been mildly obsessed with compact keyboards practically
forever, but back in the 1990s there were no good options available to
buy, so I made do without.
The first small keyboard I liked was the (now discontinued) HHKB Lite
2,
which has an arrow cluster unlike the pure HHKB. I have a couple of
these lurking in the Boxes Of Stuff in the corner. But I’m not a huge
fan of the limited modifiers, or the Topre HHKB Lite 2 key
switches (they’re a bit mushy), or the styling of the HHKB case.
Correction: the HHKB Lite 2 did not actually use Topre switches.
But then Apple lost the plot with its input devices, so I thought I
should plan to wean myself off. And in the mean time, the custom
keyboard scene had flourished into a vibrant ecosystem of open source
hardware and software.
So instead of relying on someone else to make a keyboard I like, I
could make one myself! My own PCB and switch plate, designed for just
the layout I want.
And with QMK
open source firmware, I can make good use of the fn key that was so
disappointingly unconfigurable on the HHKB and Apple keyboards.
what’s next
I’m planning to write some more notes about various details of the design:
Here’s an addendum about an alternative model of uniformity.
There are 2^62 double precision floats between 0.0 and 1.0, but as I
described before under “the problem”, they are not distributed
uniformly:
the smaller ones are much denser. Because of this, there are two ways
to model a uniform distribution using floating point numbers.
Both algorithms in my previous note use a discrete model: the
functions return one of 2^52 or 2^53 evenly spaced numbers.
You can also use a continuous model, where you imagine a uniformly
random real number with unbounded precision, and return the closest
floating point result. This can have better behaviour if you go on to
transform the result to model different distributions (normal, Poisson,
exponential, etc.)
Here are a couple of algorithms for generating uniformly distributed
floating point numbers 0.0 <= n < 1.0 using an unbiased
random bit generator and IEEE 754 double precision arithmetic. Both of
them depend on details of how floating point numbers work, so before
getting into the algorithms I’ll review IEEE 754.
The first algorithm uses bit hacking and type punning. The second uses
a hexadecimal floating point literal. They are both fun and elegant in
their own ways, but these days the second one is better.
Last week I was interested to read about the proposed math/rand/v2
for Golang’s standard library. It mentioned a new-ish flavour
of PCG random number generator which I had not previously encountered,
called PCG64 DXSM. This blog post collects what I have learned about
it. (I have not found a good summary elsewhere.)
At the end there is source code for PCG64 DXSM that you can freely
copy and use.
This week I was in Rotterdam for a RIPE meeting.
On Friday morning I gave a lightning talk called where does my
computer get the time from?
The RIPE meeting website has a copy of my slides and a video of the
talk; this is a blogified low-res version of the slides with a rough
and inexact transcript.
About 50 people gathered with several ideas for potential projects: things
like easier DNSSEC provisioning, monitoring DNS activity in the network,
what is the environmental cost of the DNS, …
At the start of the weekend we were asked to introduce ourselves and
say what our goals were. My goal was to do something different from my
day job working on BIND. I was successful, tho I did help some others
out with advice on a few of BIND’s obscurities.
The team I joined was very successful at producing a working prototype
and a cool demo.
Since I started work at ISC my main project has been to adapt the
NSD prototype into a qp-trie for use in BIND. The ultimate aim is to
replace BIND’s red-black tree database, its in-memory store of DNS
records.
The core of the design is still close to what I sketched in
2021 and implemented in NSD, so these notes are mostly about
what’s different, and the mistakes I made along the way…
Chris Wellons posted a good review of why large chunks of the C
library are terrible,
especially if you are coding on Windows - good fun if you like staring
into the abyss. He followed up with let’s write a
setjmp which is fun in a
more positive way. I was also pleased to learn about
__builtin_longjmp! There’s a small aside in this article about the
signal mask, which skates past another horrible abyss - which might
even make it sensible to DIY longjmp.
Some of the nastiness can be seen in the POSIX rationale for
sigsetjmp which says that on BSD-like systems, setjmp and
_setjmp correspond to sigsetjmp and setjmp on System V Unixes.
The effect is that setjmp might or might not involve a system call
to adjust the signal mask. The syscall overhead might be OK for
exceptional error recovery, such as Chris’s arena out of memory
example, but it’s likely to be more troublesome if you are
implementing coroutines.
But why would they need to mess with the signal mask? Well, if you are
using BSD-style signals or you are using sigaction correctly, a
signal handler will run with its signal masked. If you decide to
longjmp out of the handler, you also need to take care to unmask the
signal. On BSD-like systems, longjmp does that for you.
The problem is that longjmp out of a signal handler is basically
impossible to do correctly. (There’s a whole flamewar in the WG14
committee documents on this subject.) So this is another example of
libc being optimized for the unusual, broken case at the cost of the
typical case.
McKenney goes on to propose an RCU classification system based on the
API an implementation provides to its users. (I am curious that the
criteria do not involve how RCU works.)
Here’s how I would answer the questions for QSBR in BIND:
Are there explicit RCU read-side markers?
No, it relies on libuv callbacks to bound the lifetime of a
read-side critical section.
Are grace periods computed automatically?
Yes. There is an internal isc__qsbr_quiescent_state() function,
but that mainly exists to separate the QSBR code from the event
loop manager, and for testing purposes, not for use by
higher-level code.
Are there synchronous grace-period-wait APIs?
No. (Because they led me astray when designing a data structure to
use RCU.)
Are there asynchronous grace-period-wait APIs?
Yes, but instead of one-shot call_rcu(), a subsystem (such as
the qp-trie code) registers a permanent callback
(isc_qsbr_register()), and notifies the QSBR when there is work
for the callback to do (isc_qsbr_activate()). This avoids having
to allocate a thunk on every modification, and it automatically
coalesces reclamation work.
If so, are there callback-wait APIs?
No. At the moment, final cleanup work is tied to event loop
teardown.
Are there polled grace-period-wait APIs?
No.
Are there multiple grace-period domains?
One per event loop manager, and there’s only one loopmgr.
Previously, I wrote about implementing safe memory reclamation for my
qp-trie code in BIND. I have now got it working with a
refactored qp-trie that has been changed to support asynchronous
memory reclamation - working to the point where I can run some
benchmarks to compare the performance of the old and new versions.