I’m a beginner at PCB design, or rather, I haven’t made a PCB since I was at school 30 years ago, and a lot has changed since then! So my aim for Keybird69’s PCB was to learn my way around the design, manufacturing, and assembly process.
My Keybird69 uses LEGO in its enclosure, in an unconventional way.
Two years ago I planned to make a typical acrylic sandwich case for HHKBeeb, in the style of the BBC Micro’s black and yellowish beige case. But that never happened because it was too hard to choose a place to get the acrylic cut to my spec.
My idea for using LEGO in a keyboard case was originally inspired by James Munns, who uses LEGO for mounting PCBs, including at least one keyboard.
However, I could not work out how to make a case that is nice and slender and into which the parts would fit. It is possible – the KBDcraft Adam solves the problem nicely, and by all reports it’s pretty good as a keyboard, not just a gimmick.
To make the PCB design easier, I am using a Waveshare RP2040-Tiny. It’s more flexible than the usual dev boards used in custom keyboards because it has a separate daughterboard for the USB socket, but I had the devil of a time working out how to make it fit with LEGO.
Instead of using LEGO for the base, use FR-4, same as the switch mounting plate;
There isn’t enough space for SNOT so I can’t use LEGO studs to attach both the top and bottom of the case; why not use non-LEGO fasteners instead?
That will need through-holes, so maybe LEGO Technic beams will work?
Maybe the fasteners I got for the HHKBeeb case will work?
I wanted the fasteners for the HHKBeeb case to be as flat as possible; but acrylic does not work well with countersunk screws. Instead I looked for fasteners that protrude as little as possible.
For machine screws, I found the magic phrase is “ultra thin super flat wafer head”. These typically protrude 1mm or less, whereas the more common button head or pan head protrude about 2mm or more.
I also discovered rivet nuts. They are designed to be inserted into a sheet metal panel and squashed so that they grip the panel firmly. But I just wanted them for their thin flange, less than 1mm.
The usual fasteners for a sandwich case are machine screws inserted top and bottom, with a standoff in between. But Keybird69 uses machine screws in the top and rivet nuts in the bottom.
I’m using M3 rivet nuts and machine screws. The outer diameter of the rivet nuts is 5mm; the inner diameter of the Technic holes is 4.8mm. Fortunately the beams are made from flexible ABS, so the rivet nuts can be squeezed in and make a firm press fit. They can be pushed out again with a LEGO Brick Separator.
Many dimensions of the keyboard are determined by the Cherry MX keyswitch de-facto standard.
The switch mounting plate must be about 1.5mm thick – the standard PCB thickness of 1.6mm works fine.
The top of the PCB is 5mm below the top of the plate. The bottom of the PCB is also 5mm below the bottom of the plate because they are the same thickness. (Usually.)
The electronics are soldered to the bottom of the PCB.
A LEGO Technic beam is 8mm high (along the length of its holes).
The bodies of the switches and the PCB use 5mm of the beam height, leaving 3mm for the electronics. Plenty of space!
The height of the enclosure is 8 + 1.6 + 1.6 = 11.2 mm, which is pretty slender.
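The stack-up can be sanity-checked with a few lines of Python (a sketch; the 5mm figure is the switch-body depth below the plate quoted above):

```python
beam = 8.0    # Technic beam height, mm (along its holes)
plate = 1.6   # FR-4 switch mounting plate, mm
base = 1.6    # FR-4 base plate, mm

# overall enclosure height: beam plus the two FR-4 plates
height = beam + plate + base  # 11.2 mm

# the top of the PCB sits 5mm below the top of the plate,
# so 5 - 1.6 = 3.4mm of the beam height is above the PCB
pcb_top = 5.0 - plate
pcb = 1.6
electronics = beam - pcb_top - pcb  # 3.0 mm left for components
print(height, electronics)
```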
HHKBeeb’s generic case uses 10mm acrylic so it’s 2mm thicker, and the NightFox is about the same.
The Waveshare RP2040-Tiny daughterboard is problematic: its PCB is 1mm thick, and the USB-C receptacle is about 3.5mm high. It also has a couple of buttons for resetting or reflashing the RP2040, and they are a similar height.
I could not find a comfortable way to make space for it by cutting away part of the PCB to give it enough headroom. Then I had another brainwave!
I am not constrained by LEGO’s rectilinear grid, so I could make space by angling the back of the case outwards slightly. The middle of the back of the case has the extra few millimetres needed for the USB daughterboard.
If you look closely at the picture above, behind the USB-C receptacle and the M2 nuts, you can see the whiteish top of one of the buttons, and behind that is the beige textured edge of the PCB.
(Also, I need to turn the beams round so that the injection moulding warts are not visible!)
LEGO studs use an 8mm grid. Keys are on a 3/4 in grid, or 19.05mm.
Keybird69 is 5 keys deep, which is slightly less than 12 LEGO studs.
It is 16 keys wide, which is slightly more than 38 LEGO studs. Three LEGO Technic 13 hole beams are 39 studs long.
The front and sides of Keybird69 are enclosed with 5 beams of 13 holes each, which stick out half a stud past the block of keys. They meet at the corners so that the tangent is 45° where the rounded ends of the beams are closest.
This arrangement leaves about 1mm clearance around the PCB. Spacious.
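The grid arithmetic above can be checked in a few lines of Python (a sketch using the dimensions quoted in the text):

```python
KEY = 19.05  # key pitch: 3/4 inch in mm
STUD = 8.0   # LEGO stud pitch in mm

depth_keys = 5 * KEY     # 95.25 mm
depth_studs = 12 * STUD  # 96 mm: slightly more than 5 keys

width_keys = 16 * KEY    # 304.8 mm
width_studs = 38 * STUD  # 304 mm: slightly less than 16 keys

beams = 3 * 13 * STUD    # three 13-hole beams span 39 studs = 312 mm
print(depth_keys, depth_studs, width_keys, width_studs, beams)
```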
Technic beams are not as square in cross-section as you might expect. Their height (through the holes) is 8mm, whereas their width (across the holes) is 7.2mm. In Keybird69 I left 0.4mm gap between them – I could have cut that down to 0.2mm without problems.
I used a 10mm radius of curvature for the corners. Apart from where the beams meet, the switch plate and base plate are very nicely flush with the beams.
I tried using a Sharpie permanent marker to blacken the edges of my Keybow 2040, but the ink did not stick very well. On Keybird69 I used an acrylic paint marker pen, which worked much better. Compare the raw fibreglass beige of the edges in the picture above to the black edges below.
One thing that probably isn’t clear from the pictures is that the FR-4 plates have an unexpectedly lovely matte reflective quality. I think it might be because the black solder mask is not completely opaque so the layer of copper underneath gives it a bit of shine.
I am also getting some black 13 hole Technic beams to replace the dark grey ones, gotta make sure the dust shows up clearly on every possible surface!
There’s plenty of material online about the bewildering variety of keycaps (e.g. here and here), but I learned a few things that surprised me when working on Keybird69.
I found out that the remaining stock of Matteo Spinelli’s NightFox keyboards were being sold off cheap because of a manufacturing defect. I grabbed one to find out what it’s like, because its “True Fox” layout is very similar to the unix69 layout I wanted.
My NightFox turned out to have three to five unreliable keyswitches, which meant it was just about usable – tho the double Ts and unexpected Js rapidly became annoying.
But it was usable enough that I was able to learn some useful things from it.
The black-on-grey keycaps look cool, but they are basically illegible. (The picture above exaggerates their albedo and contrast.) This was a problem for me, especially while I was switching from the HHKBeeb ECMA-23-ish layout to an ANSI-ish TrueFox-ish unix69 layout.
Fortunately I learned this before making the mistake of buying some fancy black-on-grey keycaps.
I had seen a few keycap sets with extra up arrows, which puzzled me. (For example.) The NightFox came with an extra up arrow, and eventually I twigged that it makes the profile of the arrow cluster a bit nicer.
Usually, in a sculpted keycap profile (where each row of keycaps has a differently angled top surface) the bottom two rows have the same angle, sloping away from the typist. This means the up arrow key slopes away from the other arrows on the row below.
The extra up arrow keys typically match the tab row, which is flat or angled slightly towards the typist. This gives the arrow cluster a more dishy feeling.
Unfortunately the keycaps I ordered do not have extra up arrow keys with tab row angle as an option. I did not realise until after I ordered them that I could have got a similar effect by using a reversed down arrow as an up arrow – it makes a sharper angle, but still feels nicer. So I’m using a reversed arrow key for Keybird69’s up button and my up/down legends both point the same way.
Some keycap sets have multiple page up / page down / home / end keys with different row profiles so that people with 65% and 75% keyboards can rearrange the right column of keys. (For example.)
Instead of the superfluous navigation keys, I used the NightFox novelty keycaps on my keyboard. (You can see the ANY KEY, the cute fox logo, etc. in the picture above.) These all had a top row profile, and at first I thought this was an ugly compromise.
But it turns out that the difference in height between the right column and the main block of keys is really useful for helping to keep my fingers in the right places. It makes me less likely to accidentally warp a window when I intend to delete a character.
The mismatched angle of the up arrow key is similarly helpful. Matt3o added a gap next to the arrow keys in his True Fox design to make the arrow keys easier to locate, but I think that isn’t necessary with an out-of-profile up arrow (which is also one of Matt3o’s favourite features).
I previously thought I wanted a uniform keycap profile (e.g. DSA like the keycaps that came with my Keybow 2040) but these discoveries taught me a sculpted profile is more practical for the keyboard I was making.
Another research purchase was a grab bag of random surplus keycaps, which is about as useless as you might expect: hundreds of keycaps, but too many duplicates to populate a keyboard. (My Keybow 2040 now has a very colourful mixture of miscellaneous keycaps.) The grab bag I got was mostly SA profile, which is tall and steeply angled on the near and far rows. In their normal orientation, SA function keys would probably not work so well on the right column of my keyboard, making a shape like a warehouse roof. Maybe they would work when rotated 90°? Dunno.
One of my old beliefs remained: I still prefer spherical indentations in the tops of my keycaps. They are more cuddly around my fingertips than the more common cylindrical indentations.
Annoyingly, many of the newer sculpted spherical keycap sets are hard to get hold of: often the only places that have them in stock will not ship to the UK at an affordable price. (For example.) Also annoyingly, the cheaper keycap sets almost never have the extras needed for unix69 compatibility. Bah.
The black-on-grey NightFox keycaps are Cherry profile (cylindrical indentations, sculpted rows, very short), and the keycaps that WASD printed for my HHKBeeb are OEM profile (like Cherry profile but taller). The HHKBeeb doesn’t have spherical keycaps because I don’t know anywhere that will do affordable one-off prints other than OEM profile. I also have a set of TEX ADA keycaps (uniform rows, short) which have lovely deeply scooped spherical tops, tho I am not a fan of their Helvetica legends.
So instead of a set of DSA keycaps (DIN height, spherical top, uniform) as I originally planned, I got DSS keycaps (DIN height, spherical top, sculpted). I love the Gorton Modified legends on Signature Plastics keycaps: as a business they descend from Comptec who made most BBC Micro keycaps.
I think Matt3o’s MTNU Susu keycaps are closer to my ideal, but I missed the group buy period and they have not been manufactured yet. And I wish they had an option for icons on the special keys instead of WORDS. I suspect the MTNU profile will become very popular, like Matt3o’s previous MT3 profile, so there will be chances to get some in the future.
A proper Unix keyboard layout must have escape next to 1 and control next to A.
Compared to the usual ANSI layout, backquote is displaced from its common position next to 1. But a proper Unix keyboard should cover the entire ASCII repertoire, 94 printing characters on 47 keys, plus space, in the main block of keys.
To make a place for backquote, we can move delete down a row so it is above return, and put backslash and backquote where delete was.
(Aside: the delete key emits the delete character, ASCII 127, and the return key emits the carriage return character, ASCII 13. That is why I don’t call them backspace and enter.)
This produces a layout similar to the main key block of Sun Type 3, Happy Hacking, and True Fox keyboard layouts.
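The 94-characters-on-47-keys claim is easy to verify with a throwaway Python check:

```python
# printing ASCII characters are codes 33..126 (space excluded)
printing = [chr(c) for c in range(33, 127)]
print(len(printing))       # 94
print(len(printing) // 2)  # 47 keys, one shifted and one unshifted glyph each
```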
Personally, I prefer compact keyboards so I don’t have to reach too far for the mouse, but I can’t do without arrow keys. So a 65% keyboard size (5 rows, 16 keys wide) is ideal.
If you apply the Unix layout requirements to a typical ANSI 68-key 65% layout, you get a 69-key layout. I call it unix69. (1969 was also the year Unix started.)
http://www.keyboard-layout-editor.com/#/gists/2848ea7a272aa571d140694ff6bbe04c
I have arranged the bottom row modifiers for Emacs: there are left and right meta keys and a right ctrl key for one-handed navigation. Meta is what the USB HID spec calls the “GUI” key; it sometimes has a diamond icon legend. Like the HHKB, and like Unix workstations made by Apple and Sun, the meta keys are either side of the space bar.
There are left and right fn keys for things that don’t have dedicated keys, e.g. fn+arrows for page up/page down, home, end. The rightmost column has user-programmable macro keys, which I use for window management.
http://www.keyboard-layout-editor.com/#/gists/6610c45b1c12f962e6cf564dc66f220b
ANSI 65% keyboards have caps lock where control should be.
They have an ugly oversized backslash and lack a good place for backquote.
The right column is usually wasted on fixed-function keys.
It’s common for 65% keyboards to have 67 or 68 keys, the missing key making a gap between the modifiers and arrow keys on the bottom row. I prefer to have more rather than fewer modifier keys.
http://www.keyboard-layout-editor.com/#/gists/f1742e8e1384449ddbb7635d8c2a91a5
Matteo Spinelli’s WhiteFox / NightFox “True Fox” layout has its top two rows similar to unix69. It sometimes has backslash and backquote swapped.
Unfortunately it has caps lock where control should be. Its right column is wasted on fixed-function keys (though the keyboards are reprogrammable so it’s mainly a keycap problem).
On the bottom row, True Fox has two modifiers and a gap between space and arrows, whereas unix69 has three modifiers and no gap.
http://www.keyboard-layout-editor.com/#/gists/c654dc6b4c7e30411cad8626302e309f
The Happy Hacking keyboard layout is OK for a 60% Unix layout. However it lacks a left fn key, and lacks space for full-size arrow keys, so I prefer a 65% layout.
https://dotat.at/graphics/keybird69.jpg
Owing to the difficulty of getting keycaps with exactly the legends I would like, the meta keys on my Keybird69 are labelled super and the delete key is labelled backspace. I used F1 to F4 keycaps for the macro keys, tho they are programmed to generate F13 to F16, which are set up as Hammerspoon hot keys.
But otherwise Keybird69 is a proper unix69 keyboard.
Another keyboard!
A couple of years ago I made a BBC Micro tribute keyboard in the runup to the beeb’s 40th anniversary. I called it HHKBeeb:
The HHKBeeb is made from:
I planned to make a beeb-style acrylic sandwich case, but it was too hard to choose a place to get the acrylic cut, so that never happened.
In practice I find 60% keyboards (like the Happy Hacking Keyboard) too small – I need an arrow cluster. So I used the HHKBeeb with a Keybow 2040 macro pad to give me arrows and a few function keys for moving windows around.
My new keyboard is for a Finch and it has 69 keys, so it’s called Keybird69. (I was surprised that this feeble pun has not already been used by any of the keyboards known to QMK or VIA!)
It is made from:
A combination of reasons:
I have been mildly obsessed with compact keyboards practically forever, but back in the 1990s there were no good options available to buy, so I made do without.
The first small keyboard I liked was the (now discontinued) HHKB Lite 2, which has an arrow cluster unlike the pure HHKB. I have a couple of these lurking in the Boxes Of Stuff in the corner. But I’m not a huge fan of the limited modifiers, or the Topre HHKB Lite 2 key switches (they’re a bit mushy), or the styling of the HHKB case.
Correction: the HHKB Lite 2 did not actually use Topre switches.
I gradually used Macs more, and switched to using the Apple Aluminium keyboard - the model A1242 compact wired version, and the model A1314 wireless version. I also switched from a Kensington Expert Mouse trackball to an Apple Magic Trackpad.
But then Apple lost the plot with its input devices, so I thought I should plan to wean myself off. And in the mean time, the custom keyboard scene had flourished into a vibrant ecosystem of open source hardware and software.
So instead of relying on someone else to make a keyboard I like, I could make one myself! My own PCB and switch plate, designed for just the layout I want.
And with QMK open source firmware, I can make good use of the fn key that was so disappointingly unconfigurable on the HHKB and Apple keyboards.
I’m planning to write some more notes about various details of the design:
I got some interesting comments about my previous notes on random floating point numbers on Lobsters, Dreamwidth, and from Pete Cawley on Twitter.
Here’s an addendum about an alternative model of uniformity.
There are 2^62 double precision floats between 0.0 and 1.0, but as I described before under “the problem”, they are not distributed uniformly: the smaller ones are much denser. Because of this, there are two ways to model a uniform distribution using floating point numbers.
Both algorithms in my previous note use a discrete model: the functions return one of 2^52 or 2^53 evenly spaced numbers.
You can also use a continuous model, where you imagine a uniformly random real number with unbounded precision, and return the closest floating point result. This can have better behaviour if you go on to transform the result to model different distributions (normal, poisson, exponential, etc.)
Taylor Campbell explains how to generate uniform random double-precision floating point numbers with source code. Allen Downey has an older description of generating pseudo-random floating-point values.
In practice, the probability of entering the arbitrary-precision loop in Campbell’s code is vanishingly tiny, so with some small adjustments it can be omitted entirely. Marc Reynolds explains how to generate higher density uniform floats this way, and Pete Cawley has terse implementations that use one or two random integers per double. (Reynolds also has a note about adjusting the range and fenceposts of discrete random floating point numbers.)
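As an illustration of the continuous model, here is my own rough Python sketch (not Campbell’s or Reynolds’s code, and it glosses over rounding at the boundaries): choose the exponent geometrically, as if counting the leading zero bits of an unbounded-precision random fraction, then fill in a random mantissa.

```python
import random

def dense_uniform():
    # each leading zero bit halves the magnitude of the imaginary
    # unbounded-precision random fraction, so draw the exponent
    # from a geometric distribution
    exponent = -1
    while random.getrandbits(1) == 0:
        exponent -= 1
        if exponent < -1074:  # below the smallest subnormal
            return 0.0
    # 52 explicit mantissa bits on top of the implicit leading 1
    mantissa = random.getrandbits(52)
    return (1 + mantissa / (1 << 52)) * 2.0 ** exponent

print(dense_uniform())
```

Half the results land in [0.5, 1.0), a quarter in [0.25, 0.5), and so on, which matches the density of the floats themselves.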
Here are a couple of algorithms for generating uniformly distributed floating point numbers 0.0 <= n < 1.0 using an unbiased random bit generator and IEEE 754 double precision arithmetic. Both of them depend on details of how floating point numbers work, so before getting into the algorithms I’ll review IEEE 754.
The first algorithm uses bit hacking and type punning. The second uses a hexadecimal floating point literal. They are both fun and elegant in their own ways, but these days the second one is better.
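As a rough Python sketch of the same two ideas (my own translation, not the article’s code):

```python
import random
import struct

def punned(bits52):
    # bit hacking: an exponent field of 0x3FF makes a double in
    # [1.0, 2.0) whose fraction is our 52 random bits;
    # subtracting 1.0 shifts the range to [0.0, 1.0)
    u = 0x3FF0000000000000 | (bits52 & ((1 << 52) - 1))
    (d,) = struct.unpack("<d", struct.pack("<Q", u))
    return d - 1.0

def literal(bits53):
    # hexadecimal float literal: scale 53 random bits by 2^-53
    return bits53 * float.fromhex("0x1.0p-53")

print(punned(random.getrandbits(52)))   # uniform in [0.0, 1.0)
print(literal(random.getrandbits(53)))  # uniform in [0.0, 1.0)
```

The second version avoids type punning and gets a full 53 bits of precision rather than 52.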
Last week I was interested to read about the proposed math/rand/v2 for Golang’s standard library. It mentioned a new-ish flavour of PCG random number generator which I had not previously encountered, called PCG64 DXSM. This blog post collects what I have learned about it. (I have not found a good summary elsewhere.)
At the end there is source code for PCG64 DXSM that you can freely copy and use.
I am pleased that so many people enjoyed my talk about time at RIPE86. I thought I would write a few notes on some of the things I left out.
This week I was in Rotterdam for a RIPE meeting. On Friday morning I gave a lightning talk called where does my computer get the time from? The RIPE meeting website has a copy of my slides and a video of the talk; this is a blogified version, not an exact transcript.
I wrote a follow-up note, “Where does ‘where does my computer get the time from?’ come from?” about some things I left out of the talk.
This weekend I was in Rotterdam for the RIPE DNS Hackathon.
About 50 people gathered with several ideas for potential projects: things like easier DNSSEC provisioning, monitoring DNS activity in the network, what is the environmental cost of the DNS, …
At the start of the weekend we were asked to introduce ourselves and say what our goals were. My goal was to do something different from my day job working on BIND. I was successful, tho I did help some others out with advice on a few of BIND’s obscurities.
The team I joined was very successful at producing a working prototype and a cool demo.
In 2021, I came up with a design for a new memory layout for a qp-trie, and I implemented a prototype of the design in NLnet Labs NSD (see my git repo or github).
Since I started work at ISC my main project has been to adapt the NSD prototype into a qp-trie for use in BIND. The ultimate aim is to replace BIND’s red-black tree database, its in-memory store of DNS records.
Yesterday I merged the core qp-trie implementation into BIND so it’s a good time to write some blog notes about it.
The core of the design is still close to what I sketched in 2021 and implemented in NSD, so these notes are mostly about what’s different, and the mistakes I made along the way…
Chris Wellons posted a good review of why large chunks of the C library are terrible, especially if you are coding on Windows - good fun if you like staring into the abyss. He followed up with let’s write a setjmp which is fun in a more positive way. I was also pleased to learn about __builtin_longjmp! There’s a small aside in this article about the signal mask, which skates past another horrible abyss - which might even make it sensible to DIY longjmp.
Some of the nastiness can be seen in the POSIX rationale for sigsetjmp which says that on BSD-like systems, setjmp and _setjmp correspond to sigsetjmp and setjmp on System V Unixes.
The effect is that setjmp might or might not involve a system call to adjust the signal mask. The syscall overhead might be OK for exceptional error recovery, such as Chris’s arena out of memory example, but it’s likely to be more troublesome if you are implementing coroutines.
But why would they need to mess with the signal mask? Well, if you are using BSD-style signals or you are using sigaction correctly, a signal handler will run with its signal masked. If you decide to longjmp out of the handler, you also need to take care to unmask the signal. On BSD-like systems, longjmp does that for you.
The problem is that longjmp out of a signal handler is basically impossible to do correctly. (There’s a whole flamewar in the wg14 committee documents on this subject.) So this is another example of libc being optimized for the unusual, broken case at the cost of the typical case.
The other day, Paul McKenney posted an article on LiveJournal about different flavours of RCU, prompted by a question about a couple of Rust RCU crates. (There are a few comments about it on LWN.)
McKenney goes on to propose an RCU classification system based on the API an implementation provides to its users. (I am curious that the criteria do not involve how RCU works.)
Here’s how I would answer the questions for QSBR in BIND:
Are there explicit RCU read-side markers?
No, it relies on libuv callbacks to bound the lifetime of a read-side critical section.
Are grace periods computed automatically?
Yes. There is an internal isc__qsbr_quiescent_state() function, but that mainly exists to separate the QSBR code from the event loop manager, and for testing purposes, not for use by higher-level code.
Are there synchronous grace-period-wait APIs?
No. (Because they led me astray when designing a data structure to use RCU.)
Are there asynchronous grace-period-wait APIs?
Yes, but instead of one-shot call_rcu(), a subsystem (such as the qp-trie code) registers a permanent callback (isc_qsbr_register()), and notifies the QSBR when there is work for the callback to do (isc_qsbr_activate()). This avoids having to allocate a thunk on every modification, and it automatically coalesces reclamation work.
If so, are there callback-wait APIs?
No. At the moment, final cleanup work is tied to event loop teardown.
Are there polled grace-period-wait APIs?
No.
Are there multiple grace-period domains?
One per event loop manager, and there’s only one loopmgr.
Previously, I wrote about implementing safe memory reclamation for my qp-trie code in BIND. I have now got it working with a refactored qp-trie that has been changed to support asynchronous memory reclamation - working to the point where I can run some benchmarks to compare the performance of the old and new versions.
Previously, I wrote about my cataract and its assessment at Addenbrooke’s cataract clinic.
I had my cataract removed a couple of weeks ago, and so far things are going well, though there is still some follow-up work needed.
At the end of October, I finally got my multithreaded qp-trie working! It could be built with two different concurrency control mechanisms:
A reader/writer lock
This has poor read-side scalability, because every thread is hammering on the same shared location. But its write performance is reasonably good: concurrent readers don’t slow it down too much.
liburcu, userland read-copy-update
RCU has a fast and scalable read side, nice! But on the write side I used synchronize_rcu(), which is blocking and rather slow, so my write performance was terrible.
OK, but I want the best of both worlds! To fix it, I needed to change the qp-trie code to use safe memory reclamation more effectively: instead of blocking inside synchronize_rcu() before cleaning up, use call_rcu() to clean up asynchronously. I expect I’ll write about the qp-trie changes another time.
Another issue is that I want the best of both worlds by default, but liburcu is LGPL and we don’t want BIND to depend on code whose licence demands more from our users than the MPL.
So I set out to write my own safe memory reclamation support code.
In a previous entry, I wrote about making DNS name decompression faster by moving work left on this diagram so that we do less of it:
names < pointers < labels < bytes
Last week I had a bright idea about that leftmost step, moving per-pointer work to per-name, using some dirty tricks. Sadly the experiment was not successful, because it also increased the per-label work. Nevertheless I think it’s interesting enough to be worth writing about.
This year I have rewritten BIND’s DNS name compression and decompression code. I didn’t plan to, it just sort of happened! Anyway, last week my colleague Petr was doing some benchmarking, and he produced some numbers that seemed too good to be true, so I have re-done the measurement myself, and wow.
It has been a couple of years since my previous blog post about leap seconds, though I have been tweeting on the topic fairly frequently: see my page on date, time, and leap seconds for an index of threads. But Twitter now seems a lot less likely to stick around, so I’ll aim to collect more of my thinking-out-loud here on my blog.