.@ Tony Finch – blog


Semaphores are one of the oldest concurrency primitives in computing, invented over 60 years ago. They are weird: usually the only numbers of concurrent processes we care about are zero, one, or many – but semaphores deal with those fussy finite numbers in between.

Yesterday I was writing code that needed to control the number of concurrent operating system processes that it spawned so that it didn’t overwhelm the computer. One of those rare situations when a semaphore is just the thing!

a Golang channel is a semaphore

A Golang channel has a buffer size – a number of free slots – which corresponds to the initial value of the semaphore. We don’t care about the values carried by the channel: any type will do.

    var semaphore = make(chan any, MAXPROCS)

The acquire operation uses up a slot in the channel. It is traditionally called P(), and described as decrementing the value of the semaphore, i.e. decrementing the number of free slots in the channel. When the channel is full this will block, waiting for another goroutine to release the semaphore.

    func acquire() {
        semaphore <- nil
    }

The release operation, traditionally called V(), frees a slot in the channel, incrementing the value of the semaphore.

    func release() {
        <-semaphore
    }
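
To put the semaphore to use, guard each task with acquire() and release(). Here’s a minimal sketch of spawning a bounded number of concurrent tasks, where runTask() is a hypothetical function that spawns one operating system process and waits for it to finish:

    func spawnAll(tasks []string) {
        for _, task := range tasks {
            acquire() // blocks while MAXPROCS tasks are running
            go func() {
                defer release()
                runTask(task) // hypothetical: run one process
            }()
        }
    }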

That’s it!

the GNU make jobserver protocol is a semaphore

The GNU make -j parallel builds feature uses a semaphore in the form of its jobserver protocol. Occasionally, other programs support the jobserver protocol too, such as Cargo. BSD make -j uses basically the same semaphore implementation, but is not compatible with the GNU make jobserver protocol.

The make jobserver semaphore works in a similar manner to a Golang channel semaphore, but the representation is flipped: the semaphore’s value is the number of bytes buffered in a pipe, so acquire reads a byte out of the pipe instead of filling a slot, and it blocks when the pipe is empty rather than when the channel is full.

Here’s a C-flavoured sketch of how it works. To create a semaphore and initialize its value, create a pipe and write that many characters to it, which are buffered in the kernel:

    #include <unistd.h> /* pipe(), read(), write() */

    int fd[2];
    pipe(fd);

    char slots[MAXPROCS] = {0};
    write(fd[1], slots, sizeof(slots));

To acquire a slot, read a character from the pipe. When the pipe is empty this will block, waiting for another process to release the semaphore.

    char slot;
    read(fd[0], &slot, 1);

To release a slot, the worker must write the same character back to the pipe:

    write(fd[1], &slot, 1);

Error handling is left as an exercise for the reader.
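
For comparison with the channel version, here’s the same pipe semaphore sketched in Go using os.Pipe(). This illustrates the semaphore only, not the full jobserver protocol, which also has to tell child processes which file descriptors to use:

    var rd, wr *os.File // import "os"

    func initPipeSemaphore() error {
        var err error
        rd, wr, err = os.Pipe()
        if err != nil {
            return err
        }
        // buffer MAXPROCS tokens in the kernel: the initial value
        _, err = wr.Write(make([]byte, MAXPROCS))
        return err
    }

    func acquireToken() (byte, error) {
        var slot [1]byte
        _, err := rd.Read(slot[:]) // blocks while the pipe is empty
        return slot[0], err
    }

    func releaseToken(slot byte) error {
        _, err := wr.Write([]byte{slot})
        return err
    }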

bonus: waiting for concurrent tasks to complete

If we need to wait for everything to finish, we don’t need any extra machinery. We don’t even need to know how many tasks are still running! It’s enough to acquire all possible slots, which will block until the tasks have finished, then release all the slots again.

    func wait() {
        for range MAXPROCS {
            acquire()
        }
        for range MAXPROCS {
            release()
        }
    }
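
Putting it together with the spawning loop sketched earlier, a complete run is simply:

    func run(tasks []string) {
        spawnAll(tasks) // at most MAXPROCS at a time
        wait()          // block until the last task has finished
    }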

That’s all for today! Happy hacking :-)


a blog post for international RNG day

Lemire’s nearly-divisionless algorithm for unbiased bounded random numbers has a fast path and a slow path. In the fast path it gets a random number and does a multiplication and a comparison. In the rarely-taken slow path, it calculates a remainder (the division) and enters a rejection sampling loop.
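
Here’s a sketch of both paths in Go, assuming math/rand/v2 as the generator and math/bits for the wide multiplication; the names are mine rather than from any particular implementation:

    // returns an unbiased random number in [0, limit), limit > 0
    func bounded(limit uint64) uint64 {
        x := rand.Uint64()
        hi, lo := bits.Mul64(x, limit) // fast path: multiply, compare
        if lo < limit {
            // slow path, rarely taken: one division,
            // then rejection sampling
            t := -limit % limit // 2^64 mod limit
            for lo < t {
                x = rand.Uint64()
                hi, lo = bits.Mul64(x, limit)
            }
        }
        return hi
    }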

When Lemire’s algorithm is coupled to a small random number generator such as PCG, the fast path is just a handful of instructions. When performance matters, it makes sense to inline it. It makes less sense to inline the slow path, because that just makes it harder for the compiler to work on the hot code.

Lemire’s algorithm is great when the limit is not a constant (such as during a Fisher-Yates shuffle) or when the limit is not a power of two. But when the limit is a constant power of two, it ought to be possible to eliminate the slow path entirely.
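
To see why: when the limit is 2^k the remainder 2^64 mod limit is zero, so the rejection loop can never run, and the unbiased result is just the top k bits of the random number. For example, with a limit of 256:

    // the top 8 bits are uniform in [0, 256) with no slow path at all
    func bounded256() uint64 {
        return rand.Uint64() >> (64 - 8)
    }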

What are the options?

read more ...


I have made a new release of nsnotifyd, a tiny DNS server that just listens for NOTIFY messages and runs a script when one of your zones changes.

This nsnotifyd-2.1 release includes a few bugfixes.

Many thanks to Lars-Johann Liman, JP Mens, and Jonathan Hewlett for the bug reports. I like receiving messages that say things like,

thanks for nsnotifyd, is a great little program, and a good example of a linux program, does one thing well.

(There’s more like that in the nsnotifyd-2.0 release announcement.)

I have also included a little dumpaxfr program, which I wrote when fiddling around with binary wire format DNS zone transfers. I used the nsnotifyd infrastructure as a short cut, though dumpaxfr doesn’t logically belong here. But it’s part of the family, so I wrote a dumpaxfr(1) man page and included it in this release.

I will be surprised if anyone else finds dumpaxfr useful!


Yesterday I received a bug report for regpg, my program that safely stores server secrets encrypted with gpg so they can be committed to a git repository.

The bug was that I used the classic shell pipeline find | xargs grep with the classic Unix “who would want spaces in filenames?!” flaw: find separates filenames with newlines, and xargs splits its input on any whitespace, so filenames containing spaces get mangled. (The usual fix is find -print0 | xargs -0.)

I have pushed a new release, regpg-1.12, containing the bug fix.

There’s also a gendnskey subcommand which I used when doing my algorithm rollovers a few years ago. (It’s been a long time since the last regpg release!) It’s somewhat obsolete now that I know how to use dnssec-policy.

A bunch of minor compatibility issues have crept in, which mostly required fixing the tests to deal with changes in Ansible, OpenSSL, and GnuPG.

My most distressing discovery was that Mac OS crypt(3) still supports only DES. Good grief.


There are a couple of version control commands that deserve wider appreciation: SCCS what and RCS ident. They allow you to find out what source a binary was built from, without having to run it – handy if it is a library! They basically scan a file looking for magic strings that contain version control metadata and print out what they discover.
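
The magic strings are simple enough to embed in your own code. A Go-flavoured sketch, with made-up version text (beware that a linker may discard strings that nothing refers to):

    // what(1) scans for "@(#)"; ident(1) scans for "$Keyword: ... $"
    var sccsID = "@(#)example 1.2"
    var rcsID = "$Id: example.go,v 1.2 hypothetical $"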

read more ...


Here are some miscellaneous unsorted notes about BIND9’s dnssec-policy that turned out not to be useful in my previous blog posts, but which some readers might find informative. Some of them I learned the hard way, so I hope I can make it easier for others!

read more ...


Here are some notes on migrating a signed zone from BIND’s old auto-dnssec to its new dnssec-policy.

I have been procrastinating this migration for years, and I avoided learning anything much about dnssec-policy until this month. I’m writing this from the perspective of a DNS operator rather than a BIND hacker.

read more ...


Here are some notes about using BIND’s new-ish dnssec-policy feature to sign a DNS zone that is currently unsigned.

I am in the process of migrating my DNS zones from BIND’s old auto-dnssec to its new dnssec-policy, and writing a blog post about it. These introductory sections grew big enough to be worth pulling out into a separate article.

read more ...


As is typical for static site generators, each page on this web site is generated from a file containing markdown with YAML frontmatter.

Neither markdown nor YAML is good. Markdown is very much the worse-is-better of markup languages; YAML, on the other hand, is more like better-is-worse. YAML has too many ways of expressing the same things, and the lack of redundancy in its syntax makes it difficult to detect mistakes before it is too late. YAML’s specification is incomprehensible.

But they are both very convenient and popular, so I went with the flow.

multiple documents

A YAML stream may contain several independent YAML documents delimited by --- start and ... end markers, for example:

    ---
    document: 1
    ...
    ---
    document: 2
    ...

string documents

The top-level value in a YAML document does not have to be an array or object: you can use its wild zoo of string syntax too, so for example,

    --- |
    here is a preformatted
    multiline string

frontmatter and markdown

Putting these two features together, the right way to do YAML frontmatter for markdown files is clearly,

    ---
    frontmatter: goes here
    ...
    --- |
    markdown goes here

The page processor can simply parse the file as a YAML stream, taking the first document as the frontmatter and the second document as the markdown.

No need for any ad-hoc hacks to separate the two parts of the file: the YAML acts as a lightweight wrapper for the markdown.
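
Here’s a sketch of that in Go (my real generators are in Perl and Rust), using gopkg.in/yaml.v3, whose decoder reads one document per call; the frontmatter fields are hypothetical:

    type Frontmatter struct {
        Title string `yaml:"title"` // hypothetical field
    }

    func load(path string) (Frontmatter, string, error) {
        var meta Frontmatter
        var markdown string
        f, err := os.Open(path)
        if err != nil {
            return meta, "", err
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        // first document: the frontmatter
        if err := dec.Decode(&meta); err != nil {
            return meta, "", err
        }
        // second document: the --- | string holding the markdown
        err = dec.Decode(&markdown)
        return meta, markdown, err
    }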

markdown inside YAML

The crucial thing that makes this work is that the markdown after the --- | delimiter does not need to be indented.

Markdown is very sensitive to indentation, so all the tooling (most importantly my editor) gets righteously confused if markdown is placed in a container that introduces extra indentation.

YAML in Perl

The static site generator for www.dns.cam.ac.uk uses --- | to mark the start of the markdown in its source files. This worked really nicely.

The web site was written in Perl, because most of the existing DNS infrastructure was Perl and I didn’t want to change programming languages. YAML was designed by Perl hackers, and the Perl YAML modules are where it all started (or went wrong, depending on your point of view).

YAML in other languages

The static site generator for https://dotat.at is written in Rust, using serde-yaml.

I soon discovered that, unlike the original YAML implementations, serde-yaml requires top-level strings following --- | to be indented. This bug seems to be common in YAML implementations for languages other than Perl.

start and end delimiters

So I changed the syntax for my frontmatter so it looks like,

    ---
    frontmatter: goes here
    ...
    markdown goes here

That is, the file starts with a complete YAML document delimited by --- start and ... end markers, and the rest of the file is the markdown.

The idea is that a page processor should be able to parse the YAML document at the top of the file, then treat everything after its ... end marker as markdown.

However, I could not work out how to get serde-yaml to read just the prefix of a file successfully and return the remainder for further processing.

I know, I’ll use regexps

(Might as well, I’m already way past two problems…)

As a result I had to add a bodge to the page processor: a regexp finds the ... end marker and splits the file there, so that each part can be handed to the right parser.
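
Something like this sketch, in Go rather than the site’s actual Rust, and simplified:

    // split at the first line consisting of "..." (import "regexp")
    var endMarker = regexp.MustCompile(`(?m)^\.\.\.$\n?`)

    func split(src string) (frontmatter, markdown string) {
        loc := endMarker.FindStringIndex(src)
        if loc == nil {
            return "", src // no frontmatter at all
        }
        return src[:loc[1]], src[loc[1]:]
    }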

mainstream frontmatter

My choice to mark the end of the frontmatter with the YAML ... end delimiter is not entirely mainstream. As I understand it, the YAML + markdown convention came from Jekyll, or at least Jekyll popularized it. Jekyll uses the YAML --- start delimiter to mark the end of the YAML, or maybe to mark the start of the markdown, but either way it doesn’t make sense.

Fortunately my ... bodge is compatible with Pandoc YAML metadata, and Emacs markdown mode supports Pandoc-style YAML metadata, so the road to hell is at least reasonably well paved.

grump

It works, but it doesn’t make me happy. I suppose I deserve the consequences of choosing technology with known deficiencies. But it requires minimal effort, and is by and large good enough.


My opinion is not mainstream, but I think if you really examine the practices and security processes that use and recommend sudo, the reasons for using it are mostly bullshit.

read more ...


Our net connection at home is not great: amongst its several misfeatures is a lack of IPv6. Yesterday I (at last!) got around to setting up a wireguard IPv6 VPN tunnel between my workstation and my Mythic Beasts virtual private server.

There were a few, um, learning opportunities.

read more ...


After an extremely long hiatus, I have resurrected my link log.

As well as its web page, https://dotat.at/:/, my link log is shared via several feeds.

The Dreamwidth feed has not caught this afternoon’s newly added links, so I am not sure if it is actually working…

There is a lengthy backlog of links to be shared, which will be added to the public log a few times each day.

The backlog will be drained in a random order, but the link log’s web page and atom feed are sorted in date order, so the most-recently shared link will usually not be the top link on the web page.

I might have to revise how the atom feed is implemented to avoid confusing feed readers, but this will do for now.

The code has been rewritten from scratch in Rust, alongside the static site generator that runs my blog. It’s less of a disgusting hack than the ancient Perl link log that grew like some kind of fungus, but it still lacks a sensible database and the code is still substantially stringly typed. But, it works, which is the important thing.

edited to add …

I’ve changed the atom feed so that newly-added entries have both a “published” date (which is the date displayed in the HTML, reflecting when I saved the link) plus an “updated” date indicating when it was later added to the public log.

I think this should make it a little more friendly to feed readers.


Back in December, George Michaelson posted an item on the APNIC blog titled “That OSI model refuses to die”, in reaction to Robert Graham’s “OSI Deprogrammer” published in September. I had discussed the OSI Deprogrammer on Lobsters, and George’s blog post prompted me to write an email. He and I agreed that I should put it on my blog, but I did not get a round tuit until now…

The main reason that OSI remains relevant is that Cisco certifications require network engineers to learn it. This makes OSI part of the common technical vocab and basically unavoidable, even though (as Rob Graham correctly argues) it’s deeply unsuitable.

It would be a lot better if the OSI reference model were treated as a model of OSI in particular, not a model of networking in general, as Jesse Crawford argued in 2021. OSI ought to be taught as an example alongside similar reference models of protocol stacks that are actually in use.

One of OSI’s big problems is how it enshrines layering as the architectural pattern, but there are other patterns that are at least as important.

Speaking of Ethernet, it’s very poorly served by the OSI model: Ethernet actually has three layers of its own.

Then there’s WiFi which looks like Ethernet from IP’s point of view, but is even more complicated. And almost everything non-ethernet has gone away or been adapted to look more like ethernet…

Whereas OSI has too few lower layers, it has too many upper layers: its session and presentation layers don’t correspond to anything in the Internet stack. I think Rob Graham said that they came from IBM SNA, and were related to terminal features like simplex or block-mode, and character set translation. Jack Haverty said something similar on the Internet History mailing list in 2019. The closest the ARPANET / Internet protocols get is Telnet’s feature negotiation; a lot of the problem solved by the OSI presentation layer is defined away by the Internet’s ASCII-only network virtual terminal. Graham also said that when people assign Internet functions to layers 5 and 6, they do so based on the names, not on how OSI describes what they do.

One of the things that struck me when reading Mike Padlipsky’s Elements of Networking Style is the amount of argumentation that was directed at terminal handling back then. I guess in that light it’s not entirely surprising that OSI would dedicate two entire layers to the problem.

Padlipsky also drew the ARPANET layering as a fan instead of a skyscraper, with intermediate layers shared by some but not all higher-level protocols, e.g. the NVT used by Telnet, FTP, SMTP. I expect if he were drawing the diagram later there might be layers for 822 headers, MIME, SASL – though they are more like design patterns than layers since they get used rather differently by SMTP, NNTP, HTTP. The notion of pick-and-mix protocol modules seems more useful than fixed layering.

Anyway, if I could magically fix the terminology, I would prefer network engineers to talk about specific protocols (e.g. ethernet, MPLS) instead of bogusly labelling them as layers (e.g. 2, 2.5). If they happen to be working with a more diverse environment than usual (hello DOCSIS) then it would be better to talk about sub-IP protocol stacks. But that’s awkwardly sesquipedalian so I can’t see it catching on.


In my previous entry I wrote about constructing a four-point egg, using circular arcs that join where their tangents are at 45°. I wondered if I could do something similar with ellipses.

As before, I made an interactive ellipse workbench to experiment with the problem. I got something working, but I have questions…

a screenshot of the ellipse workbench

read more ...


For reasons beyond the scope of this entry, I have been investigating elliptical and ovoid shapes. The Wikipedia article for Moss’s egg has a link to a tutorial on Euclidean Eggs by Freyja Hreinsdóttir which (amongst other things) describes how to construct the “four point egg”. I think it is a nicer shape than Moss’s egg.

read more ...


Another recent food obsession!

I think the instigation was a YouTube food video which led me to try making popcorn at home from scratch with Nico. It was enormous fun! And several weeks later it’s still really entertaining to make (especially when a stray kernel pops after I take the lid off the pan, catapulting a few pieces in random directions!)

Swedish Chef popcorn bucket

Turn on the captions while watching the YouTube vid!

read more ...


The Novelkeys Kailh Big Switch is a working MX-style mechanical keyboard switch, but 4x larger in every dimension.

big switch little switch

I realised at the weekend that the Big Switch should fit nicely in a simple Lego enclosure. Because an MX-style switch is usually mounted in a 14x14 mm square plate cutout, at 4x larger the Big Switch would need a 56x56 mm mounting hole. Lego aficionados know that studs are arranged on an 8x8 mm grid; this means the Big Switch hole is exactly 7x7 studs. A plan was hatched and a prototype was made.

read more ...


In recent weeks I have been obsessed with carbonara: I have probably been eating it far too frequently. Here’s my recipe. It works well for 1–3 people but gets unwieldy at larger quantities.

ingredients

Rough quantities per person:

method


I’m a beginner at PCB design, or rather, I haven’t made a PCB since I was at school 30 years ago, and a lot has changed since then! So my aim for Keybird69’s PCB was to learn my way around the design, manufacturing, and assembly process.

a picture of the waveshare rp2040-tiny microcontroller and usb boards, a kailh hotswap socket, a key switch, a blue alt keycap, and a batman lego minifigure for scale

read more ...


My Keybird69 uses LEGO in its enclosure, in an unconventional way.

story time

Two years ago I planned to make a typical acrylic sandwich case for HHKBeeb, in the style of the BBC Micro’s black and yellowish beige case. But that never happened because it was too hard to choose a place to get the acrylic cut to my spec.

My idea for using LEGO in a keyboard case was originally inspired by James Munns, who uses LEGO for mounting PCBs, including at least one keyboard.

However, I could not work out how to make a case that is nice and slender and into which the parts would fit. It is possible – the KBDcraft Adam solves the problem nicely, and by all reports it’s pretty good as a keyboard, not just a gimmick.

To make the PCB design easier, I am using a Waveshare RP2040-Tiny. It’s more flexible than the usual dev boards used in custom keyboards because it has a separate daughterboard for the USB socket, but I had the devil of a time working out how to make it fit with LEGO.

brainwaves

read more ...