Debian-installer, mdadm configuration and the Bad Blocks Controversy

Updates! ^

Since this was posted on 2020-09-13 there has been some interest in the comments and on Hacker News and I learned some things which required updates. I’ve tried to indicate them in the text.

Of particular note is the re-add method of removing BBLs.

MD and mdadm ^

MD is the Linux kernel driver that is used for running software RAID arrays. mdadm is the software that you run to manage MD devices. They are both part of the same project.

First, about the Bad Blocks List ^

Since about 2010, MD has had a bad blocks log (BBL) feature. When it fails to read from an underlying device it will (sometimes?) mark that block as bad and read the correct data from a different device, and then forever more redirect reads away from those bad blocks. This feature defaults to being on.

One problem with this feature is that read errors can occur for many reasons besides permanent failure of part of a storage device. For example, it could be a failure of the backplane or controller that causes many read errors on multiple devices, or the devices could be reached over a network of some sort and temporary network problems could propagate errors.

Even if the particular part of the device is unreadable, the operating system is supposed to try to write the correct data over the top. This write will either clear the problem or else be redirected to a spare sector on the drive by the drive’s firmware. The operating system is not supposed to be taking on this role, the drives are, and when the drives fail to do so then the redundancy of the array is supposed to save the day.
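
As a concrete example of that redundancy being used, a scrub makes MD read every block of every component and rewrite anything it fails to read using the good copy from another device. A minimal sketch, assuming an array called md0 (on Debian the mdadm package normally schedules this for you via its checkarray cron job):

# echo check > /sys/block/md0/md/sync_action
# cat /proc/mdstat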

Even worse, there are apparently bugs somewhere in the BBL code that cause a device’s BBL to be copied onto a new device when the array is rebuilt or a device replaced. Clearly it does not make sense for a new device to get a copy of another device’s BBL because they are inherently a per-device thing. I originally wrote that there had been no successful intentional reproduction of this, only people unwittingly hitting it at the worst possible moments, but it has since been reproduced that adding or replacing a device results in a BBL being copied. I am not aware of a formal bug report for this yet.

mdadm doesn’t even try particularly hard to warn you if a new bad block is found. Unlike when a device fails, it doesn’t send you an email. The MD driver writes in the syslog about the bad block(s). There’s also no change to /proc/mdstat. You have to examine some files in sysfs.
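
If you do want to check for yourself, the per-device lists are visible in sysfs. A sketch, assuming an array md0 with component sda1 (substitute your own names); each line should be a start sector and a length, so an empty file means nothing has been recorded:

# cat /sys/block/md0/md/dev-sda1/bad_blocks

The mdadm --examine-badblocks command shown later in this article reads the equivalent information out of the component’s superblock.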

As a result, the current situation is that no one seems to have made any progress on fixing any of this in 10 years.

Doing something about it ^

I’ll say right now that this story doesn’t (yet?) have a satisfying ending.

I’ve been aware of the “Bad Blocks Controversy” for about 5 years but I haven’t ever personally experienced any problems and it was always at the bottom of my list to look at. Roy’s recent thread spurred me into deciding that in future no MD array I created would have a BBL.

I also took the opportunity to deploy Sarah Newman’s Ansible role which checks that all array components have an empty BBL. None of BitFolk’s array components currently have any entries in their BBLs – phew!

Removing an existing BBL ^

I originally wrote that the only way to remove a BBL from an array component was to stop the array and then assemble it with --update=no-bbl, but it turns out there are two ways to remove the BBL from the devices of existing arrays.

Fail and re-add each device with update

It doesn’t seem to be documented anywhere, but you can fail a device out of an array and re-add it with an update to remove the BBL on that device, like this:

# mdadm --fail /dev/md0 /dev/sdb1 \
        --remove /dev/sdb1 \
        --re-add /dev/sdb1 \
        --update=no-bbl
mdadm: set /dev/sdb1 faulty in /dev/md0                                              
mdadm: hot removed /dev/sdb1 from /dev/md0                   
mdadm: re-added /dev/sdb1

This will only work if your array has a bitmap, otherwise it will refuse to re-add. Most arrays do get a bitmap, but small arrays won’t by default. Fortunately you can easily add a bitmap like this:

# mdadm --grow --bitmap=internal /dev/md0

The downside of this approach is that your array will have reduced redundancy while it rebuilds. It should rebuild pretty quickly though as the bitmap will cause only changed parts to be rewritten.
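
If you’re not sure whether a given array already has a bitmap, either of these should tell you (I believe mdadm --detail reports it as "Intent Bitmap", and the sysfs location file reads "none" when there isn’t one):

# mdadm --detail /dev/md0 | grep -i bitmap
# cat /sys/block/md0/md/bitmap/location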

(This won’t work if a BBL currently has any entries; see the comments below about --update=force-no-bbl for that case.)

Stop the array and assemble again with update

The other way to remove the BBL from devices is to stop the array and assemble it manually like this:

# mdadm --assemble /dev/mdX --update=no-bbl

The big problem with this is that stopping the array obviously causes downtime for whatever is using it. If your root filesystem is on an MD array (and why wouldn’t it be, if you use MD?) then that means downtime for the entire server, and you’re having to do this from some sort of rescue environment.

I have suggested that a config option be added to remove a BBL on assembly, so that this will happen the next time the machine is rebooted. This does not appear to have provoked any interest.

This method is quicker since it operates on all devices and doesn’t require a rebuild, but personally I usually find downtime more painful so I’d be inclined to schedule an “at-risk” maintenance window and do it the re-add way.

Avoiding the BBL at creation time ^

So if the BBL cannot be easily removed, at least it can be prevented from ever existing, right? When Neil Brown, the previous MD maintainer, was asked in 2016 if the feature could default to off, he said that putting this in the config file was as good as that:

CREATE bbl=no

The thing is, it’s not as good as disabling it by default when you consider what many users’ experience is of running the mdadm command: they don’t run mdadm, something else runs it for them. I’d go as far as to say that the majority of uses of mdadm are done by helper scripts and installers, not by human beings.

If it’s a program that is running mdadm for you then you are going to have to find out how to set that mdadm.conf before it reads it.
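
On a normally-installed Debian system the file in question is /etc/mdadm/mdadm.conf, so, as a sketch (assuming there isn’t already a CREATE line you’d need to merge it into), it’s just a matter of getting the line in there before anything runs mdadm --create:

# echo "CREATE bbl=no" >> /etc/mdadm/mdadm.conf

The hard part is doing the equivalent inside an installer, as the rest of this article shows.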

Take for example my own process of installing Debian. I do it by booting the Debian Installer by PXE. I have some pre-seeding done to answer a lot of the installer questions, but actually I do still do the disk partitioning stage in the installer’s text interface.

So there I was thinking this is actually going to be quite simple, because the Debian Installer is really lovely about letting you execute a shell and poke around. Surely all I am going to need to do is open a shell once and edit /etc/mdadm/mdadm.conf and then go back into the mdcfg menu and carry on, right? Oh dear me no.

You can read the details of my wild ride, which involved uploading a strace binary into the d-i so I could run mdadm under it and work out what was going on, but for those who’d rather not, just the relevant discoveries are in this article.

mdadm in d-i uses a config file at /tmp/mdadm.conf

After quite a bit of confusion over why even arrays I created manually with the mdadm command in the d-i shell still had a BBL, I discovered that the mdadm binary in d-i is compiled to have its config at /tmp/mdadm.conf. I don’t know why, but probably there is a good reason.

(At this point a number of people responded, “that’s because everything else will be set read-only.” That’s not the case with debian-installer which runs entirely off of a tmpfs. It’s all writeable.)

So just make the edit to /tmp/mdadm.conf then?

Oh ho ho no. Every time you go into the MD configuration section (mdcfg) it clobbers its own /tmp/mdadm.conf, and you can’t get to the “execute a shell” option without leaving the MD configuration section and then re-entering it afterwards, which clobbers your edit again.

If you’re on something with multiple virtual consoles (like if you’re sitting in front of a conventional PC) then you could switch to one of those after you’ve entered the MD configuration part and modify /tmp/mdadm.conf then.

I thought I didn’t have that option because I’m on a serial console, but it was pointed out to me that when the Debian installer detects it’s running in a serial console it runs itself under GNU Screen. So, by using the usual screen commands of ctrl+a n or ctrl+a p, one can switch backwards and forwards through the different virtual consoles. Neat!

There is also an earlier option to load an installer component that enables one to continue the installation process over SSH. If you select that then you can SSH in to the running installer system, so if you do that after you’ve entered the MD configuration bit in your main console, I guess you can then edit the config file and continue.

By one of those methods of getting a shell, after you’ve already entered the array configuration part but before you’ve actually created any arrays, I think you could edit /tmp/mdadm.conf to have “CREATE bbl=no” and the installer’s mdadm binary would respect that when you switch back.
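
In the d-i shell that could be as simple as the following, assuming (as observed above) the installer’s mdadm really does re-read /tmp/mdadm.conf each time it creates an array:

~ # echo "CREATE bbl=no" >> /tmp/mdadm.conf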

Alternatively you could just use the shell to create your arrays instead of using the Debian installer to do it. If it’s a simple case where you’ve just got an sda and an sdb disk identically partitioned and you want to make a bunch of arrays on them, it can be a fairly legible shell session like:

~ # mkdir -vp /etc/mdadm && echo "CREATE bbl=no" > /etc/mdadm/mdadm.conf
~ # for part in 1 2 3 5; do \
      mdadm --create \
            -v \
            --config=/etc/mdadm/mdadm.conf \
            /dev/md${part} \
            --level=1 \
            --raid-devices=2 \
            /dev/sd[ab]${part}; \
    done

Do not try this until you understand exactly what it is doing.

It iterates the list 1, 2, 3, 5 (I use the 4th partition for something else) and makes arrays called mdX out of sdaX and sdbX. The mdadm binary is forced to use our config file that disables creation of a BBL.

You can verify that a BBL does not exist on any of the array components like this:

~ # mdadm --examine-badblocks /dev/sda1
No bad-blocks list configured on /dev/sda1

You should get identical output for every component. If a component did have a BBL it would output something like this:

~ # mdadm --examine-badblocks /dev/sda1
Bad-blocks list is empty in /dev/sda1

You can then exit the d-i shell and go back to the disk partitioning section. You won’t need the MD configuration part now but even if you do go into it, it should detect all your manually-created arrays.

How to make progress? ^

All of this isn’t great but at least it’s fairly easy to pause the Debian installer and take some manual action. I suspect users of other Linux distributions may not be so lucky, and so I too think it would be a good idea if this buggy feature was disabled by default, or at least if there were a way to tell mdadm to remove the BBL on assembly.

In fact I would very much like to be able to tell it to remove the BBL on assembly so that I can disable the BBL feature on all my existing servers.

mdadm actually gets called by udev from inside the initramfs in incremental assembly mode, so I think the incremental assembly code needs to look in the config file for this “remove all the BBLs” directive and do it then during assembly as if update=no-bbl had been specified on a command line.
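
For reference, on an installed Debian system you can see where udev hands devices to mdadm in incremental mode with something like this (the exact rules file names vary between releases and between the main system and the initramfs):

# grep -rn incremental /lib/udev/rules.d/ /usr/lib/udev/rules.d/ 2>/dev/null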

It should be possible to write a script that:

  1. Looks in /sys/block/md* to find device components of all arrays.
  2. Checks each one to see if it has a BBL.
  3. If any are found, adds a bitmap if necessary.
  4. Does the fail/remove/re-add trick on each one in turn, waiting for the array to go back into sync each time.

i.e. it should be possible to automate this and run it at the end of an install so the entire install process can remain automated, or run it on a host any time after it’s been provisioned.
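
Here’s a very rough and untested sketch of what such a script might look like, assuming empty BBLs, components that appear under /sys/block/md*/md/dev-*, and that it’s acceptable to add an internal bitmap where one is missing. Treat it as an outline to adapt rather than something to run blindly:

#!/bin/sh
# Sketch: strip the BBL from every component of every MD array using the
# fail/remove/re-add trick described above. Assumes no BBL has any entries.
set -eu

for md in /sys/block/md*; do
    [ -d "$md" ] || continue
    array=/dev/${md##*/}

    # --re-add refuses to work without a bitmap, so add one if missing
    if [ "$(cat "$md/md/bitmap/location" 2>/dev/null)" = "none" ]; then
        mdadm --grow --bitmap=internal "$array"
    fi

    for devdir in "$md"/md/dev-*; do
        component=/dev/${devdir##*/dev-}

        # Skip components that have no BBL configured at all
        if mdadm --examine-badblocks "$component" 2>&1 \
               | grep -q 'No bad-blocks list'; then
            continue
        fi

        mdadm --fail "$array" "$component" \
              --remove "$component" \
              --re-add "$component" \
              --update=no-bbl

        # Wait for the bitmap-based resync before touching the next device
        while grep -Eq 'recovery|resync' /proc/mdstat; do
            sleep 5
        done
    done
done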

18 thoughts on “Debian-installer, mdadm configuration and the Bad Blocks Controversy”

  1. I had a thought: has anyone ever tried hacking their udev rules to use “--update=no-bbl” on the incremental assembly line? I have no idea if it respects that option in that mode, but if it did it would be a one-time edit of the udev rule file and then an update-initramfs to get it in there.

  2. First of all, thanks for this article. It’s always nice to feel a wee bit less alone in the world 🙂

    “Even worse, there are apparently bugs somewhere in the BBL code that cause a device’s BBL to be copied onto a new device when the array is rebuilt or a device replaced. Clearly it does not make sense for a new device to get a copy of another device’s BBL because they are inherently a per-device thing. So far there has been no successful intentional reproduction of this, only people unwittingly hitting it at the worst possible moments.”

    This is not correct. I wrote the initial post you’re referencing in the text and at one point I thought, well, I have a new drive so let’s see what happens. It obviously can’t make things worse. It didn’t, but in that RAID-6 it did replicate the BBL from the original drives that had it. Apparently this has happened a lot of times. The RAID has been around for some years.

    “Currently the only way to remove a BBL from an array component is to stop the array and then assemble it with an argument like this:

    # mdadm --assemble /dev/mdX --update=no-bbl”

    That won’t work if your array already has entries in the BBL. Then mdadm will just fail that and you’ll have to add a --force

    But again – thanks for this.

    1. Thanks for the correction. I will edit the article. Was this a rebuild replacing a failed/removed device, or a grow with an extra device?

      I did know about not being able to remove a BBL with entries in, I just thought that was obvious. None of my BBLs have entries in so it wasn’t really my concern. I will clarify.

        1. How do you mean? All my BBLs are empty. I’ve worked out why the re-add wasn’t working for me now: I was trying it on an array without a bitmap. Need to re-write parts of this article now because the re-add trick is reasonable to do from the installer and can even be scripted.

          1. I just tried this with an array with drives which all (or most) have a bbl with 5-10 entries each. Seems it won’t remove

            root@smilla:~# mddev=/dev/md0
            root@smilla:~# diskdev=/dev/sdh
            root@smilla:~# mdadm –fail $mddev $diskdev –remove $diskdev –re-add $diskdev –update=no-bbl
            mdadm: set /dev/sdh faulty in /dev/md0
            mdadm: hot removed /dev/sdh from /dev/md0
            mdadm: Cannot remove active bbl from /dev/md0
            mdadm: re-added /dev/sdh

            But

            root@smilla:~# mdadm --fail $mddev $diskdev --remove $diskdev --re-add $diskdev --update=force-no-bbl
            root@smilla:~# mdadm --examine-badblocks $diskdev
            No bad-blocks list configured on /dev/sdh

            So it seems it can be done 🙂

    2. > That won’t work if your array already has entries in the BBL. Then mdadm will just fail that and you’ll have to add a --force

      FWIW, I had to run `mdadm --assemble /dev/mdX --update=force-no-bbl` as `--update=no-bbl` raised an error. This option is not documented (at least not in the manpage).

      Also, it seems that you can use `--update=no-bbl` with `--manage --re-add`, this might be a solution to avoid downtime, I don’t know (actually, I’d rather reboot to a rescue system and update the array there).

      @Andy, the font size of the textarea to write a comment is super small and it’s really hard to read what you’ve written.

      1. Both of those things are already mentioned in the article. As you can see, re-add did not work for me; I still don’t know why. Will look into the comment box font, thanks!

      1. It seems fine. You may want to mention that re-add isn’t possible unless the array has a bitmap, as small arrays do not get a bitmap added by default. This was why re-add wasn’t working for me.

        You can add a bitmap easily and without downtime like:

        # mdadm --grow --bitmap=internal /dev/md0

        Also, for those worrying that the fail+re-add will leave the array without redundancy for a long time, it only resyncs the parts that the bitmap says need it, so it typically completes very fast unless your array has a huge write load, probably before you even get a chance to look at /proc/mdstat.

        1. I’m aware of that and I’ll make a note about it, but the bitmap has been on by default for some years now, so I don’t think there are too many out there that’ll be affected by this.

          1. Well, Debian buster systems that I’ve installed in the last week do not get bitmaps by default on their small arrays, only on their larger ones, which is what was confusing me about re-add not working.

            $ cat /proc/mdstat 
            Personalities : [raid1] [raid10] 
            md3 : active raid10 sda5[0] sdb5[1]
                  1871812608 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
                  bitmap: 14/14 pages [56KB], 65536KB chunk
             
            md2 : active raid10 sda3[0] sdb3[1]
                  975872 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
             
            md1 : active raid10 sda2[0] sdb2[1]
                  1951744 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
             
            md0 : active raid1 sda1[0] sdb1[1]
                  498368 blocks super 1.2 [2/2] [UU]
             
            unused devices: <none>

            Note that only md3 has a bitmap. Is this a peculiarity of the debian-installer or does mdadm not use a bitmap by default when the array is below some capacity?

  3. (couldn’t find a reply link on the last comment above, so here it goes)

    From the manual:

    When creating an array on devices which are 100G or larger, mdadm automatically adds an internal bitmap as it will usually be beneficial. This can be suppressed with --bitmap=none or by selecting a different consistency policy with --consistency-policy.

  4. I just retried this with kernel 5.7, and the results are

    mddev=/dev/md0
    diskdev=/dev/sda
    mdadm --fail $mddev $diskdev
    mdadm --remove $mddev $diskdev
    mdadm --re-add $mddev $diskdev --update=no-bbl

    after the --re-add, nothing is returned, so apparently it works. However, after this, the device shows up as a spare in /proc/mdstat, so I tried removing it and adding with --re-add (with or without --update=no-bbl), but no luck. So I had to resort to using --add, but that doesn’t take an --update argument, so I’m stuck with the BBL as I was before.
