Zeroing buffers is insufficient

On Thursday I wrote about the problem of zeroing buffers in an attempt to ensure that sensitive data (e.g., cryptographic keys) which is no longer wanted will not be left behind. I thought I had found a method which was guaranteed to work even with the most vexatiously optimizing C99 compiler, but it turns out that even that method wasn't guaranteed to work. That said, with a combination of tricks, it is certainly possible to make most optimizing compilers zero buffers, simply because they're not smart enough to figure out that they're not required to do so — and some day, when C11 compilers become widespread, the memset_s function will make this easy.
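
To make the "combination of tricks" concrete, here is one widely used example (a sketch, not a guarantee, and not code from the earlier post): calling memset through a volatile function pointer, which today's compilers will not elide because they cannot prove which function the pointer holds at the call site. Nothing in the standard forbids a smarter compiler from removing it; the standard-blessed answer is memset_s, which lives in the optional Annex K of C11.

#include <string.h>

/*
 * Best-effort zeroing: call memset through a volatile function
 * pointer.  Because the compiler cannot assume which function the
 * pointer holds at the call site, it cannot apply its knowledge of
 * memset's semantics and delete the "dead" stores.  This defeats
 * current optimizers, but the C standard does not guarantee it; the
 * guaranteed tool is memset_s from C11's optional Annex K.
 */
static void * (* volatile memset_v)(void *, int, size_t) = memset;

static void
secure_zero(void * buf, size_t len)
{

    (void)memset_v(buf, 0, len);
}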

There's just one catch: We've been solving the wrong problem.

With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers.

Now, some parts of the stack are easy to zero (assuming a cooperative compiler): The parts which contain objects which we have declared explicitly. Sensitive data may be stored in other places on the stack, however: Compilers are free to make copies of data, rearranging it for faster access. One of the worst culprits in this regard is GCC: Because its register allocator does not apply any backpressure to the common subexpression elimination routines, GCC can decide to load values from memory into "registers", only to end up spilling those values onto the stack when it discovers that it does not have enough physical registers (this is one of the reasons why gcc -O3 sometimes produces slower code than gcc -O2). Even without register allocation bugs, however, all compilers will store temporary values on the stack from time to time, and there is no legal way to sanitize these from within C. (I know that at least one developer, when confronted by this problem, decided to sanitize his stack by zeroing until he triggered a page fault — but that is an extreme solution, and is both non-portable and very clear C "undefined behaviour".)
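
For illustration, here is the gentler cousin of that page-fault hack, similar in spirit to the stack "burning" some libraries (libgcrypt, for one) perform: overwrite a chunk of the current stack frame and recurse to cover more. Everything about it is an assumption: that the stack is contiguous and laid out the way we expect, that the scratch buffer overlaps the frames where spills happened, and that the compiler keeps the stores. It is best-effort at most.

#include <string.h>

/*
 * Best-effort "stack burning" (a sketch, not a guarantee).  Allocate
 * a scratch buffer in the current frame and overwrite it, recursing
 * until roughly len bytes of stack have been covered.  This assumes a
 * conventional contiguous stack and a compiler which does not discard
 * the stores; C promises neither.
 */
static void * (* volatile memset_v)(void *, int, size_t) = memset;

void
burn_stack(size_t len)
{
    unsigned char scratch[1024];

    (void)memset_v(scratch, 0, sizeof(scratch));
    if (len > sizeof(scratch))
        burn_stack(len - sizeof(scratch));
}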

One might expect that the situation with sensitive data left behind in registers is less problematic, since registers are liable to be reused more quickly; but in fact this can be even worse. Consider the "XMM" registers on the x86 architecture: They will only be used by the SSE family of instructions, which most applications rarely touch — so once a value is stored in one of those registers, it may remain there for a long time. One of the rare cases in which cryptographic code does use those registers, however, is AES computations using the "AESNI" instruction set.
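
To see how easily this happens, consider a minimal AES-128 block encryption written with the AESNI intrinsics (an illustrative sketch, not taken from any particular library): every round key is loaded into an __m128i, that is, an XMM register, and nothing in this code clears those registers before returning.

#include <stdint.h>
#include <wmmintrin.h>  /* AESNI intrinsics; build with -maes */

/*
 * Encrypt one block with an already-expanded AES-128 key schedule
 * (11 round keys).  Each round key passes through an XMM register,
 * and this function makes no attempt to clear those registers;
 * portable C gives it no way to do so.
 */
void
aes128_encrypt_block(const __m128i rk[11], const uint8_t in[16],
    uint8_t out[16])
{
    __m128i x;
    int i;

    x = _mm_xor_si128(_mm_loadu_si128((const __m128i *)in), rk[0]);
    for (i = 1; i < 10; i++)
        x = _mm_aesenc_si128(x, rk[i]);
    x = _mm_aesenclast_si128(x, rk[10]);
    _mm_storeu_si128((__m128i *)out, x);
}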

It gets worse. Nearly every AES implementation using AESNI will leave two values in registers: The final block of output, and the final round key. The final block of output isn't a problem for encryption operations — it is ciphertext, which we can assume has leaked anyway — but for encryption the AES-128 key can be computed from the final round key, and for decryption the final round key is the AES-128 key. (For AES-192 and AES-256, revealing the final round key hands an attacker 128 bits of the key schedule.) I am absolutely certain that there is software out there which inadvertently keeps an AES key sitting in an XMM register long after it has been wiped from memory. As with "anonymous" temporary space allocated on the stack, there is no way to sanitize the complete CPU register set from within portable C code — which should probably come as no surprise, since C, being designed to be a portable language, is deliberately agnostic about the registers and even the instruction set of the target machine.
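
There is no portable remedy, but to show what the non-portable one looks like: on x86-64, with GCC- or Clang-style inline assembly, one can explicitly zero the sixteen XMM registers after sensitive work. This is exactly the sort of per-architecture, per-compiler code C was designed to spare us, and even then it says nothing about copies the compiler has already spilled elsewhere.

/*
 * Non-portable illustration: zero the sixteen XMM registers using
 * GCC/Clang inline assembly on x86-64.  Portable C cannot express
 * this, and it does nothing about values already spilled to memory.
 */
void
clear_xmm_registers(void)
{

    __asm__ __volatile__ (
        "pxor %%xmm0, %%xmm0\n\t"
        "pxor %%xmm1, %%xmm1\n\t"
        "pxor %%xmm2, %%xmm2\n\t"
        "pxor %%xmm3, %%xmm3\n\t"
        "pxor %%xmm4, %%xmm4\n\t"
        "pxor %%xmm5, %%xmm5\n\t"
        "pxor %%xmm6, %%xmm6\n\t"
        "pxor %%xmm7, %%xmm7\n\t"
        "pxor %%xmm8, %%xmm8\n\t"
        "pxor %%xmm9, %%xmm9\n\t"
        "pxor %%xmm10, %%xmm10\n\t"
        "pxor %%xmm11, %%xmm11\n\t"
        "pxor %%xmm12, %%xmm12\n\t"
        "pxor %%xmm13, %%xmm13\n\t"
        "pxor %%xmm14, %%xmm14\n\t"
        "pxor %%xmm15, %%xmm15"
        : : : "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
          "xmm7", "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13",
          "xmm14", "xmm15");
}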

Let me say that again: It is impossible to safely implement any cryptosystem providing forward secrecy in C.

If compiler authors care about security, we need a new C language extension. After discussions with developers — of both cryptographic code and compilers — over the past couple of years, I propose that a function attribute be added with the following meaning:

"This function handles sensitive information, and the compiler must ensure that upon return all system state which has been used implicitly by the function has been sanitized."
While I am not a compiler developer, I don't think this is an entirely unreasonable feature request: Ensuring that registers are sanitized can be done via existing support for calling conventions by declaring that every register is callee-save, and sanitizing the stack should be easy given that that compiler knows precisely how much space it has allocated.
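
To make the request concrete, here is what using such an attribute might look like. The attribute name "sensitive", the GCC-style spelling, and the function signature are all invented for illustration; no compiler implements this today.

#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical: the "sensitive" attribute does not exist in any
 * compiler.  The intended contract is that when this function
 * returns, every register and every byte of stack it used, whether
 * explicitly or implicitly, has been sanitized by compiler-generated
 * code.
 */
__attribute__((sensitive))
void hmac_sha256(const uint8_t key[32], const uint8_t * data, size_t len,
    uint8_t mac[32]);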

With such a feature added to the C language, it will finally be possible — in combination with memset_s from C11 — to write code which obtains cryptographic keys, uses them without leaking them into other parts of the system state, and then wipes them from memory so that a future system compromise can't reveal the keys. People talk a lot about forward secrecy; it's time to do something about it.
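
Putting the pieces together, the intended pattern would look roughly like this; the attribute is the same hypothetical one as above, obtain_session_key and encrypt_messages are placeholders for whatever key agreement and bulk cryptography are in use, and memset_s requires a C11 library that actually ships Annex K.

#define __STDC_WANT_LIB_EXT1__ 1    /* ask for memset_s (C11 Annex K) */
#include <stdint.h>
#include <string.h>

void obtain_session_key(uint8_t key[32]);    /* placeholder */

/* Hypothetical attribute: compiler sanitizes its registers and stack. */
__attribute__((sensitive))
void encrypt_messages(const uint8_t key[32]);    /* placeholder */

void
handle_session(void)
{
    uint8_t key[32];

    obtain_session_key(key);
    encrypt_messages(key);    /* intended: no implicit copies survive */
    memset_s(key, sizeof(key), 0, sizeof(key));    /* wipe the explicit one */
}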

But until we get that language extension, all we can do is hope that we're lucky and our leaked state gets overwritten before it's too late. That, and perhaps avoid using AESNI instructions for AES-128.

Posted at 2014-09-06 08:10