
Two approaches to x86 memory encryption


By Jonathan Corbet
May 11, 2016
Techniques for hardening the security of running systems often focus on access to memory. An attacker who can write (or even read) arbitrary memory regions will be able to take over the system in short order; even the ability to access small regions of memory can often be exploited. One possible defensive technique would be to encrypt the contents of memory so that an attacker can do nothing useful with it, even if access is somehow gained; this type of encryption clearly requires hardware support. Both Intel and AMD are introducing such support in their processors, and patches to enable that support have been posted for consideration; the two manufacturers have taken somewhat different approaches to the problem, though.

Intel's Software Guard Extensions

Intel's offering is called "Software Guard Extensions," or SGX; details can be found on the SGX web page. SGX is built around the idea of creating "enclaves" of protected code and data. One or more ranges of physical memory are set aside as the "enclave page cache"; the contents of that memory (whether data or code) are only accessible to code that is, itself, located within the enclave. That code is callable from outside the enclave, but only via a set of entry points defined when the enclave is set up.

Memory within the enclave is encrypted using an engine built into the processor itself; the key that is used is generated at power-on and is not available to any running code. As a result, according to Intel's page, the contents of the enclave are "protected even when the BIOS, VMM, OS, and drivers are compromised". Those will certainly be appealing words for anybody who has despaired of ever preventing the compromise of any of those components.

This mechanism seems to be aimed at protecting relatively small ranges of memory; its overhead is apparently too high to do more than that. So, for example, one might load a private key and the code to sign data with that key into an enclave. Thereafter, it will be possible for an application to use the key to create signatures, but nobody can gain access to the key itself, even if the kernel itself is compromised.

The SGX patch set, posted by Jarkko Sakkinen, creates a new device (/dev/sgx) which supports a number of ioctl() calls to control the feature. Interestingly, there are no capability checks on the ioctl() calls themselves; anybody who can open the device can set up enclaves — and the default permissions allow access to everybody. A new enclave is created with SGX_IOCTL_ENCLAVE_CREATE, and pages of data are added to it with SGX_IOCTL_ENCLAVE_ADD_PAGE. Things are then made ready to run with SGX_IOCTL_ENCLAVE_INIT. That last operation requires passing in an initialization token containing a hash of the enclave data and an appropriate signature. There does not appear to be a way to delete an enclave once it has been established.
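
As a rough illustration, user space drives this interface with ordinary open() and ioctl() calls. The sketch below is hypothetical: the ioctl names are the ones from the patch set, but the UAPI header name and the argument structures are invented here for illustration and will not match the real definitions.

    /* Hypothetical sketch of driving the proposed /dev/sgx interface.
     * The ioctl names come from the patch set; the header name and the
     * argument structures are illustrative assumptions only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include "sgx_user.h"   /* hypothetical UAPI header from the patches */

    int main(void)
    {
        struct sgx_enclave_create create = { 0 };   /* base address, size, ... */
        struct sgx_enclave_add_page page = { 0 };   /* source data, offset, ... */
        struct sgx_enclave_init init = { 0 };       /* token: hash + signature */
        int fd = open("/dev/sgx", O_RDWR);          /* open to everybody by default */

        if (fd < 0) {
            perror("open /dev/sgx");
            return 1;
        }

        /* Create an empty enclave, add pages of code and data (a private
         * signing key and the code that uses it, say), then initialize it
         * with a token containing the enclave hash and a signature that
         * the launch-control hardware will accept. */
        if (ioctl(fd, SGX_IOCTL_ENCLAVE_CREATE, &create) ||
            ioctl(fd, SGX_IOCTL_ENCLAVE_ADD_PAGE, &page) ||
            ioctl(fd, SGX_IOCTL_ENCLAVE_INIT, &init))
            perror("enclave setup");

        close(fd);
        return 0;
    }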

The "appropriate signature" part turns out to be a bit of a sticking point, in that said signature must come from Intel itself. In other words, it is not currently possible for the owner of a system to set up and use an enclave without getting Intel's approval and agreeing to a set of conditions. That means, as Ingo Molnar put it, that "it only allows the execution of externally blessed static binary blobs". Needless to say, there is no shortage of opposition to that idea in the kernel community; Ingo went on to say:

I don't think we can merge any of this upstream until it's clear that the hardware owner running open-source user-space can also freely define/start his own secure enclaves without having to sign the enclave with any external party. I.e. self-signed enclaves should be fundamentally supported as well.

The discussion on the list suggests that Intel does plan to eventually make the feature work without the need for third-party signatures, but it is not clear when that might happen. Meanwhile, there is another roadblock, in that the patches do not actually work — one cannot actually run code in a protected enclave under Linux. Instead, enclaves can only run in the "debug mode," where it's possible to read and manipulate data inside the enclave from the rest of the system. That, obviously, detracts from the utility of the feature. It's not entirely clear why this limitation is in place.

Jarkko had wanted to place the code into the staging tree, where developers could play with it until it was actually made to work, but there is no visible community interest in going along with that plan. So the SGX patches are going to have to wait for some time until (1) they actually work, and (2) the hardware can support enclaves signed by arbitrary parties.

Secure Memory Encryption

AMD's technology is called "Secure Memory Encryption" (or SME); a description can be found in this white paper [PDF]. The patch set from Tom Lendacky adding support for SME came out, by some coincidence, one day after the SGX patches were posted.

SME is, in a sense, a simpler mechanism. Rather than establishing enclaves, a system with SME simply marks a range of memory (even all of memory) for encryption by setting a bit in the relevant page-table entries. The memory controller will then encrypt all data stored to those pages using a key generated at power-on time; all data read from the range will be transparently decrypted. No code running on the processor (not even the kernel) has access to the encryption key. Enabling encryption is said to slightly increase memory latency, but the white paper suggests that the performance impact will normally be quite small.
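
In concept, that is the whole programming model: a page is encrypted if its page-table entry has the relevant bit set. The following minimal sketch shows the idea; the bit position and helper names are made up for illustration, since the real mask is reported by the CPU at boot and applied through the kernel's normal page-table helpers.

    /* Conceptual sketch only: the real encryption-bit position is
     * discovered from the CPU at boot and applied through the kernel's
     * page-table machinery, not hard-coded like this. */
    #define SME_ENC_BIT   47                       /* hypothetical bit position */
    #define SME_ME_MASK   (1UL << SME_ENC_BIT)

    /* Mark a raw page-table-entry value as encrypted: the memory
     * controller will encrypt writes to, and decrypt reads from, the
     * page this entry maps. */
    static inline unsigned long pte_val_mkencrypted(unsigned long pteval)
    {
        return pteval | SME_ME_MASK;
    }

    /* Clear the bit for ranges that must stay in the clear, such as
     * video memory or the boot-time device tree. */
    static inline unsigned long pte_val_mkdecrypted(unsigned long pteval)
    {
        return pteval & ~SME_ME_MASK;
    }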

The SME approach, thus, will not protect memory from an attacker who has compromised the kernel; from the point of view of a running system, the memory might as well not be encrypted at all. Instead, it is intended to protect against cold-boot attacks, snooping on the memory bus, and the disclosure of transient data stored in persistent-memory arrays.

SME can be used as the base for another feature, though, called "Secure Encrypted Virtualization" (SEV), where each virtual machine gets its own key. At that point, the value of the feature will, if it functions as advertised, be significantly higher; it can protect virtual machines from each other, keeping their contents secure even if one of them manages to compromise the host system. Indeed, it should be able to protect virtual machines from the hypervisor itself. In a world where everything seems to be moving toward virtual machines running on shared cloud infrastructure, this kind of protection would be a useful enhancement to the security of the cloud as a whole.

The current patch set only implements SME, though, leaving SEV for the future. If the system is booted with the mem_encrypt=on command-line parameter, encryption will be enabled for all of physical memory, with a few exceptions. Video memory, for example, should not be encrypted; the device tree loaded at boot, if any, will also need to be accessed in the clear. Beyond that, encrypted memory will be used throughout, with most of the system being entirely unaware that the feature is in use.
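
The kernel-side plumbing for a switch like this is small. As a purely illustrative sketch (the variable and function names here are invented, not taken from the patches), a boot option of this kind is typically handled with early_param():

    /* Illustrative sketch of handling a mem_encrypt=on boot option;
     * names are invented here, not taken from Tom Lendacky's patches. */
    #include <linux/init.h>
    #include <linux/string.h>

    static bool sme_encryption_requested;

    static int __init mem_encrypt_setup(char *arg)
    {
        /* "mem_encrypt=on" asks for all of physical memory (with the
         * exceptions noted above) to be mapped encrypted. */
        if (arg && !strcmp(arg, "on"))
            sme_encryption_requested = true;
        return 0;
    }
    early_param("mem_encrypt", mem_encrypt_setup);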

The SME patches do not have the signature-related issues that came up with SGX, but this feature was not universally acclaimed either. Andy Lutomirski detailed several ways that he thinks the feature could be broken before concluding: "But I guess it's better than nothing." Paolo Bonzini added that the SEV feature is "very limited unless you paravirtualize everything" and worried that it is being oversold as a general mechanism for the securing of virtual machines. He suggested that the work should maybe not be merged in its current form.

Tom acknowledged that the technology has limitations in its current form, saying:

In this first generation of SEV, we are targeting a threat model very similar to the one used by SMEP and SMAP. Specifically, SEV protects a guest from a benign but vulnerable hypervisor, where a malicious guest or unprivileged process exploits a system/hypervisor interface in an attempt to read or modify the guest's memory. But, like SMEP and SMAP, if an attacker has the ability to arbitrarily execute code in the kernel, he would be able to circumvent the control. AMD has a vision for this generation of SEV to be foundational to future generations that defend against stronger attacks.

There have been no statements from the x86 maintainers regarding whether the SME patches would be merged, but it would not be surprising if their approach were similar to the one they have taken with the SGX patches: they will be seriously considered when the full functionality is present and that "vision" has been implemented.

The end result of all this discussion is that we will probably not see support for either manufacturer's memory-encryption technology in a near-term kernel. But the direction that hardware development is taking offers some encouragement. Those of us who have despaired of ever truly securing our software may well be right; we need levels of defense that come into play when the software has failed. Done right, hardware-based defenses can come to the rescue here without taking away our power to secure and control our own systems. Once the hardware reaches that point, Linux will certainly be able to take advantage of those capabilities.

Index entries for this article
Kernel: Security/Memory encryption



Two approaches to x86 memory encryption

Posted May 11, 2016 8:34 UTC (Wed) by mst@redhat.com (subscriber, #60682) [Link]

> The SME patches do not have the signature-related issues that came up with SGX, but this feature was not universally acclaimed either. Andy Lutomirski detailed several ways that he thinks the feature could be broken ...

IIUC Andy's comments are about ways to break SEV, not SME.

Two approaches to x86 memory encryption

Posted May 11, 2016 11:11 UTC (Wed) by pbonzini (subscriber, #60935) [Link]

Correct, and so was my request for advice about the viability of SEV in its current form. SEV patches actually have not been posted.

Also, it is not really necessary to "paravirtualize everything". Tom explained how to support non-paravirtualized devices and acknowledged that the AMD manual is unclear on this detail.

Two approaches to x86 memory encryption

Posted May 11, 2016 9:56 UTC (Wed) by linuxrocks123 (subscriber, #34648) [Link]

Loop-AmnESia, which is my work against the cold boot attack, will be obsolete on machines with SME enabled. And, unlike Loop-AmnESia and TRESOR, SME will defend against reading SSH/SSL session keys, against reading disk sectors copied into the page cache, and so on.

I like SME. SGX is also ... nice, sort of ... but its design is obviously geared mostly just for hardware-supported DRM.

Oh, here's a nice review of SGX from 2013: http://theinvisiblethings.blogspot.com/2013/08/thoughts-o...

Two approaches to x86 memory encryption

Posted May 11, 2016 13:31 UTC (Wed) by nix (subscriber, #2304) [Link]

Wow, SGX really does show its apparent heritage as an extension of the part of the odious closed-source-with-access-to-everything Management Engine that implements the DRM protected path, doesn't it? How to make a feature more or less useless in one easy lesson :( here's hoping it gets more useful in future, though the fact that it only works by virtue of a component that is, in itself, a huge gaping security hole that should never have existed is not a good sign.

SME looks sufficiently less odious that, once it is really useful, it might in itself tempt me away from Intel CPUs back to AMD...

Two approaches to x86 memory encryption

Posted May 11, 2016 16:20 UTC (Wed) by luto (subscriber, #39314) [Link]

It's not at all clear to me that the ME has much to do with SGX. The MEE does, but that's a different thing entirely.

Two approaches to x86 memory encryption

Posted May 11, 2016 15:19 UTC (Wed) by luto (subscriber, #39314) [Link]

I have no problem with supporting SME in its current incarnation. I just think that SEV is a bit dangerous in that people might think it provides guarantees that it does not provide.

The problem with supporting SGX in its current incarnation is that it's a closed ecosystem. (In fact, it's so closed that there appears to be no public way to use it on Linux *at all* right now.)

Two approaches to x86 memory encryption

Posted May 11, 2016 15:47 UTC (Wed) by pbonzini (subscriber, #60935) [Link]

Ok, good that we agree then. I will ask again for advice when the SEV patches are posted. Thanks!

Two approaches to x86 memory encryption

Posted May 11, 2016 17:13 UTC (Wed) by mm7323 (subscriber, #87386) [Link]

I guess the SEV feature also has the disadvantage of making kernel same-page merging between VMs impossible. Or not?

Two approaches to x86 memory encryption

Posted May 11, 2016 20:10 UTC (Wed) by pbonzini (subscriber, #60935) [Link]

Also within the same VM (this is nice to have if you run Windows, because it zeros pages in the background). But it's not a big deal.

Two approaches to x86 memory encryption

Posted May 17, 2016 19:58 UTC (Tue) by robbe (guest, #16131) [Link]

> Also within the same VM

Why? Does the virtual address somehow figure into the encryption key?

> But it's not a big deal.

Do you mean same-page merging does not give much benefit?

FWIW, VMware ESXi has had sharing of pages between VMs off by default for about two years now.

Two approaches to x86 memory encryption

Posted May 19, 2016 10:08 UTC (Thu) by oldtomas (guest, #72579) [Link]

> Does the virtual address somehow figure into the encryption key?

Most probably it is mixed in, as an initialization vector. You don't want two identical blocks to encrypt identically, because that gives clues to an eavesdropper.

If you are encrypting sequentially, you typically use some hash of the last material as IV for the next; if you want random access, the address has to enter somehow (via a hash again).

This page [1] has the gory details (for the TL;DR just look at the cute penguins ;-)

[1] https://en.wikipedia.org/wiki/Cipher_block_chaining

Two approaches to x86 memory encryption

Posted May 11, 2016 23:00 UTC (Wed) by dlang (guest, #313) [Link]

Passing information between the hypervisor and guest will get interesting if the memory is encrypted with different keys (SEV).

But it's probably a good thing as it will force the interface between the two to be well defined or break.

Two approaches to x86 memory encryption

Posted May 12, 2016 3:02 UTC (Thu) by josh (subscriber, #17465) [Link]

(Disclaimer: I work for Intel, and I'm involved with the project to enable Open Source use of SGX.)

> This mechanism seems to be aimed at protecting relatively small ranges of memory; its overhead is apparently too high to do more than that.

SGX supports arbitrarily large enclaves. The encrypted region of physical memory has to be reserved during early boot (using the PRMRR_BASE and PRMRR_MASK MSRs), so typically it's a small portion of physical memory to avoid taking too much memory from the OS; however, that could be configured to be larger if needed. Also, SGX supports the kernel paging memory in and out of the enclave (encrypting it before giving it to the kernel to store), which allows for arbitrarily large enclaves regardless of the amount of reserved physical memory. Typically you do want to minimize the amount of code that you put inside the enclave to minimize your attack surface and the amount of code you trust, but there isn't any architectural limitation on size.

> The discussion on the list suggests that Intel does plan to eventually make the feature work without the need for third-party signatures,

Specifically, we're introducing a new CPU feature to specify a key used to sign the Launch Enclave, which serves as the root of trust. This is already documented in the Intel Software Developer's Manual (http://www.intel.com/content/dam/www/public/us/en/documen...): in table 35-2, "Architectural MSRs", there are four MSRs, IA32_SGXLEPUBKEYHASH{0,1,2,3}, which together provide a SHA256 hash of the public key used to sign the launch enclave. That same section also documents the CPUID bits that indicate the availability of those MSRs.

The launch enclave specified by this hash then implements the policy that determines what enclaves can run (using any policy you choose), providing the root of trust; the hardware architecture then enforces isolation of enclaves. Individual enclaves, in turn, can choose how much to trust each other, and would typically communicate between each other with cryptographic verification. For instance, your ssh-agent enclave wouldn't trust someone else's ssh-agent enclave, but two isolated components run by the same user might, such as a secret-storage enclave and an enclave maintaining a TLS connection.

See section 39.1.4, "Intel SGX Launch Control Configuration", for more on how the launch control handles launch policy for other enclaves.

> Meanwhile, there is another roadblock, in that the patches do not actually work — one cannot actually run code in a protected enclave under Linux. Instead, enclaves can only run in the "debug mode," where it's possible to read and manipulate data inside the enclave from the rest of the system. That, obviously, detracts from the utility of the feature. It's not entirely clear why this limitation is in place.

The CPU keeps the root of trust as simple as possible: within the CPU, it's just the hash of a key used to sign the launch enclave; code running in that enclave can then implement the security policy for loading other enclaves. Debug mode is designed for development and proof-of-concept work (including of the launch enclave itself), before producing a production enclave and using production keys and signatures. Debug mode doesn't require a signature that chains back to the root of trust. Effectively, this makes it possible to use Skylake as a development platform for enclaves that will also run on future hardware in production mode with an owner-controlled root of trust.

Two approaches to x86 memory encryption

Posted May 13, 2016 12:11 UTC (Fri) by malor (guest, #2973) [Link]

> for enclaves that will also run on future hardware in production mode with an owner-controlled root of trust.

So, in other words, it's not going to be under owner control in this chip generation.

Personally, I wouldn't touch that with a fifty-meter pole. That's just Trusted Computing writ large. It means Intel or any authorized agent can trust my computer, and protect it, even against me.

It has nothing to do with my benefit at all.

Two approaches to x86 memory encryption

Posted May 12, 2016 3:19 UTC (Thu) by TRS-80 (guest, #1804) [Link]

Does SME protect against Rowhammer?

Two approaches to x86 memory encryption

Posted May 12, 2016 4:33 UTC (Thu) by dlang (guest, #313) [Link]

not really (although it makes it a bit harder)

rowhammer is a hardware bug where repeated access at one address can flip a bit at another address. the fixes are to make it harder to know the exact alignment of important memory

encrypting the memory makes such attacks harder because you don't know what bits are being stored (without some other way of watching from the outside, which may exist)

but there are only so many possible bit patterns, so it doesn't block it entirely.

Two approaches to x86 memory encryption

Posted May 12, 2016 8:00 UTC (Thu) by pbonzini (subscriber, #60935) [Link]

Alteration of the ciphertext will completely randomize the plaintext. So if you use rowhammer to flip a bit of ciphertext, the contents of the other location will change randomly.

So rowhammer can still be used with SME (e.g. you could affect a key generation operation and produce non-prime p and q), but the result is much less controlled and thus the applicability is smaller.

Two approaches to x86 memory encryption

Posted May 12, 2016 16:11 UTC (Thu) by ballombe (subscriber, #9523) [Link]

If p or q is not prime, then standard RSA signature check/decryption will not work either, so it is not so easy.

Two approaches to x86 memory encryption

Posted May 12, 2016 9:34 UTC (Thu) by jsakkine (subscriber, #80603) [Link]

Just a minor correction: there is an ioctl for destroying the enclave.

Two approaches to x86 memory encryption

Posted May 12, 2016 11:20 UTC (Thu) by NAR (subscriber, #1313) [Link]

> Thereafter, it will be possible for an application to use the key to create signatures, but nobody can gain access to the key itself, even if the kernel itself is compromised.

Isn't the ability to create signatures nearly as useful as actually having the key? So if, for example, the build system of a Linux distribution is compromised, the attacker can't obtain the key and sign packages on his own system, but he can still create and sign malicious packages on the compromised system itself, can't he?

> Indeed, it should be able to protect virtual machines from the hypervisor itself.

I'm not familiar with these technologies, but isn't it possible for the hypervisor to somehow impersonate the CPU and pretend that SEV is turned on? The guest will happily assume that everything is encrypted while it isn't.

Two approaches to x86 memory encryption

Posted May 13, 2016 10:46 UTC (Fri) by bytelicker (guest, #92320) [Link]

I like AMD's implementation more. It is transparent and does not rely on external configuration. Intel's implementation reminds me of application-level encryption; you could just as well manage your memory as encrypted in your application.

Two approaches to x86 memory encryption

Posted May 16, 2016 20:33 UTC (Mon) by pbonzini (subscriber, #60935) [Link]

They do different things. AMD protects the OS, Intel protects the application (for DRM purposes, more or less).

Two approaches to x86 memory encryption

Posted May 18, 2016 22:50 UTC (Wed) by anguslees (subscriber, #7131) [Link]

How do these compare to arm's TrustZone?

Two approaches to x86 memory encryption

Posted May 19, 2016 20:06 UTC (Thu) by alonz (subscriber, #815) [Link]

Like apples compare to oranges ;) – they are both fruits.

ARM TrustZone just provides signals on the bus saying whether the CPU is in the “trusted” mode or not. (Actually, other bus masters—such as the GPU—generate the same signals as well). And bus slaves then enforce some separation between secure and non-secure memories, all strictly by address comparisons.

Some ARM-based devices also implement memory encryption (as an extra layer, on top of TrustZone) – but this is far from being the norm.


Copyright © 2016, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds