
Meltdown strikes back: the L1 terminal fault vulnerability

By Jonathan Corbet
August 14, 2018
The Meltdown CPU vulnerability, first disclosed in early January, was frightening because it allowed unprivileged attackers to easily read arbitrary memory in the system. Spectre, disclosed at the same time, was harder to exploit but made it possible for guests running in virtual machines to attack the host system and other guests. Both vulnerabilities have been mitigated to some extent (though it will take a long time to even find all of the Spectre vulnerabilities, much less protect against them). But now the newly disclosed "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) brings back both threats: relatively easy attacks against host memory from inside a guest. Mitigations are available (and have been merged into the mainline kernel), but they will be expensive for some users.

Page-table entries

Understanding L1TF requires an understanding of the x86 page-table entry (PTE) format. Remember that, in a virtual-memory system, the memory addresses used by both user space and the kernel do not point directly into physical memory. Instead, the hierarchical page-table structure is used to translate between virtual and physical addresses. At the bottom level of this structure, the PTE tells the processor whether the page is actually present in physical memory, where it is, and a few other details. It looks like this for a 4KB page on an x86-64 system:

[page-table entry]

The page-frame number (PFN) tells the processor where to find the page in physical memory. The other bits control which memory protection key is assigned to the page, access permissions, whether and how the page is cached, whether it is dirty, and more. All of this, though, depends on the present ("P") bit in the least-significant position. If that bit is not set, the page is not actually present in physical memory, and any attempt to reference it will generate a page fault.

For non-present pages, none of the other bits in the page-table entry are meant to be used by the processor, so the kernel can use those bits to store useful information; for example, for pages that have been swapped out, the location in the swap area is stored in the PTE. In other cases, the data left in non-present PTEs is essentially random.

Ignoring the present bit

If the present bit in a given PTE is not set, the PFN field of that PTE has no defined meaning and the CPU has no business trying to use it. So, naturally, Intel CPUs do exactly that during speculative execution (it would appear that Intel is the only vendor to make this particular mistake). During speculative execution, non-present PTEs are treated as if they were valid, so non-present PTEs can be used to speculatively read whatever data lives in the indicated PFN — but, importantly, only if that data is in the processor's L1 cache. The access is speculative only; the processor will eventually notice that the page is not actually present and generate a page fault instead. But, by the time that happens, the usual sorts of covert channels can be used to exfiltrate the data in whatever page the PTE might have pointed to.

Since this attack goes directly to a physical address, it can in theory read any memory in the system. Notably, that includes data kept within an SGX encrypted enclave, which is supposed to be protected from this kind of thing.

Exploiting this vulnerability requires the ability to run code on the target system. Even then, on its face, this bug is somewhat hard to exploit. Attackers cannot directly create non-present PTEs pointing to a page of interest, so they must depend on such PTEs already existing in their address space. By filling the address space with pages that will eventually get reclaimed or by playing tricks with PROT_NONE mappings, an attacker can essentially throw darts at the system and hope that one hits in an interesting place, but it's a non-deterministic process where it's even hard to tell if one has succeeded.

Nonetheless, the potential for the extraction of important secrets exists, and thus this bug must be defended against. The approach taken here is to simply invert all of the bits in a PTE when it is marked as being not present; that will cause that PTE to point into a nonexistent region of memory. The fix is easy, and the performance cost is almost zero. A quick kernel upgrade, and this problem is solved.

Virtualization

At least, the problem is solved on systems where virtualization is not in use. On systems with virtualized guests, at a minimum, those guests must also run a kernel using the PTE-inversion technique to protect against attacks. If the guests are trusted, or if they cannot install their own kernels, the problem stops here.

But if the system is running with untrusted guests and, in particular, if that system allows those guests to provide their own kernels (as many hosting services do), the situation changes. An attacker can then run a kernel that creates arbitrary non-present PTEs on demand, turning a shot-in-the-dark attack into something that can be targeted with precision. To make an attacker's life even easier, the speculative data reference bypasses the extended page tables in the guest, allowing direct access to physical memory. So an attacker who can install a kernel in a guest instance can attack the host (or other guests) with relative ease. In this context, L1TF can be seen as a limited form of Meltdown that can escape virtualization.

Protecting against hostile guests is a harder task, and the correct answer will depend on the specifics of the workload being run. The first step is to take advantage of the fact that L1TF can only read data that is in the processor's L1 cache. If that cache is cleared every time the kernel transfers control to a virtual machine, there will be no data available for the attacker to read. That is indeed what the kernel will do. This mitigation will be rather more costly, needless to say; how much it costs will depend on the workload. On systems where entries into (and exits from) guests are relatively rare, the cost will be low. On systems where those events are common, the cost could approach a 50% performance hit.

Unfortunately, just clearing the L1 cache is not a complete solution if the CPU is running symmetric multi-threading (SMT or "hyperthreads"). The threads running on that processor share the L1 cache. So, while the hostile guest is running in one thread, an unrelated process could be repopulating the L1 cache with interesting data in the other thread. That clearly reopens the can of worms.

The obvious solution here is to disable SMT, which can potentially protect against other security issues as well. But that clearly comes with a significant performance cost of its own. It is not as bad as simply removing half of the system's processors, but, in a virtual sense, that is exactly what is happening. An alternative is to use CPU affinities to restrict guests to specific processors and to not allow anything else (including, for example, kernel functionality like interrupt handling) to run on those processors. This approach might gain back some performance for specific workloads, but it clearly requires a lot of administrator knowledge about what those workloads are and a lot of manual configuration. It also seems somewhat error-prone.

There is another approach that can be taken to protect hosts from hostile guests: rather than do all of the above, simply disable the use of the extended page-table feature. That forces the system back to the older "shadow page table" mechanism, where the hypervisor retains the ultimate control over all PTEs. This, too, will slow things down significantly, but it provides complete protection since the attacker is no longer able to create non-present PTEs pointing to pages of interest.

As an aside, it's worth pointing out an interesting implication of this vulnerability. Virtualization is generally seen as being more secure than containers due to the extra level of isolation used. But, as we see here, virtualization also requires an extra level of processor complexity that can be the source of security problems in its own right. Systems running container workloads will be only lightly affected by L1TF, while those running virtualization will pay a heavy cost.

Kernel settings

Patched kernels will perform the inversion on non-present PTEs automatically. Since there is no real cost to this technique, there is no reason (and no ability) to turn it off. The flushing of the L1 cache on entry to virtual guests will be done if extended page tables are enabled. The disabling of SMT, though, will not be done by default; administrators of systems running untrusted guests will have to examine the tradeoffs and decide what the best approach is to protect their systems. For people faced with this kind of choice, some more information can be found in Documentation/admin-guide/l1tf.rst.

The 4.19 kernel will contain the mitigations, of course. As of this writing, the 4.18.1, 4.17.15, 4.14.63, 4.9.120, and 4.4.148 updates, containing the fixes, are in the review process with release planned on August 16.

As was the case with the previous rounds, the mitigations for L1TF were worked out under strict embargo. The process appears to have worked a little better this time around, with no real leakage of information to force an early disclosure. One can only wonder how many more of these are known and under embargo now — and how many are yet to be discovered. It seems likely that we will be contending with speculative-execution vulnerabilities for some time yet.


Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 18:29 UTC (Tue) by clopez (guest, #66009) [Link]

This only affects Intel CPUs... AMD is safe.. right?

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 18:32 UTC (Tue) by corbet (editor, #1) [Link]

That is my understanding, yes; this only affects Intel.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 18:58 UTC (Tue) by danpb (subscriber, #4831) [Link]

Correct, this only affects Intel x86 CPUs. AMD x86 is not affected, nor are non-x86 architectures.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 15:31 UTC (Wed) by Curan (subscriber, #66186) [Link]

Yes, only Intel is affected, according to the kernel documentation.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 23:25 UTC (Wed) by rahvin (guest, #16953) [Link]

The developers of the hack only evaluated Intel; they think AMD does it differently, so it's probably not vulnerable, but I would like to see someone come out and say they tested this hack and its variants on AMD before we completely clear it.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 19:16 UTC (Tue) by smoogen (subscriber, #97) [Link]

> Systems running container workloads will be only lightly affected by L1TF, while those running virtualization will pay a heavy cost.

This may be obvious, but are you talking about containers in any environment or only if they are running on baremetal? Many of the container systems I have seen run them inside a virtualized environment sitting on top of baremetal to allow for one thing to do what its best at, and the other to do something else.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 20:31 UTC (Tue) by ssl (guest, #98177) [Link]

The ownership is crucial here. If you own both the virtual infrastructure (=hypervisor) and the container-hosting VMs (and all the other guests), then it's not as big a problem for you.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 11:48 UTC (Wed) by danpb (subscriber, #4831) [Link]

Even if the same person/organization owns the guest VM and host, the "trusted" guest VM can become "untrustworthy" if some software component in it gets compromised (through one of the countless bugs all complex software has). So if they're relying on use of VMs to isolate their applications from each other, the risk is still notable even in the single owner case.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 3:08 UTC (Thu) by Rearden (subscriber, #35172) [Link]

But there would be no reason for a "friendly" VM to run an unpatched kernel in the virtualized environment. The issue with hostile guests only manifests when the "attacker" can run an unpatched kernel in the VM. As long as the VM's kernel is patched to invert the PFN, it doesn't matter if someone attempts the attack against the VM kernel; it won't be affected.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 18:42 UTC (Thu) by jcm (subscriber, #18262) [Link]

A "trusted" VM doesn't exist in reality. There are always going to be new bugs discovered that a determined attacker can use to compromise and perform privilege escalation. Then that "trusted" kernel becomes whatever they want very quickly. This is a nuance that isn't getting the necessary attention because it's a boring detail...

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 17, 2018 0:01 UTC (Fri) by Rearden (subscriber, #35172) [Link]

I think that argument is pretty reductive, and goes well past individual mitigation for this particular threat, and the reason why it's not strictly required to take extra steps in the case where both the Host and Guest OS's are "trusted".

Of course some further privilege escalation vulnerability could expose the VM host OS to this, but a further privilege escalation vulnerability would likely also expose all sorts of other things as well, this vulnerability being just one of many.

Big-picture security comes down to risk mitigation through a layered approach, depending on the resources available and the risk associated with a particular breach. Some future, possible "privilege escalation" vulnerability must be planned for outside of the remedy for this specific vulnerability. What I mean is: if the workload and risk for a system where you own both the VM and host OS are high enough that a compromise of one could impact important data, you probably need to be taking the steps associated with "untrusted" guest VMs anyway.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Oct 24, 2018 6:48 UTC (Wed) by alejluther (subscriber, #5404) [Link]

Yes, it does exist. You are thinking of a VM with a server role serving client requests, where clients directly access the system, so vulnerabilities can be exploited. But VMs could host other services not directly connected or accessible to clients. For example, telcos have VMs working with packets where those packets do not have the VM as the endpoint.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 19:40 UTC (Tue) by pbonzini (subscriber, #60935) [Link]

Note that the "conditional cache flushes" mode should incur only a modest performance penalty (on top of what you get from disabling SMT).

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 20:19 UTC (Tue) by nilsmeyer (guest, #122604) [Link]

Was this the same type of anti-competitive embargo that was employed with Meltdown, giving unfair advantage to the bigger cloud providers?

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 21:46 UTC (Tue) by Sesse (subscriber, #53779) [Link]

Is there a way to enable SMT but only schedule threads from the same user/VM on the hyperthread pair? That would seem to keep security, while still keeping most of the performance advantages.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 22:35 UTC (Tue) by nilsmeyer (guest, #122604) [Link]

You will also need to take care that nothing outside the VM is scheduled there (kernel threads, interrupts etc.).

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 14, 2018 22:37 UTC (Tue) by Sesse (subscriber, #53779) [Link]

Yes—one would think this would be a good feature for a scheduler to have in general. (Think side-channel attacks in general.)

OpenBSD “solves” this by simply not supporting hyperthreading, but that seems too heavy-handed to me.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 18:32 UTC (Wed) by jcm (subscriber, #18262) [Link]

It would be nice if we could guarantee no secrets were loaded, track, and flush them, but this isn't something most general purpose OS stacks running a full environment on a host do today. It is something some hypervisors are doing to mitigate against L1TF, and obviously is going to be investigated over time to improve the state of available mitigations on Linux, but it's very non-trivial.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 18:34 UTC (Wed) by Sesse (subscriber, #53779) [Link]

FWIW, my request was for just “not at the same time” (to make timing attacks much harder), not flushing L1 on every context switch. That's too heavy-handed for most userspace.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 8:00 UTC (Wed) by vbabka (subscriber, #91706) [Link]

Looks like Microsoft went with this approach for Hyper-V: https://blogs.technet.microsoft.com/virtualization/2018/0...

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 14:02 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

Which also goes to show that they had advance knowledge of the vulnerability.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 14:17 UTC (Wed) by JoelWilliamson (guest, #105956) [Link]

It looks like the Core Scheduler for Hyper-V was introduced in 2016. I'm sure Microsoft had some advance knowledge, but I doubt they knew about this before Project Zero discovered Meltdown/Spectre. More likely, Microsoft introduced this as a general mitigation against any leaking between threads sharing a core, or simply to give guests a bit more control of their own scheduling.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 18:46 UTC (Wed) by jcm (subscriber, #18262) [Link]

We should give kudos to Microsoft here. They thought about the potential risks from tight resource sharing some time ago and obviously were able to invest significantly ahead of any one vulnerability such as this one in secret scrubbing/address space isolation/core scheduling/etc. This is something that we'll need to look at in Linux and other OSes over time if we want to make HT totally safe as well. It's a great example of what's possible, but a big lift. We do need to also get over the mindset that SMT threads are "cores". Especially in projects like OpenStack. An SMT thread is not a core, and we should never treat it as such, but it's easy to think in those terms when looking at /proc/cpuinfo output and just counting "cpus".

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 23, 2018 22:56 UTC (Thu) by ssmith32 (subscriber, #72404) [Link]

You are correct. Reading the article makes that clear. The default in server 2016 is the classic scheduler, which does not bind LP to VP 1:1. The core scheduler was introduced as an option in Server 2016, both for security & performance SLA reasons.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 0:52 UTC (Wed) by ncm (guest, #165) [Link]

More top-quality reporting, reminding us how valuable our subscriptions are.

I wonder, though. If we zero out the empty page table entries, how do we know where to look for backing store, in swap, when we fault?

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 2:03 UTC (Wed) by corbet (editor, #1) [Link]

PTEs are not zeroed out, they are bitwise inverted, so the information is still there. Sorry if that wasn't clear.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 7:09 UTC (Wed) by HIGHGuY (subscriber, #62277) [Link]

One thing that I haven't come across anywhere is: how can we be sure that the inverted pte never points into something valid? Doesn't this just shift the problem around?

(I'm sure this was thought through, I just couldn't find why this is ok to do)

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 7:23 UTC (Wed) by pbonzini (subscriber, #60935) [Link]

It would require several terabytes of swap.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 7:55 UTC (Wed) by vbabka (subscriber, #91706) [Link]

The swap size is in fact limited on vulnerable CPUs so that it's not possible to exceed it.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 18:36 UTC (Wed) by jcm (subscriber, #18262) [Link]

There is a limit depending upon configured MAXPHYADDR and the number of bits/translation levels supported. Effectively, even with things like Superdome, there are no boxes shipping today where it's a problem. In theory, it /could/ become a problem prior to future platforms with 5-level paging (extra PA bits) but I raised this very corner case a few months ago to keep an eye on it. Now this is public, I'll ping the vendors I can think of who might be impacted and ask them to further consider for future platforms.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 17, 2018 21:47 UTC (Fri) by willy (subscriber, #9762) [Link]

One of the bits that is inverted is the "Uncached" bit. The CPU will not attempt to speculatively bring a cache line in from an uncached page.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 7:41 UTC (Wed) by marcH (subscriber, #57642) [Link]

> PTEs are not zeroed out, they are bitwise inverted, so the information is still there.

So for decades hardware has tried really hard to hide from software crazy optimizations like instruction-level parallelism, out of order and of course speculative execution.

Now software is more and more hiding data from hardware to indirectly block some of that.

I just can't stop admiring the irony.

> Sorry if that wasn't clear.

It was all there, just not super mega obvious why.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 10:51 UTC (Wed) by roc (subscriber, #30627) [Link]

One very important issue that the article didn't discuss (maybe it deserves its own article?) --- by extracting secrets from Intel's architectural enclaves, the existence of these attacks has severely damaged the SGX remote attestation ecosystem.

> Was the remote attestation protocol affected by Foreshadow?
> Yes. Using Foreshadow we have successfully extracted the attestation keys, used by the Intel Quoting Enclave to vouch for the authenticity of enclaves. As a result, we were able to generate "valid" attestation quotes. Using these counterfeit quotes, we successfully "proved" to a remote party that a "genuine" enclave was running while, in fact, the code was running outside of SGX, under our complete control.
> Is SGX long-term storage affected by Foreshadow?
> Yes. As Foreshadow enables an attacker to extract SGX sealing keys, previously sealed data can be modified and re-sealed. With the extracted sealing key, an attacker can trivially calculate a valid Message Authentication Code (MAC), thus depriving the data owner of the ability to detect the modification.

The ecosystem has to be effectively rebooted by distrusting all attestations from enclaves running on non-patched processors, and all sealed data produced by those enclaves.

This attack also allows people to bypass Intel's licensing restrictions and launch arbitrary production enclaves on non-patched processors.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 9:28 UTC (Thu) by nix (subscriber, #2304) [Link]

> This attack also allows people to bypass Intel's licensing restrictions and launch arbitrary production enclaves on non-patched processors.

Can we keep this valuable feature while blocking the rest, I wonder? (No doubt we can't.)

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 11:06 UTC (Wed) by roc (subscriber, #30627) [Link]

Is it "allowed" for someone in the limited-disclosure club to tell us just how many speculative-execution vulnerabilities are currently in the pipeline?

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 11:15 UTC (Wed) by amw (subscriber, #29081) [Link]

No, but it might be possible to work it out by observing their behaviour from the outside :-)

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 23:38 UTC (Wed) by ms-tg (subscriber, #89231) [Link]

Applause - I see what you did there

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 12:13 UTC (Wed) by MarcB (subscriber, #101804) [Link]

I previously linked to https://www.heise.de/ct/artikel/Exclusive-Spectre-NG-Mult...

So far, it has proven correct about dates - May and August - as well as impact: "Specifically, an attacker could launch exploit code in a virtual machine (VM) and attack the host system from there... Intel's Software Guard Extensions (SGX), which are designed to protect sensitive data on cloud servers, are also not Spectre-safe".

So, this provides a lower boundary.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 20:09 UTC (Wed) by roc (subscriber, #30627) [Link]

Thanks for that!

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 13:26 UTC (Wed) by fuhchee (guest, #40059) [Link]

How many more such bugs are needed to undermine confidence in shared cloud servers, and have companies retrench to on-premises computing? It would be ironic if Intel were to benefit from their bugs by virtue of motivating a purchasing stampede for private data center hardware.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 14:02 UTC (Wed) by adam820 (subscriber, #101353) [Link]

They probably make more from the massive horizontal growth of cloud providers. They win either way, really; how many bugs are needed to make a shift to a different architecture (POWER, maybe?), or to stop using Intel-based arch's altogether?

Even better would be to spend more time with more researchers looking for this kind of stuff before these devices ever get released. Can't catch 'em all, though.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 23:32 UTC (Wed) by rahvin (guest, #16953) [Link]

Simple, move to AMD, their processors are significantly different and have suffered from very few of the Spectre attacks to which Intel has been vulnerable. From what I've seen it looks like Intel took shortcuts for performance reasons where AMD appears to have done it as securely as possible for the most part. A move to AMD also keeps you on x86 which is significantly cheaper than moving to something like Power.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 16:29 UTC (Wed) by zdzichu (subscriber, #17118) [Link]

If I were stocking a private DC, I would be buying AMD.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 21:52 UTC (Wed) by NAR (subscriber, #1313) [Link]

The existence of bugs doesn't cause problems on its own. The actual exploitation of bugs would cause problems. So unless a worm comes around that steals credit card information from all AWS instances, people won't leave the cloud.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 15, 2018 22:14 UTC (Wed) by marcH (subscriber, #57642) [Link]

Yeah, because we should be very careful to keep "top-secret" the credit card numbers, addresses, date of births and social security numbers that we... keep handing out left and right and that can all be bought for next to nothing on the dark web.

I get your actual point, it's just that you could have chosen a real example as opposed to propagating the American security myth that confuses login and password.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 18, 2018 22:55 UTC (Sat) by gus3 (guest, #61103) [Link]

Or you could keep your credit rating as poor as possible. Then if someone steals your ID & tries to get a loan using it, the joke's on them. (I would love to see the bank manager's attempts to stifle the laughter...)

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 19, 2018 21:36 UTC (Sun) by zlynx (guest, #2285) [Link]

A friend of mine had his identity stolen and it actually improved his credit. It was probably an illegal immigrant who just wanted an SSN to get a job, etc.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 15:51 UTC (Thu) by k8to (guest, #15413) [Link]

Keep in mind that "private clouds" often have real security concerns between their various workloads as well. When you're operating at significant scale you have a variety of problems. You have no idea what those various teams are doing, or what software they're running. They run a lot of third-party software too, and no one has really analyzed it in every detail to know what it does.

This is mostly sane to do if you set policies that control what software can access what data, but this type of exploit is about circumventing that.

So I don't really see going private cloud as a solution for this type of problem.

You could go "non-converged" and isolate workloads, but I don't think that's on the cards.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 23, 2018 10:20 UTC (Thu) by davidgerard (guest, #100304) [Link]

You can run AWS instances on separate hardware, or even dedicated hardware. It just costs more. (And some fancy AWS functions aren't available at the higher levels of separation, though I'm not sure on specifics.)

At this stage I think it'd be remarkable if IT in general goes back to in-house hosting from the cloud providers. Renting compute just makes IT management so ridiculously easier. Particularly when you get into Terraform etc, where you can literally program what infrastructure you have. I haven't been in a machine room for over five years now, and have no plans to go back.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 5:18 UTC (Thu) by jcm (subscriber, #18262) [Link]

There is a third alternative path to mitigation, and that is to completely refactor the virt stack on the host so that we either don't load secrets or scrub them when we do. The technology MS announced called HyperClean is essentially doing this in Hyper-V. To do it in Linux with a full host OS stack would be extremely tough, but if folks want to use HT with untrusted workloads, then it's one of the options worth considering in the longer term. The thing with flushing and limiting secrets is that it's probably going to come in useful any time there's some kind of information-disclosure vulnerability in the future.

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 23, 2018 21:08 UTC (Thu) by Wol (subscriber, #4433) [Link]

Just don't fall foul of the wi-fi bug, where the secret was flushed immediately after the first negotiation, and if an attacker intercepted this and retried the negotiation, they knew the secret had been reset to 0 ...

WHOOPS!

Cheers,
Wol

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 16, 2018 18:03 UTC (Thu) by k8to (guest, #15413) [Link]

Not quite on topic, but are there good resources describing any mitigation that is coming or has come for any of these problems? In other words, are there generations of CPU cores which won't be suffering from some types of problems like this in the future?

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 17, 2018 8:50 UTC (Fri) by marcH (subscriber, #57642) [Link]

Afraid we're not quite yet in the age of "Open Hardware"

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 18, 2018 17:31 UTC (Sat) by k8to (guest, #15413) [Link]

Nice joke, but I meant "this revision of these cpus have closed the door to this category of attack in this way".

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 20, 2018 17:45 UTC (Mon) by abatters (✭ supporter ✭, #6932) [Link]

The Cascade Lake server platform, shipping later this year, should contain the first round of Intel's hardware mitigations.

Anandtech: Intel at Hot Chips 2018: Showing the Ankle of Cascade Lake

Anandtech: An Interview with Lisa Spelman, VP of Intel’s DCG: Discussing Cooper Lake and Smeltdown

Meltdown strikes back: the L1 terminal fault vulnerability

Posted Aug 28, 2018 20:16 UTC (Tue) by loch (guest, #113644) [Link]

"...for pages that have been swapped out, the location in the swap area is stored in the PTE. In other cases, the data left in non-present PTEs is essentially random."

I'm a bit confused, what are the other cases? In what situation would a page be non-present, but also not in swap?

Non-present, not in swap

Posted Aug 28, 2018 21:10 UTC (Tue) by corbet (editor, #1) [Link]

File-backed pages are the most common example of pages that can be non-present but not in swap.


Copyright © 2018, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds