Re: [Xen-devel] further post-Meltdown-bad-aid performance thoughts
>>> On 19.01.18 at 18:00, <george.dunlap@xxxxxxxxxx> wrote:
> On 01/19/2018 04:36 PM, Jan Beulich wrote:
>>>>> On 19.01.18 at 16:43, <george.dunlap@xxxxxxxxxx> wrote:
>>> So what if instead of trying to close the "windows", we made it so that
>>> there was nothing through the windows to see?  If no matter what the
>>> hypervisor speculatively executed, nothing sensitive was visible except
>>> what a vcpu was already allowed to see,
>> 
>> I think you didn't finish your sentence here, but I also think I
>> can guess the missing part.  There's a price to pay for such an
>> approach though - iterating over domains, or vCPU-s of a
>> domain (just as an example) wouldn't be simple list walks
>> anymore.  There are certainly other things.  IOW - yes, an
>> approach like this seems possible, but with all the lost
>> performance I think we shouldn't go overboard with further
>> hiding.
> 
> Right, so the next question: what information *from other guests* is
> sensitive?
> 
> Obviously the guest registers are sensitive.  But how much of the
> information in the vcpu struct that we actually need to have "to hand" is
> actually sensitive information that we need to hide from other VMs?

None, I think. But that's not the main aspect here. struct vcpu
instances come and go, which would mean we'd have to permanently
update what is or is not being exposed in the page tables used.
This, while solvable, is going to be a significant burden in terms
of synchronizing page tables (if we continue to use per-CPU ones)
and/or TLB shootdown. Whereas if only the running vCPU's structure
(and its struct domain) are exposed, no such synchronization is
needed (things would simply be updated during context switch).

Jan
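
A minimal standalone C sketch of the scheme Jan describes (only the
running vCPU's structure and its struct domain reachable through a
per-CPU window that is repointed at context switch) might look like
the following. All type and function names here, such as struct
percpu_window and context_switch_expose(), are hypothetical and not
Xen code; the per-CPU pointers merely stand in for what a per-CPU
page table would map and therefore leave speculatively reachable.

/*
 * Standalone sketch, not Xen code: a single per-CPU "window" through
 * which only the currently running vCPU's state (and its domain's
 * state) is reachable.  The window is repointed at context switch.
 */
#include <stdio.h>
#include <stdint.h>

struct domain_state {
    unsigned int domid;
    /* ... other per-domain data a vCPU legitimately needs ... */
};

struct vcpu_state {
    unsigned int vcpu_id;
    struct domain_state *domain;
    uint64_t gpr[16];           /* guest registers: clearly sensitive */
};

/*
 * In a real implementation these would be fixed linear addresses
 * backed by per-CPU page tables; here plain per-CPU pointers stand in
 * for "what is mapped and therefore speculatively reachable".
 */
struct percpu_window {
    struct vcpu_state   *cur_vcpu;    /* only this vCPU is visible ... */
    struct domain_state *cur_domain;  /* ... plus its owning domain    */
};

static struct percpu_window this_cpu_window;

/* Called on every context switch: swap what the window exposes. */
static void context_switch_expose(struct vcpu_state *next)
{
    /*
     * No cross-CPU synchronization or TLB shootdown is needed because
     * the mapping is strictly per-CPU and only ever changed here.
     */
    this_cpu_window.cur_vcpu   = next;
    this_cpu_window.cur_domain = next->domain;
}

int main(void)
{
    struct domain_state d1 = { .domid = 1 };
    struct vcpu_state v0 = { .vcpu_id = 0, .domain = &d1 };

    context_switch_expose(&v0);
    printf("CPU now exposes d%u v%u only\n",
           this_cpu_window.cur_domain->domid,
           this_cpu_window.cur_vcpu->vcpu_id);
    return 0;
}

Because the window is strictly per-CPU and only modified on that CPU's
own context-switch path, no page-table synchronization or TLB
shootdown is required when vCPUs come and go, which is the trade-off
Jan contrasts with exposing all struct vcpu instances.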