
Re: [Xen-devel] [PATCH v3 7/7] xen/x86: use PCID feature



>>> On 26.03.18 at 08:49, <jgross@xxxxxxxx> wrote:
> On 23/03/18 16:58, Jan Beulich wrote:
>>>>> On 23.03.18 at 15:11, <jgross@xxxxxxxx> wrote:
>>> On 23/03/18 14:46, Jan Beulich wrote:
>>>> So in the end the question is: Why not use just two PCIDs, and
>>>> allow global pages just like we do now, with the added benefit
>>>> that we no longer need to flush Xen's global TLB entries just
>>>> because we want to get rid of PV guest user ones.
>>>
>>> I can't see how that would work without either needing some more TLB
>>> flushes in order to prevent stale TLB entries or losing the Meltdown
>>> mitigation.
>>>
>>> Which %cr3/PCID combination should be used in hypervisor, guest kernel
>>> and guest user mode?
>> 
>> Xen would run with PCID 0 (and full Xen mappings) at all times
>> (except early entry and late exit code of course). The guest would
>> run with PCID 1 (and minimal Xen mappings) at all times. The switch
>> of PCID eliminates the need for flushes on the way out and back in.
> 
> You still need the guest kernel mappings flushed from the TLB when
> switching to user mode, right?

Of course.
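
To illustrate what I have in mind, here is a rough sketch (the names
are made up for illustration, this is not the actual Xen code): with
CR4.PCIDE set, CR3 bits 11:0 carry the PCID, and bit 63 set on a CR3
load tells the CPU to keep the new PCID's TLB entries. Xen<->guest
switches would change the PCID with that bit set (no flush), while the
guest kernel->user switch re-loads CR3 with the bit clear, dropping the
non-global kernel entries but keeping the global user ones:

/* Sketch only - names are illustrative, not the actual Xen ones. */
#define NOFLUSH     (1UL << 63)  /* CR3 bit 63: keep the new PCID's entries */
#define PCID_XEN    0UL          /* full Xen mappings */
#define PCID_GUEST  1UL          /* minimal Xen mappings */

static inline void write_cr3(unsigned long val)
{
    asm volatile ( "mov %0, %%cr3" :: "r" (val) : "memory" );
}

static inline unsigned long make_cr3(unsigned long mfn, unsigned long pcid)
{
    return (mfn << 12) | pcid;   /* PCID lives in CR3 bits 11:0 */
}

/* Entering Xen resp. returning to the guest: different PCID, no flush. */
static void switch_to_xen(unsigned long xen_mfn)
{
    write_cr3(make_cr3(xen_mfn, PCID_XEN) | NOFLUSH);
}

static void switch_to_guest_kernel(unsigned long kern_mfn)
{
    write_cr3(make_cr3(kern_mfn, PCID_GUEST) | NOFLUSH);
}

/* Guest kernel -> guest user: same PCID, bit 63 clear, so the non-global
 * (kernel) entries of PCID 1 are dropped while global user entries stay. */
static void switch_to_guest_user(unsigned long user_mfn)
{
    write_cr3(make_cr3(user_mfn, PCID_GUEST));
}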

>>> Which pages would be global?
>> 
>> Use of global pages would continue to be as today: Xen has some,
>> and guest user mode has some. Of course it is quite possible that
>> the use of global pages with a single guest PCID is still worse than
>> no global pages with two guest PCIDs, but that's a separate step
>> to take (and measure) imo.
> 
> But Xen's global pages would either make it vulnerable with regard to
> Meltdown, or you'd need a TLB flush again when switching between Xen and
> guest, making all the PCID stuff moot.

No - the guest would run with PCID 1 active, and global Xen TLB
entries would exist for PCID 0 only.

> So let's compare the possibilities:
> 
> My approach:
> - no global pages
> - 4 different PCIDs
> - no TLB flushes needed when switching between Xen and guest
> - no TLB flushes needed when switching between guest user and kernel
> - flushing of single pages (guest or Xen) rather simple (4 INVPCIDs)
> - flushing of complete TLB via 1 INVPCID
> 
> 2 PCIDs (Xen and guest), keeping guest user pages as global pages:
> - Xen can't use global pages - the global bit must be handled dynamically
>   for Xen pages (or do we want to drop global pages e.g. for AMD, too?)

As per above - I don't see why Xen couldn't use global pages.
The option of using them is part of why I'm wondering whether
this might be worth looking into.

> - 2 PCIDs
> - no TLB flushes needed when switching between Xen and guest
> - when switching from guest kernel to guest user the kernel pages must
>   be flushed from TLB
> - flushing of single guest user pages needs 2 changes of %cr3 and 2
>   INVLPGs, switch code must be mapped to guest page tables
> - flushing of complete TLB via 1 INVPCID
> 
> So the advantage of the 2-PCID solution is the single TLB entry for
> guest user pages, compared to 2 entries for guest user pages accessed by
> the guest kernel or Xen.
> 
> The disadvantages are the flushing of guest kernel pages when executing
> user code, the more complicated flushing of single user pages, and the
> dynamic handling of Xen's global bit.

Right. In order to make forward progress here I think we should
shelve the discussion on the 2-PCID alternative for now. What I'd
like to ask for as a change to your current approach is to use
PCID 0 for Xen, rather than running Xen with PCID 2 or 3 when
PCIDs are enabled and (implicitly) with PCID 0 when they're
disabled. Or alternatively, don't use PCID 0 at all when PCIDs are
enabled. I'm simply worried about us overlooking a case where PCID
0 TLB entries may be left in place (when switching between PCIDs
enabled and PCIDs disabled) when they should have been flushed,
opening back up a Meltdown-like attack window.
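
For completeness, a rough sketch of the INVPCID operations the
comparison above refers to (again illustrative only, not the actual
Xen helpers): type 0 invalidates a single linear address in a single
PCID - hence one invocation per PCID a page may be cached under - and
type 2 is the all-context flush which also drops global entries:

#include <stdint.h>

struct invpcid_desc {
    uint64_t pcid;   /* PCID in bits 11:0 */
    uint64_t addr;   /* linear address (used by type 0 only) */
};

static inline void invpcid(unsigned long type, uint64_t pcid, uint64_t addr)
{
    struct invpcid_desc desc = { .pcid = pcid, .addr = addr };

    asm volatile ( "invpcid %0, %1"
                   :: "m" (desc), "r" (type) : "memory" );
}

/* Type 0: flush one linear address in one PCID.  Flushing a single page
 * that may be cached under several PCIDs needs one call per such PCID. */
static inline void flush_one(uint64_t pcid, const void *va)
{
    invpcid(0, pcid, (uint64_t)(unsigned long)va);
}

/* Type 2: all-context invalidation, dropping global entries as well. */
static inline void flush_all(void)
{
    invpcid(2, 0, 0);
}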

Jan

