
Re: [Xen-devel] [PATCH v8 1/9] x86/xpti: avoid copying L4 page table contents when possible



On 22/04/18 18:39, Tim Deegan wrote:
> At 19:11 +0200 on 21 Apr (1524337893), Juergen Gross wrote:
>> On 21/04/18 15:32, Tim Deegan wrote:
>>> At 09:44 +0200 on 19 Apr (1524131080), Juergen Gross wrote:
>>>> Another alternative would be to pass another flag to the callers to
>>>> signal the need for a flush. This would require quite some modifications
>>>> to the shadow code, which I'd like to avoid, though. OTOH this way we
>>>> could combine flushing the tlb and the root page tables. Tim, any preferences?
>>>
>>> This sounds like a promising direction, but it should be doable without major
>>> surgery to the shadow code.  The shadow code already leaves old sl4es
>>> visible (in TLBs) when it's safe to do so, so I think the right place
>>> to hook this is on the receiving side of the TLB flush IPI.  IOW as
>>> long as:
>>>  - you copy the L4 on context switch; and
>>>  - you copy it when the TLB flush IPI is received
>>> then you can rely on the existing TLB flush mechanisms to do what you need.
>>> And shadow doesn't have to behave differently from 'normal' PV MM.
>>
>> It is not so easy. The problem is that e.g. a page fault will flush the
>> TLB entry for the page in question, but it won't lead to the L4 being
>> copied.
> 
> Oh yes, I see; thanks for the explanation.  It might be worth copying
> what the hardware does here, and checking/propagating the relevant l4e
> in the PV pagefault handler.
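
For reference, I read that suggestion as something along these lines in
the PV #PF path (just a sketch, not a patch: the helper name and the
per-cpu root_pgt accessor are assumptions on my side):

/* Sketch: on #PF, refresh the single L4 slot covering the faulting address. */
static bool propagate_faulting_l4e(struct vcpu *v, unsigned long addr)
{
    unsigned int slot = l4_table_offset(addr);
    l4_pgentry_t *gl4t =
        map_domain_page(pagetable_get_mfn(v->arch.guest_table));
    l4_pgentry_t *rpt = this_cpu(root_pgt);   /* assumed per-cpu XPTI copy */
    bool copied = false;

    if ( l4e_get_intpte(gl4t[slot]) != l4e_get_intpte(rpt[slot]) )
    {
        /* Refresh just the stale slot and let the faulting access retry. */
        rpt[slot] = gl4t[slot];
        copied = true;
    }

    unmap_domain_page(gl4t);
    return copied;
}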

While that is an interesting option in the long run, I'm not sure I want
to go this route for 4.11. There might be nasty corner cases, and I think
such a lazy approach is much more error prone than explicitly updating
the L4 table on the affected cpus whenever an entry is modified. After
all, this is the way we are doing it for the non-shadow case, too.
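
Just to illustrate what I mean by the non-shadow handling (a rough
sketch, assuming a flush flag along the lines of FLUSH_ROOT_PGTBL is
added for this purpose; flag and helper names are not final):

/* Caller side: whoever modifies a live L4 entry kicks the other cpus. */
static void update_live_l4e(l4_pgentry_t *pl4e, l4_pgentry_t nl4e,
                            const cpumask_t *mask)
{
    *pl4e = nl4e;
    /* One IPI does both the TLB flush and the root page table resync. */
    flush_mask(mask, FLUSH_TLB | FLUSH_ROOT_PGTBL);
}

On the receiving side the flush IPI handler would then copy the guest L4
slots into its private root page table before returning.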

I think we should either call flush_mask() explicitly in
shadow_set_l4e() or propagate the need for the flush up to the caller.
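
For the first option that would boil down to something like the
following at the end of shadow_set_l4e() (sketch only; the xpti flag and
the dirty cpumask field are meant as placeholders):

    /* ... new sl4e has been written and refcounts are updated ... */
    if ( d->arch.pv_domain.xpti )
        /* Make all cpus using this L4 refresh their root page tables. */
        flush_mask(d->dirty_cpumask, FLUSH_ROOT_PGTBL);

The second option would instead return a "flush needed" indication and
leave the flush_mask() call to the caller, which could then merge it
with the TLB flush it is doing anyway.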

In case you want to discuss this problem in more detail, I think we
should do that on IRC to speed things up a little bit.


Juergen

