
Re: [Xen-devel] [PATCH 2/4] x86/mm: Implement common put_data_pages for put_page_from_l[23]e

  • To: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxx>
  • Date: Fri, 13 Dec 2019 13:58:38 +0000
  • Accept-language: en-GB, en-US
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 13 Dec 2019 13:58:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 2/4] x86/mm: Implement common put_data_pages for put_page_from_l[23]e

> On Dec 12, 2019, at 7:57 PM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 12/12/2019 17:32, George Dunlap wrote:
>> Both put_page_from_l2e and put_page_from_l3e handle having superpage
>> entries by looping over each page and "put"-ing each one individually.
>> As with putting page table entries, this code is functionally
>> identical, but for some reason different.  Moreover, there is already
>> a common function, put_data_page(), to handle automatically swapping
>> between put_page() (for read-only pages) or put_page_and_type() (for
>> read-write pages).
>> Replace this with put_data_pages() (plural), which does the entire
>> loop, as well as the put_page / put_page_and_type switch.
>> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
>> ---
>> NB that I've used the "simple for loop" version to make it easy to see
>> what's going on, rather than the "do { } while()" version which uses &
>> and compare to zero rather than comparing to the max.
> So while I think the change is an improvement, and is fine as
> presented, I'm -1 towards it overall.
> I am going to once again express my firm opinion that the remaining use
> of PV superpages do far more harm than good, and should be removed
> completely.
> We construct dom0 with some superpages for its p2m and/or initrd.
> This turned out to be the issue behind pv-l1tf breaking for dom0 (c/s
> 96f6ee15ad7c), and why we had to ship XSA-273 in an insecure
> configuration (and I'd like to point out that Xen is still in an
> insecure configuration by default.)
> There is a still-outstanding issue with grant map by linear address over
> a superpage where things malfunction, and the only reason this doesn't
> have an XSA is that it is believed to be restricted to dom0 only.
> Finally, an L3_SHIFT loop is a long running operation which we can't put
> a continuation into the middle of, and I bet there are fun things which
> can be done there in the general case.
> Seeing as PV guests decompress and free the initrd, and reposition the
> p2m, none of these superpages tend to survive post boot, so I am
> currently unconvinced that a perf improvement would be anything but
> marginal.
> I certainly don't think it is worth the complexity and corner cases it
> causes in Xen.

That all sounds reasonable, but I don’t really know anything about PV 
superpages in Xen at the moment (in fact, I had sort of wondered what this code 
was for).

I’d recommend taking this patch as-is, and “someone” can put it on their list 
to get rid of the PV superpages later.  The alternative is that I drop this patch 
from the series and “someone” puts the PV superpage removal on their to-do list.

(My mind is also involuntarily churning through options to regularize superpage 
promotion and demotion.)
