
Re: [Xen-devel] [V9 PATCH 6/8] pvh dom0: Add and remove foreign pages



>>> On 17.04.14 at 14:36, <tim@xxxxxxx> wrote:
> At 07:50 +0100 on 17 Apr (1397717440), Jan Beulich wrote:
>> >>> On 17.04.14 at 03:37, <mukesh.rathor@xxxxxxxxxx> wrote:
>> > On Wed, 16 Apr 2014 17:00:35 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > 
>> >> >>> On 16.04.14 at 02:12, <mukesh.rathor@xxxxxxxxxx> wrote:
>> >> > In this patch, a new function, p2m_add_foreign(), is added
>> >> > to map pages from foreign guest into dom0 for domU creation.
>> >> > Such pages are typed p2m_map_foreign.  Note, it is the nature
>> >> > of such pages that a refcnt is held during their stay in the p2m.
>> >> > The refcnt is added and released in the low level ept function
>> >> > atomic_write_ept_entry for convenience.
>> >> > Also, p2m_teardown is changed to add a call to allow for the
>> >> > cleanup of foreign pages during its destruction.
>> >> > Finally, get_pg_owner is changed to allow pvh to map foreign
>> >> > mappings, and is made public to be used by p2m_add_foreign().
>> >> 
>> >> Do you really need to do this at this low a layer? I'm afraid that's
>> >> going to be rather limiting when wanting to use the bits used for the
>> >> p2m type for different purposes in intermediate table entries. I.e. I
>> >> think this special casing should only be done for leaf entries, and
>> >> hence at least one layer further up, or introduce something named
>> >> e.g. atomic_write_epte_leaf().
>> > 
>> > Well, Tim and I went back and forth several times on this over the
>> > last several months (you were cc'd :) ). 
>> 
>> I know, but having worked a lot on the P2M code recently my
>> perspective may have changed.
> 
> [I'm assuming the objection here is to having the refcounts updated
> in atomic_write_ept_entry, which was the change I requested.]
> My opinion is still very strongly that reference counting must be done
> when the entries change.  Trying to get this kind of thing right in
> the callers is an _enormous_ PITA, as I learned working on the shadow
> pagetables.  It would get very messy (see, e.g. the myriad places
> where p2m op callers individually check for paged/shared entries) and
> it'd be nigh impossible to debug where in several hours of operation
> something changed a p2m entry from foreign to something else without
> dropping a page ref.
> 
> That said, it should be easy enough only to refcount on leaf entries,
> right?  I can't see how that would be incompatible with the
> intermediate-node changes that Jan is working on.

Right - keeping the macro as is and introducing a derived function to
handle the extra requirements on leaf entries would seem quite okay,
so long as error propagation can be done properly.

>> Plus
>> their count multiplies by the number of domains managed (arguably
>> that count should be greater than 1 only for non-Dom0 control
>> domains, and Dom0 won't make it here anyway - which raises the
>> question whether this change is of any practical use under the
>> topic of the patch series, i.e. enabling PVH Dom0).
> 
> I think this is my fault again -- under the same rules of refcounting
> hygiene that demand the ref get/put go right beside the datastructure
> update, I asked Mukesh to make sure that the teardown operation
> correctly dismantled its refcounts.  

Which is still fine as a concept; it's just the way it is being carried
out that is a no-go imo. Yet since you can't get at each individual
page table leaf without traversing the page tables, and since
traversing the page tables is expensive, finding a different solution
would still seem preferable. One option besides the one mentioned
before might be to simply change the whole guest address space to PoD
(ideally at 1G granularity, if only PoD supported that) - that ought
to cause the references to be dropped via the mechanism above.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

