
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



>>> On 22.06.16 at 11:16, <george.dunlap@xxxxxxxxxx> wrote:
> On 22/06/16 07:39, Jan Beulich wrote:
>>>>> On 21.06.16 at 16:38, <george.dunlap@xxxxxxxxxx> wrote:
>>> On 21/06/16 10:47, Jan Beulich wrote:
>>>>>>>> And then - didn't we mean to disable that part of XenGT during
>>>>>>>> migration, i.e. temporarily accept the higher performance
>>>>>>>> overhead without the p2m_ioreq_server entries? In which case
>>>>>>>> flipping everything back to p2m_ram_rw after (completed or
>>>>>>>> canceled) migration would be exactly what we want. The (new
>>>>>>>> or previous) ioreq server should attach only afterwards, and
>>>>>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>>>>>> deems necessary.
>>>>>>>>
>>>>>>> Well, I agree this part of XenGT should be disabled during migration.
>>>>>>> But in that case I think it's the device model's job to trigger the
>>>>>>> p2m type flipping (i.e. by calling HVMOP_set_mem_type).
>>>>>> I agree - this would seem to be the simpler model here, even though
>>>>>> (as George validly says) the more consistent model would be for the
>>>>>> hypervisor to do the cleanup. Such cleanup would imo be reasonable
>>>>>> only if there were an easy way for the hypervisor to enumerate all
>>>>>> p2m_ioreq_server pages.
>>>>>
>>>>> Well, for me, the "easy way" means we should avoid traversing the
>>>>> whole EPT paging structure all at once, right?
>>>>
>>>> Yes.
>>>
>>> Does calling p2m_change_entry_type_global() not satisfy this requirement?
>> 
>> Not really - that addresses the "low overhead" aspect, but not the
>> "enumerate all such entries" one.
> 
> I'm sorry, I think I'm missing something here.  What do we need the
> enumeration for?

We'd need that if we were to do the cleanup in the hypervisor (as
we can't rely on all p2m entry re-calculation having happened by
the time a new ioreq server registers for the type).

>>> Well I had in principle already agreed to letting this be the interface
>>> on the previous round of patches; we're having this discussion because
>>> you (Jan) asked about what happens if an ioreq server is de-registered
>>> while there are still outstanding p2m types. :-)
>> 
>> Indeed. Yet so far I understood that you didn't want de-registration
>> to both skip the cleanup itself and fail if there are outstanding
>> entries.
> 
> No, I think regarding de-registering while there were outstanding
> entries, I said the opposite -- that there's no point in failing the
> de-registration, because a poorly-behaved ioreq server may just ignore
> the error code and exit anyway.  Although, thinking about it again, I
> suppose that an error code would allow a buggy ioreq server to know that
> it had screwed up somewhere.

Not exactly, I think: The failed de-registration ought to lead to failure
of an attempt to register another ioreq server (or the same one again),
which should make the issue quickly noticeable.

> But either way, from the "robustness"
> perspective, the result would almost certainly be a dangling ioreq
> server registration *in addition* to the dangling p2m entries; so the
> difference is just an interface tweak to aid in debugging, not worth
> insisting on given the required work.

So yes, observable behavior wise there shouldn't be any difference.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
