
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



>>> On 21.06.16 at 11:16, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 6/21/2016 4:22 PM, Jan Beulich wrote:
>>>>> On 21.06.16 at 09:45, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> On 6/20/2016 9:38 PM, Jan Beulich wrote:
>>>>>>> On 20.06.16 at 14:06, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>>>> However, if live migration is started (all PTE entries invalidated
>>>>> again), resolve_misconfig() would change both gfn A's and gfn B's
>>>>> p2m type back to p2m_ram_rw, which means the emulation of gfn B
>>>>> would fail.
>>>> Why would it? Changes to p2m_ram_logdirty won't alter
>>>> p2m_ioreq_server entries, and hence changes from it back to
>>>> p2m_ram_rw won't either.
>>> Oh, the above example is based on the assumption that
>>> resolve_misconfig() is extended to handle the p2m_ioreq_server case
>>> (see my "Suppose resolve_misconfig() is modified...").
>>> The code change could be something like below:
>>>
>>> @@ -542,10 +542,14 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>>>
>>> -                    if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>>> +                    if ( e.recalc )
>>>                      {
>>> -                         e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
>>> -                                     ? p2m_ram_logdirty : p2m_ram_rw;
>>> +                         if ( e.sa_p2mt == p2m_ioreq_server )
>>> +                             e.sa_p2mt = p2m_ram_rw;
>>> +                         else if ( p2m_is_changeable(e.sa_p2mt) )
>>> +                             e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
>>> +                                         ? p2m_ram_logdirty : p2m_ram_rw;
>>> +
>>>                           ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
>>>                      }
>>>                      e.recalc = 0;
>>>
>>> With changes like this, the p2m types of both gfn A and gfn B from
>>> the above example would be set to p2m_ram_rw if log-dirty is enabled.
>> Above modification would convert _all_ p2m_ioreq_server into
>> p2m_ram_rw, irrespective of log-dirty mode being active. Which
>> I don't think is what you want.
> 
> Well, this is another situation I found very interesting: without log-dirty,
> this approach actually works. :)

And what if the recalc flag gets set for some other reason?
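[The objection above can be seen in a stand-alone model of the proposed recalculation. This is only an illustrative sketch: the enum and function names below are invented stand-ins, not the real Xen types; the actual logic lives in resolve_misconfig() in xen/arch/x86/mm/p2m-ept.c.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the relevant p2m types. */
typedef enum {
    ram_rw,
    ram_logdirty,
    ioreq_server,
} p2m_type;

/* Model of the proposed change: on recalc, an ioreq_server entry is
 * folded back to ram_rw unconditionally - the log_dirty flag is never
 * consulted for that case, which is exactly the objection raised:
 * the conversion happens irrespective of log-dirty mode being active,
 * and irrespective of why the recalc flag was set. */
static p2m_type recalc_type(p2m_type old, bool log_dirty)
{
    if (old == ioreq_server)
        return ram_rw;                    /* unconditional reset */
    return log_dirty ? ram_logdirty : ram_rw;
}
```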

>>>> And then - didn't we mean to disable that part of XenGT during
>>>> migration, i.e. temporarily accept the higher performance
>>>> overhead without the p2m_ioreq_server entries? In which case
>>>> flipping everything back to p2m_ram_rw after (completed or
>>>> canceled) migration would be exactly what we want. The (new
>>>> or previous) ioreq server should attach only afterwards, and
>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>> deems necessary.
>>>>
>>> Well, I agree this part of XenGT should be disabled during migration.
>>> But in that case I think it's the device model's job to trigger the
>>> p2m type flipping (i.e. by calling HVMOP_set_mem_type).
>> I agree - this would seem to be the simpler model here, even though
>> (as George validly says) the more consistent model would be for the
>> hypervisor to do the cleanup. Such cleanup would imo be reasonable
>> only if there were an easy way for the hypervisor to enumerate all
>> p2m_ioreq_server pages.
> 
> Well, for me, the "easy way" means we should avoid traversing the whole
> EPT paging structure all at once, right?

Yes.

> I have not figured out any clean solution on the hypervisor side;
> that's one reason I'd like to leave this job to the device model side
> (another reason is that I do think the device model should take this
> responsibility).

Let's see if we can get George to agree.

Jan
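[The device-model-side cleanup being discussed could be sketched as below. This is only an illustration under stated assumptions: a real device model would invoke the hypercall via libxenctrl, e.g. xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw, gfn, 1); here set_mem_type() is a self-contained stub standing in for that call, and the tracked[]/ntracked bookkeeping of write-protected gfns is hypothetical.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define HVMMEM_ram_rw 0          /* stand-in for the real enum value */

static unsigned int flipped;     /* counts gfns flipped back to ram_rw */

/* Stub standing in for the HVMOP_set_mem_type hypercall
 * (xc_hvm_set_mem_type() in a real device model). */
static int set_mem_type(int type, uint64_t gfn, uint64_t nr)
{
    (void)type; (void)gfn;
    flipped += nr;
    return 0;
}

/* Before migration starts, walk the device model's own record of
 * p2m_ioreq_server gfns and flip each back to ram_rw - so the
 * hypervisor never needs to enumerate them itself. */
static int release_ioreq_pages(const uint64_t *tracked, size_t ntracked)
{
    for (size_t i = 0; i < ntracked; i++) {
        int rc = set_mem_type(HVMMEM_ram_rw, tracked[i], 1);
        if (rc)
            return rc;
    }
    return 0;
}
```

Keeping the tracking in the device model matches the argument above: the emulator already knows exactly which pages it asked to have write-protected, so no EPT-wide traversal is needed.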


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
