
Re: [Xen-devel] XenGT is still regressed on master



>>> On 08.03.19 at 14:37, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 08/03/2019 11:55, Jan Beulich wrote:
>>>>> On 07.03.19 at 13:46, <igor.druzhinin@xxxxxxxxxx> wrote:
>>> We've noticed that there is still a regression with the p2m_ioreq_server
>>> P2M type on master. Since commit 940faf0279 (x86/HVM: split page
>>> straddling emulated accesses in more cases) the behavior of write and
>>> rmw instruction emulation has changed (possibly unintentionally) such
>>> that it might not re-enter hvmemul_do_io() on IOREQ completion, which
>>> is required in order to avoid breaking the IOREQ state machine. What
>>> we're seeing instead is a domain crash here:
>>>
>>> static int hvmemul_do_io(
>>>     bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int
>>> ...
>>>     case STATE_IORESP_READY:
>>>         vio->io_req.state = STATE_IOREQ_NONE;
>>>         p = vio->io_req;
>>>
>>>         /* Verify the emulation request has been correctly re-issued */
>>>         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
>>>              (p.addr != addr) ||
>>>              (p.size != size) ||
>>>              (p.count > *reps) ||
>>>              (p.dir != dir) ||
>>>              (p.df != df) ||
>>>              (p.data_is_ptr != data_is_addr) ||
>>>              (data_is_addr && (p.data != data)) )
>>>             domain_crash(currd);
>>>
>>> This happens while processing the next IOREQ, after the previous one
>>> wasn't completed properly because the p2m type was changed in the
>>> IOREQ handler by the XenGT kernel module. As a result the emulation
>>> hit the HVMTRANS_okay case in the linear_write() helper on the way
>>> back and didn't re-enter hvmemul_do_io().
>> Am I to take this to mean that the first time round we take the
>> HVMTRANS_bad_gfn_to_mfn exit from __hvm_copy() due to finding
>> p2m_ioreq_server, but in the course of processing the request the
>> page's type gets changed and hence we don't take that same path
>> the second time?
> 
> I believe so, yes.
> 
>> If so, my first reaction is to blame the kernel
>> module: Machine state (of the VM) may not change while processing
>> a write, other than to carry out the _direct_ effects of the write. I
>> don't think a p2m type change is supposed to be occurring as a side
>> effect.
> 
> This is an especially unhelpful point of view (and an unreasonable one,
> IMO), as you pushed for this interface over the alternatives which were
> originally proposed.

I have to admit that I don't recall any details of that discussion, and
hence also whether all the implications (including the behind-our-
back change of p2m type) were actually both understood and put
on the table. Nor do I recall what the alternatives were.

> Responding to an emulation request necessarily involves making state
> changes in the VM.  When the state change in question is around the
> tracking of shadow pagetables, the change is non-negotiable as far as
> the higher level functionality is concerned.

Bare hardware has no concept of p2m types, so whether a change like
this is acceptable (or even necessary) is at least questionable. The
"physical" properties of memory, after all, don't normally change at
all while a system is up. We're bending the rules anyway.

>>> The bug could be mitigated by the following patch, but since it's you
>>> who introduced this helper you might have better ideas on how to avoid
>>> the problem here in a clean way.
>>>
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>>>      {
>>>          unsigned int offset, part1;
>>>
>>> -    case HVMTRANS_okay:
>>> -        return X86EMUL_OKAY;
>>> -
>>>      case HVMTRANS_bad_linear_to_gfn:
>>>          x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>>>          return X86EMUL_EXCEPTION;
>>>
>>> +    case HVMTRANS_okay:
>>>      case HVMTRANS_bad_gfn_to_mfn:
>>>          offset = addr & ~PAGE_MASK;
>>>          if ( offset + bytes <= PAGE_SIZE )
>> This is (I'm inclined to say "of course") not an appropriate change in
>> the general case: getting back HVMTRANS_okay means the write has
>> already been carried out, and hence it must not be carried out a
>> second time.
> 
> I agree - this isn't a viable fix but it does help to pinpoint the problem.
> 
>> I take it that changing the kernel driver would at best be sub-optimal
>> though, so a hypervisor-only fix would be better.
> 
> This problem isn't specific to p2m_ioreq_server.  A guest which balloons
> in a frame that is currently the target of a pending MMIO emulation
> will hit the same issue.

Hmm, good point.

> This is a general problem with the ioreq response state machine
> handling.  My long-term plans for emulation changes would fix this, but
> they definitely aren't a viable short-term fix.
> 
> The only viable fix I see in the short term is to mark the ioreq
> response as done before re-entering the emulation, so that in the cases
> where we do take a different path, a stale ioreq isn't left in place.
> But I fully admit that I haven't spent too long thinking through the
> implications of this, and whether it is possible in practice.
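
If I understand the suggestion correctly, it would amount to something
along these lines (a purely illustrative, untested sketch; the helper
name and the cut-down structures are made up here, only the state
constant mirrors the code quoted above):

/* Illustrative sketch only - not actual Xen code. */
#include <stdint.h>

enum ioreq_state { STATE_IOREQ_NONE, STATE_IORESP_READY };

struct ioreq {
    enum ioreq_state state;
    uint64_t data;
};

struct vcpu_io {
    struct ioreq io_req;
};

/*
 * Hypothetical completion helper: latch the device model's response and
 * free the request slot _before_ the emulator is re-entered, so that a
 * re-execution which no longer reaches hvmemul_do_io() (e.g. because
 * the p2m type changed underneath us) doesn't leave a stale
 * IORESP_READY entry for the next emulation to trip over.
 */
uint64_t complete_ioreq_before_reemulation(struct vcpu_io *vio)
{
    uint64_t latched = vio->io_req.data;

    vio->io_req.state = STATE_IOREQ_NONE;

    return latched;
}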

I can't see how this would help without also the buffering patches of
mine that you want me to rewrite basically from scratch: re-execution
may occur not just once but multiple times, because memory accesses
wider than 8 bytes can't be sent to qemu. Our view of the (virtual)
machine has to remain consistent throughout the emulation of a single
insn, to guarantee that the same paths get taken on every re-execution
run (or, more precisely, on the initial parts of it that have already
been executed at least once). Any change we make (whether requested by
the guest from another vCPU or by the controlling domain) has to leave
any in-flight insn emulation unaffected.
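
Just to make the splitting concrete, here is a schematic, stand-alone
illustration (not the actual emulation code; only the 8-byte limit
reflects the real constraint, and the address in main() is arbitrary):

/* Schematic illustration only - not Xen code. */
#include <stdio.h>
#include <stdint.h>

#define MAX_IOREQ_BYTES 8  /* widest access that can be handed to qemu */

/*
 * A single guest access of 'bytes' at 'addr' gets broken up into
 * chunks of at most MAX_IOREQ_BYTES.  Every chunk that has to go out
 * to the device model suspends and later re-executes the insn from
 * the beginning, replaying the chunks already completed.
 */
static void emulate_wide_access(uint64_t addr, unsigned int bytes)
{
    unsigned int done, chunk, run = 0;

    for ( done = 0; done < bytes; done += chunk )
    {
        chunk = bytes - done;
        if ( chunk > MAX_IOREQ_BYTES )
            chunk = MAX_IOREQ_BYTES;

        printf("re-execution run %u: %u bytes at %#llx\n",
               ++run, chunk, (unsigned long long)(addr + done));
    }
}

int main(void)
{
    /* E.g. a 16-byte (SSE-width) store to emulated MMIO: two chunks,
     * hence (at least) two passes through the insn emulation, all of
     * which must take identical paths up to the point of suspension. */
    emulate_wide_access(0xf0000000UL, 16);

    return 0;
}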

Doing what you suggest would paper over one aspect of the problem, but
we'd be liable to find other, similar issues down the road.

Jan

