
Re: [Xen-devel] XenGT is still regressed on master



>>> On 08.03.19 at 16:18, <igor.druzhinin@xxxxxxxxxx> wrote:
> On 08/03/2019 14:58, Jan Beulich wrote:
>>>>> On 08.03.19 at 15:25, <igor.druzhinin@xxxxxxxxxx> wrote:
>>> On 08/03/2019 11:55, Jan Beulich wrote:
>>>
>>> I like the latter suggestion more. It seems less invasive and less
>>> prone to regressions, and I'd like to try to implement it. Although I
>>> think the hypervisor check should be more general: e.g. if an IOREQ is
>>> in progress, don't try to go through the fast path, and instead
>>> re-enter the IOREQ completion path.
>>>
>>> What if we just check !hvm_ioreq_needs_completion() before returning
>>> X86EMUL_OKAY i.e. fall through to the bad_gfn_to_mfn case if that check
>>> fails as Paul suggested?
>> 
>> I didn't see such a suggestion, I think, and I'm afraid it would still not
>> be correct in the general case. As said before, getting back
>> HVMTRANS_okay means the write did actually complete, and no
>> second attempt at the write should be made.
> 
> What if we don't do hvm_copy() in that case and instead go to the
> slow path directly - would that be better?

Ah yes, that looks like a better approach (provided Paul also gives it
his okay). There being an ioreq in flight is a fair indication that we will
want to enter hvmemul_linear_mmio_{read,write}().

One caveat though: What do you suggest doing about page-straddling
accesses? The commit introducing these functions was, after all, to
deal with this special case. The in-flight request we observe there
could be for either the leading or the trailing part of the access
that's being re-executed. While these could perhaps be told apart by
looking at the low address bits, what do you do for multi-part (but
perhaps well-aligned) accesses like cmps*, vgather*, or vscatter*?

>> If anything I could see the code transition from STATE_IORESP_READY
>> to STATE_IOREQ_NONE in that case, with a comment thoroughly
>> explaining why that's needed.
> 
> I don't think it's a good idea to transition IOREQ state in different
> places - IOREQ consumption currently happens in the hvmemul_do_io()
> function, and we need to re-enter it to finally finish IOREQ processing.
> If we want to change that, the entire infrastructure needs to be
> re-architected.

Sure - hence my reference to a justifying comment. If at all possible
we should indeed avoid this.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
