
Re: [Xen-devel] [PATCH 0/2] MMIO emulation fixes



>>> On 30.08.18 at 10:10, <olaf@xxxxxxxxx> wrote:
> On Wed, Aug 29, Olaf Hering wrote:
> 
>> On Mon, Aug 13, Jan Beulich wrote:
>> 
>> > And hence the consideration of mapping in an all zeros page
>> > instead. This is because of the way __hvmemul_read() /
>> > __hvm_copy() work: The latter doesn't tell its caller how many
>> > bytes it was able to read, and hence the former considers the
>> > entire range MMIO (and forwards the request for emulation).
>> > Of course all of this is an issue only because
>> > hvmemul_virtual_to_linear() sees no need to split the request
>> > at the page boundary, due to the balloon driver having left in
>> > place the mapping of the ballooned out page.
> 
> So how is this bug supposed to be fixed?
> 
> What I see in my tracing is that __hvmemul_read gets called with
> gla==ffff880223bffff9/bytes==8. Then hvm_copy_from_guest_linear fills
> the buffer from gpa 223bffff9 with data, but finally it returns
> HVMTRANS_bad_gfn_to_mfn, which it got from a failed get_page_from_gfn
> for the second page.
> 
> Now things go downhill. hvmemul_linear_mmio_read is called, which calls
> hvmemul_do_io/hvm_io_intercept. That returns X86EMUL_UNHANDLEABLE. As a
> result hvm_process_io_intercept(null_handler) is called, which
> overwrites the return buffer with 0xff.

There are a number of options (besides fixing the issue on the Linux
side, which I'm still not entirely convinced is the best approach):
One is Paul's idea of making null_handler actually retrieve RAM
contents when (part of) the access touches RAM. Another might be to
make __hvm_copy() report back which parts of the access could be
read/written (so that MMIO emulation would only be triggered for
the missing piece). A third might be to make the splitting of accesses
more intelligent in __hvmemul_read().

I mean to look into this in some more detail later today, unless
a patch has appeared by then from e.g. Paul.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
