
Re: [Xen-devel] [PATCH 1/3] x86/HVM: __hvm_copy() should not write to p2m_ioreq_server pages



>>> On 13.11.18 at 11:47, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 13/11/18 10:13, Jan Beulich wrote:
>> Commit 3bdec530a5 ("x86/HVM: split page straddling emulated accesses in
>> more cases") introduced a hvm_copy_to_guest_linear() attempt before
>> falling back to hvmemul_linear_mmio_write(). This is wrong for the
>> p2m_ioreq_server special case. That change widened a pre-existing issue
>> though: Other writes to such pages also need to be failed (or forced
>> through emulation), in particular hypercall buffer writes.
>>
>> Reported-by: ??? <???@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3202,6 +3202,12 @@ static enum hvm_translation_result __hvm
>>          if ( res != HVMTRANS_okay )
>>              return res;
>>  
>> +        if ( (flags & HVMCOPY_to_guest) && p2mt == p2m_ioreq_server )
> 
> While this does address the issue, I'm concerned about hardcoding the
> behaviour here.
> 
> p2m_ioreq_server doesn't mean "I want shadowing properties". It has an
> as-yet unspecified per-ioreq-client meaning.

Why/how is this different from mmio_dm, which by the same reasoning
could be considered to have unspecified meaning for reads _and_
writes? Aren't we simply saying "consider this RAM for reads but
MMIO for writes"?

> We either want to rename p2m_ioreq_server to something which indicates
> its "allow-reads/emulate writes" behaviour, or design a way for the
> ioreq client to specify the behaviour it wants.

Renaming might be worthwhile, but is orthogonal imo. IIRC we already
struggled back then to find a really suitable (and not overly long)
name.

Jan


