
Re: [Xen-devel] [PATCH for 4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context



On 03/04/17 11:10, Jan Beulich wrote:
>>>> On 31.03.17 at 21:50, <andrew.cooper3@xxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -5410,6 +5410,7 @@ int ptwr_do_page_fault(struct vcpu *v, unsigned long 
>> addr,
>>          .ctxt = {
>>              .regs = regs,
>>              .vendor = d->arch.cpuid->x86_vendor,
>> +            .lma = true,
>>              .addr_size = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
>>              .sp_size   = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
>>          },
>> @@ -5564,6 +5565,7 @@ int mmio_ro_do_page_fault(struct vcpu *v, unsigned 
>> long addr,
>>      struct x86_emulate_ctxt ctxt = {
>>          .regs = regs,
>>          .vendor = v->domain->arch.cpuid->x86_vendor,
>> +        .lma = true,
> Hmm, both of these are correct from Xen's pov, but potentially
> wrong from the guest's. Since system segments aren't being
> dealt with here, I think this difference is benign, but I think it
> warrants at least a comment. If we ever meant to emulate
> LLDT, this would become an active problem, as the guest's view
> on gate descriptor layout would differ from that resulting from
> setting .lma to true here. Same for emulate_privileged_op() then.

As discovered in the meantime, things like LLDT/LTR and call gates are
far more complicated.

Still, setting LMA to true here is the right thing to do, as it is an
accurate statement of processor state.  Despite the level of 32bit
compatibility provided, a 32bit PV guest isn't entirely isolated from
the fact that Xen itself is 64bit.
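
To make the distinction concrete, here is a minimal sketch (assembled
from the hunks quoted in this patch, not a complete caller) of how .lma
is chosen on the two kinds of path: PV callers hard-code true because
Xen itself always has EFER.LMA set, while the HVM (shadow) caller
mirrors the guest's own long-mode state.

    /* PV caller (e.g. ptwr_do_page_fault()): Xen always runs in long mode. */
    struct x86_emulate_ctxt ctxt = {
        .regs      = regs,
        .vendor    = d->arch.cpuid->x86_vendor,
        .lma       = true,  /* Accurate statement of processor state. */
        .addr_size = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
        .sp_size   = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
    };

    /* HVM caller (shadow emulation): reflect the guest's EFER.LMA. */
    sh_ctxt->ctxt.lma = hvm_long_mode_active(v);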

>
>> --- a/xen/arch/x86/mm/shadow/common.c
>> +++ b/xen/arch/x86/mm/shadow/common.c
>> @@ -328,15 +328,14 @@ const struct x86_emulate_ops *shadow_init_emulation(
>>  
>>      sh_ctxt->ctxt.regs = regs;
>>      sh_ctxt->ctxt.vendor = v->domain->arch.cpuid->x86_vendor;
>> +    sh_ctxt->ctxt.lma = hvm_long_mode_active(v);
>>  
>>      /* Segment cache initialisation. Primed with CS. */
>>      creg = hvm_get_seg_reg(x86_seg_cs, sh_ctxt);
>>  
>>      /* Work out the emulation mode. */
>> -    if ( hvm_long_mode_active(v) && creg->attr.fields.l )
>> -    {
>> +    if ( sh_ctxt->ctxt.lma && creg->attr.fields.l )
>>          sh_ctxt->ctxt.addr_size = sh_ctxt->ctxt.sp_size = 64;
>> -    }
>>      else
>>      {
>>          sreg = hvm_get_seg_reg(x86_seg_ss, sh_ctxt);
> The change itself looks fine, but the absence of a PV counterpart
> makes me wonder if there isn't breakage elsewhere. Is the
> "if ( is_hvm_domain(d) )" immediately ahead of the call site of
> this function an unintended leftover? Yet having gone through
> sh_page_fault() twice, I still can't spot where PV would be
> diverted such that it can't come here.

The possibility of PV guests getting here was excluded when
autotranslate was ripped out, and just out of context above, we
ASSERT(is_hvm_vcpu(v));
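
For reference, the assertion sits immediately above the quoted hunk;
roughly as below (parameter names assumed from the identifiers used in
the hunk, surrounding detail elided):

    const struct x86_emulate_ops *shadow_init_emulation(
        struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs)
    {
        struct vcpu *v = current;
        /* ... */
        ASSERT(is_hvm_vcpu(v));  /* PV excluded since autotranslate was removed. */

        sh_ctxt->ctxt.regs = regs;
        sh_ctxt->ctxt.vendor = v->domain->arch.cpuid->x86_vendor;
        sh_ctxt->ctxt.lma = hvm_long_mode_active(v);
        /* ... */
    }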

>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
>> @@ -457,6 +457,9 @@ struct x86_emulate_ctxt
>>      /* Set this if writes may have side effects. */
>>      bool force_writeback;
>>  
>> +    /* Long mode active? */
>> +    bool lma;
> I don't think this should be in the input only section, as an insn
> can have the effect of clearing it.
>
> Also should x86_emulate_wrapper() perhaps assert the field to
> be false for 32-bit (test harness) builds?

I'd prefer not to.  There is nothing in principle wrong with trying to
feed a 32bit compatibility segment through a 32bit build of the test
harness.
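
For clarity, the check being declined would look something like the
following in x86_emulate_wrapper() (illustrative only, and deliberately
not being added):

    /* Hypothetical, per the suggestion above - not added: */
    #ifndef __x86_64__
        ASSERT(!ctxt->lma);  /* Would reject legitimate compat-mode emulation
                              * in a 32bit build of the test harness. */
    #endif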

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

