
Re: [Xen-devel] [PATCH v2 2/2] x86/HVM: split page straddling emulated accesses in more cases



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 04 September 2018 08:44
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Olaf Hering <olaf@xxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: [PATCH v2 2/2] x86/HVM: split page straddling emulated
> accesses in more cases
> 
> >>> On 03.09.18 at 18:15, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 03 September 2018 16:45
> >>[...]
> >> The extra call to known_glfn() from hvmemul_write() is just to preserve
> >> original behavior; we should consider dropping this (also to make sure
> >> the intended effect of 8cbd4fb0b7 ["x86/hvm: implement hvmemul_write()
> >> using real mappings"] is achieved in all cases where it can be achieved
> >> without further rework), or alternatively we perhaps ought to mirror
> >> this in hvmemul_rmw().
> 
> If you really want me to do the change below, could I have an
> opinion on the above as well, as this may imply further re-work
> of the patch?

It seems to me that the behaviour should be mirrored in hvmemul_rmw() to be 
correct.
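
To illustrate the shape I mean (just a sketch, not actual patch code: it
reuses known_glfn() from your patch plus the helpers hvmemul_write() already
uses, and elides the rmw-specific handling and the rest of the error paths):

    void *mapping = NULL;

    /*
     * Sketch only: in hvmemul_rmw(), once the access has been translated
     * to a linear address, skip the attempt to map real memory when the
     * GLA is already known to reference MMIO, the same way hvmemul_write()
     * consults known_glfn().
     */
    if ( !known_glfn(addr, bytes, pfec) )
    {
        mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
        if ( IS_ERR(mapping) )
            return ~PTR_ERR(mapping);
    }

    if ( !mapping )
    {
        /*
         * No real mapping, i.e. the access references MMIO: placeholder
         * for whatever MMIO-based read-modify-write fallback is chosen.
         */
        return X86EMUL_UNHANDLEABLE;
    }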

> 
> >> --- a/xen/arch/x86/hvm/emulate.c
> >> +++ b/xen/arch/x86/hvm/emulate.c
> >> @@ -1041,6 +1041,110 @@ static inline int hvmemul_linear_mmio_wr
> >>                                        pfec, hvmemul_ctxt, translate);
> >>  }
> >>
> >> +static bool known_glfn(unsigned long addr, unsigned int bytes, uint32_t pfec)
> >> +{
> >> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> >> +
> >> +    if ( pfec & PFEC_write_access )
> >> +    {
> >> +        if ( !vio->mmio_access.write_access )
> >> +            return false;
> >> +    }
> >> +    else if ( pfec & PFEC_insn_fetch )
> >> +    {
> >> +        if ( !vio->mmio_access.insn_fetch )
> >> +            return false;
> >> +    }
> >> +    else if ( !vio->mmio_access.read_access )
> >> +            return false;
> >> +
> >> +    return (vio->mmio_gla == (addr & PAGE_MASK) &&
> >> +            (addr & ~PAGE_MASK) + bytes <= PAGE_SIZE);
> >> +}
> >> +
> >
> > Would it be possible to split the introduction of the above function into a
> > separate patch? AFAICT it does not seem to be intrinsically involved with
> > handling page straddling. It was certainly not there in your RFC patch.
> 
> The need for (or at least desirability of) it became obvious with the addition
> of the logic to the write and rmw paths. It _could_ be split out, but it now
> is a strictly necessary part/prereq of the change here.

I was thinking it would be clearer to introduce known_glfn() in a patch prior
to this one, and then use it in the if statements just after the calls to
hvmemul_virtual_to_linear() in the read and write paths, possibly re-working
rmw at that point as well.
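
Something like this for the read side (only a sketch, following the shape of
the existing __hvmemul_read() and keeping its variable names; the write side
would be analogous):

    rc = hvmemul_virtual_to_linear(seg, offset, bytes, &reps, access_type,
                                   hvmemul_ctxt, &addr);
    if ( rc != X86EMUL_OKAY || !bytes )
        return rc;

    /*
     * Sketch only: replace the open-coded mmio_access/mmio_gla checks with
     * a single call to known_glfn(), directly after the translation.
     */
    if ( known_glfn(addr, bytes, pfec) )
        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
                                        hvmemul_ctxt, 1);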

  Paul

> 
> Jan
> 



 

