
Re: [Xen-devel] [PATCH v2 01/17] x86/hvm: simplify hvmemul_do_io()



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 17 June 2015 14:31
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v2 01/17] x86/hvm: simplify hvmemul_do_io()
> 
> >>> On 11.06.15 at 17:42, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -51,41 +51,23 @@ static void hvmtrace_io_assist(int is_mmio, ioreq_t *p)
> >  }
> >
> >  static int hvmemul_do_io(
> > -    int is_mmio, paddr_t addr, unsigned long *reps, int size,
> > -    paddr_t ram_gpa, int dir, int df, void *p_data)
> > +    int is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
> > +    uint8_t dir, bool_t df, bool_t data_is_addr, uint64_t data)
> 
> is_mmio surely can become bool_t too?
> 

Yes, it can.
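
i.e. the signature would then become:

    static int hvmemul_do_io(
        bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
        uint8_t dir, bool_t df, bool_t data_is_addr, uint64_t data)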

> >  {
> >      struct vcpu *curr = current;
> > -    struct hvm_vcpu_io *vio;
> > +    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
> >      ioreq_t p = {
> >          .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
> >          .addr = addr,
> >          .size = size,
> >          .dir = dir,
> >          .df = df,
> > -        .data = ram_gpa,
> > -        .data_is_ptr = (p_data == NULL),
> > +        .data = data,
> > +        .data_is_ptr = data_is_addr, /* ioreq_t field name is misleading */
> >      };
> > -    unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
> > -    p2m_type_t p2mt;
> > -    struct page_info *ram_page;
> > +    void *p_data = (void *)data;
> 
> While presumably compiling fine, casting uint64_t to a pointer looks
> bogus without an intermediate cast to (unsigned) long or (u)intptr_t.
> 

Ok. I may just change the function's data argument to a uintptr_t then.
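
Something like:

    void *p_data = (void *)(uintptr_t)data;

(or, with the argument itself changed to uintptr_t, the equivalent cast at the
call sites instead).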

> > @@ -190,7 +141,12 @@ static int hvmemul_do_io(
> >      p.count = *reps;
> >
> >      if ( dir == IOREQ_WRITE )
> > +    {
> > +        if ( !data_is_addr )
> > +            memcpy(&p.data, p_data, size);
> > +
> >          hvmtrace_io_assist(is_mmio, &p);
> > +    }
> 
> Why would you need to do this _only_ here (i.e. either not at all,
> or not just for tracing purposes)? Or is this just to _restore_ the
> original value for tracing (in which case the question would be
> whether it indeed can get modified)?

Not sure I follow. If hvmemul_do_io is called with !data_is_addr then the data 
arg will be a pointer to a buffer, rather than a gpa. So the buffer pointer that 
was first copied into the ioreq (i.e. p) needs to be overwritten with the actual 
data if it's a write.
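
To spell it out (sketch only, with the uintptr_t cast from above):

    ioreq_t p = {
        ...
        .data = data,      /* buffer pointer if !data_is_addr */
        ...
    };
    void *p_data = (void *)(uintptr_t)data;
    ...
    if ( dir == IOREQ_WRITE )
    {
        /* Replace the pointer value in p.data with the actual payload. */
        if ( !data_is_addr )
            memcpy(&p.data, p_data, size);

        hvmtrace_io_assist(is_mmio, &p);
    }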

> 
> > -int hvmemul_do_pio(
> > -    unsigned long port, unsigned long *reps, int size,
> > -    paddr_t ram_gpa, int dir, int df, void *p_data)
> > +int hvmemul_do_io_buffer(
> 
> static?

Ok.

> 
> > +    bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
> > +    uint8_t dir, bool_t df, uint8_t *buffer)
> >  {
> > -    return hvmemul_do_io(0, port, reps, size, ram_gpa, dir, df, p_data);
> > +    int rc;
> > +
> > +    BUG_ON(buffer == NULL);
> > +
> > +    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 0,
> > +                       (uint64_t)buffer);
> 
> Same remark regarding a cast between uint64_t and a pointer.
> 

Ok.
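
i.e.:

    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 0,
                       (uintptr_t)buffer);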

> > +static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
> > +{
> > +    struct vcpu *v = current;
> > +    struct domain *d = v->domain;
> 
> curr and currd please (albeit I don't see the need for the former as
> a local variable).
> 

Ok.
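
So presumably just:

    static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
    {
        struct domain *currd = current->domain;
        ...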

> > +static void hvmemul_release_page(struct page_info *page)
> 
> inline?
> 

Probably, but I was hoping the compiler would make that decision.

> > +int hvmemul_do_io_addr(
> 
> static?
> 

Ok.

> > +    bool_t is_mmio, paddr_t addr, unsigned long *reps,
> > +    unsigned int size, uint8_t dir, bool_t df, paddr_t ram_gpa)
> > +{
> > +    struct page_info *ram_page;
> > +    int rc;
> > +
> > +    rc = hvmemul_acquire_page(paddr_to_pfn(ram_gpa), &ram_page);
> > +    if ( rc != X86EMUL_OKAY )
> > +        return rc;
> > +
> > +    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 1,
> > +                       (uint64_t)ram_gpa);
> 
> Pointless cast.
> 

Yes, I guess.

> > +/*
> > + * Perform I/O between <port> and <buffer>. <dir> indicates the
> > + * direction: IOREQ_READ means a read from <port> to <buffer> and
> > + * IOREQ_WRITE means a write from <buffer> to <port>. Each access has
> > + * width <size> and up to *<reps> accesses will be performed. If
> > + * X86EMUL_OKAY is returned then <reps> will be updated with the number
> > + * of accesses actually performed.
> > + *
> > + * NOTE: If *<reps> is greater than 1, each access will use the
> > + *       <buffer> pointer; there is no implicit iteration over a
> > + *       block of memory starting at <buffer>.
> > + */
> > +int hvmemul_do_pio_buffer(uint16_t port,
> > +                          unsigned long *reps,
> 
> Considering the comment - can't you instead drop the reps parameter
> here then?
> 

No. A rep stos (to an MMIO region, say) does multiple I/O writes from the same 
buffer pointer.
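
For example (sketch only, names made up for illustration): emulating a rep stos
that hits MMIO comes down to a single call through the underlying
hvmemul_do_io_buffer():

    unsigned long reps = count;   /* rep count taken from the emulator */
    /* one MMIO write per rep, each fetching from the same value buffer */
    rc = hvmemul_do_io_buffer(1 /* is_mmio */, gpa, &reps, bytes_per_rep,
                              IOREQ_WRITE, df, buf);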

> > @@ -1142,7 +1236,8 @@ static int hvmemul_read_io(
> >  {
> >      unsigned long reps = 1;
> >      *val = 0;
> > -    return hvmemul_do_pio(port, &reps, bytes, 0, IOREQ_READ, 0, val);
> > +    return hvmemul_do_pio_buffer(port, &reps, bytes, IOREQ_READ,
> > +                                 (uint8_t *)val);
> >  }
> >
> >  static int hvmemul_write_io(
> > @@ -1152,7 +1247,8 @@ static int hvmemul_write_io(
> >      struct x86_emulate_ctxt *ctxt)
> >  {
> >      unsigned long reps = 1;
> > -    return hvmemul_do_pio(port, &reps, bytes, 0, IOREQ_WRITE, 0, &val);
> > +    return hvmemul_do_pio_buffer(port, &reps, bytes, IOREQ_WRITE,
> > +                                 (uint8_t *)&val);
> 
> To avoid such bogus casts, please have hvmemul_do_pio_buffer()
> take void * instead.
> 

Ok.
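
i.e.:

    int hvmemul_do_pio_buffer(uint16_t port, unsigned long *reps,
                              unsigned int size, uint8_t dir, void *buffer);

...with the call sites then passing val / &val directly, no casts needed.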

  Paul

> Jan
