
Re: [Xen-devel] [PATCH v4 15/17] x86/hvm: use ioreq_t to track in-flight state



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 25 June 2015 10:51
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 15/17] x86/hvm: use ioreq_t to track in-flight state
> 
> >>> On 24.06.15 at 13:24, <paul.durrant@xxxxxxxxxx> wrote:
> > Use an ioreq_t rather than open coded state, size, dir and data fields
> > in struct hvm_vcpu_io. This also allows PIO completion to be handled
> > similarly to MMIO completion by re-issuing the handle_pio() call.
> 
> Aren't you referring to ...
> 
> > @@ -501,11 +501,12 @@ void hvm_do_resume(struct vcpu *v)
> >          (void)handle_mmio();
> 
> ... this one as the reference?
> 
> >          break;
> >      case HVMIO_pio_completion:
> > -        if ( vio->io_size == 4 ) /* Needs zero extension. */
> > -            guest_cpu_user_regs()->rax = (uint32_t)vio->io_data;
> > +        if ( vio->io_req.size == 4 ) /* Needs zero extension. */
> > +            guest_cpu_user_regs()->rax = (uint32_t)vio->io_req.data;
> >          else
> > -            memcpy(&guest_cpu_user_regs()->rax, &vio->io_data, vio->io_size);
> > -        vio->io_state = STATE_IOREQ_NONE;
> > +            memcpy(&guest_cpu_user_regs()->rax, &vio->io_req.data,
> > +                   vio->io_req.size);
> > +        vio->io_req.state = STATE_IOREQ_NONE;
> >          break;
> 
> I.e. shouldn't I expect to see a handle_pio() call here? Or where
> else is this new handle_pio() call going to show up?
> 

Damn. Looks like a hunk got dropped in a rebase somewhere. I need to fix this.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

