
Re: [Xen-devel] [PATCH v4 12/17] x86/hvm: split I/O completion handling from state model



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 25 June 2015 10:41
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 12/17] x86/hvm: split I/O completion handling from
> state model
> 
> >>> On 24.06.15 at 13:24, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -428,26 +429,12 @@ static void hvm_io_assist(ioreq_t *p)
> >          vio->io_state = HVMIO_completed;
> >          vio->io_data = p->data;
> >          break;
> > -    case HVMIO_handle_mmio_awaiting_completion:
> > -        vio->io_state = HVMIO_completed;
> > -        vio->io_data = p->data;
> > -        (void)handle_mmio();
> > -        break;
> > -    case HVMIO_handle_pio_awaiting_completion:
> > -        if ( vio->io_size == 4 ) /* Needs zero extension. */
> > -            guest_cpu_user_regs()->rax = (uint32_t)p->data;
> > -        else
> > -            memcpy(&guest_cpu_user_regs()->rax, &p->data, vio->io_size);
> > -        break;
> >      default:
> >          break;
> >      }
> >
> > -    if ( p->state == STATE_IOREQ_NONE )
> > -    {
> > -        msix_write_completion(curr);
> > -        vcpu_end_shutdown_deferral(curr);
> > -    }
> > +    msix_write_completion(curr);
> > +    vcpu_end_shutdown_deferral(curr);
> >  }
> 
> Doesn't this mean that these can now potentially be called more than
> once for a single instruction (due to retries)? That's presumably not a
> problem for vcpu_end_shutdown_deferral(), but I'm uncertain about
> msix_write_completion().
> 

Actually, I think this is the wrong place for this now; it should have been 
moved in patch 11 or earlier. I found I need a completion function for 
stdvga, so I'll use the same mechanism for msix_write_completion(). As you 
say, vcpu_end_shutdown_deferral() can stay.
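
Roughly what I have in mind is to give the MSI-X write its own completion
case in hvm_do_resume(), so it runs exactly once per instruction instead of
on every retry. Just a sketch; the enumerator name and exact placement are
hypothetical until the stdvga patch settles:

    /* Sketch: HVMIO_msix_completion is a hypothetical new value. */
    switch ( io_completion )
    {
    ...
    case HVMIO_msix_completion:
        /* The ioreq has fully completed, so this runs only once. */
        msix_write_completion(curr);
        break;
    ...
    }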

> > @@ -508,8 +496,36 @@ void hvm_do_resume(struct vcpu *v)
> >          }
> >      }
> >
> > -    if ( vio->mmio_retry )
> > +    io_completion = vio->io_completion;
> > +    vio->io_completion = HVMIO_no_completion;
> > +
> > +    switch ( io_completion )
> > +    {
> > +    case HVMIO_no_completion:
> > +        break;
> > +    case HVMIO_mmio_completion:
> >          (void)handle_mmio();
> > +        break;
> > +    case HVMIO_pio_completion:
> > +        if ( vio->io_size == 4 ) /* Needs zero extension. */
> > +            guest_cpu_user_regs()->rax = (uint32_t)vio->io_data;
> > +        else
> > +            memcpy(&guest_cpu_user_regs()->rax, &vio->io_data, vio->io_size);
> > +        vio->io_state = HVMIO_none;
> > +        break;
> > +    case HVMIO_realmode_completion:
> > +    {
> > +        struct hvm_emulate_ctxt ctxt;
> > +
> > +        hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
> > +        vmx_realmode_emulate_one(&ctxt);
> > +        hvm_emulate_writeback(&ctxt);
> > +
> > +        break;
> > +    }
> > +    default:
> > +        BUG();
> 
> I'd prefer this to be ASSERT_UNREACHABLE(), as I don't see
> getting here as a reason to crash the hypervisor (and the
> guest would likely crash in such a case anyway). Or maybe
> fold in the first case and make this
> ASSERT(io_completion == HVMIO_no_completion).
> 

Ok.
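
I.e. drop the explicit HVMIO_no_completion case and end the switch with
something like:

    default:
        ASSERT(io_completion == HVMIO_no_completion);
        break;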

> > @@ -46,9 +51,10 @@ struct hvm_vcpu_asid {
> >
> >  struct hvm_vcpu_io {
> >      /* I/O request in flight to device model. */
> > -    enum hvm_io_state   io_state;
> > -    unsigned long       io_data;
> > -    int                 io_size;
> > +    enum hvm_io_state      io_state;
> > +    unsigned long          io_data;
> > +    int                    io_size;
> 
> Unless this can validly be negative, please make this unsigned int
> since you're touching it anyway.
> 

Sure.
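
So the declaration becomes (sketch, remaining fields elided):

    struct hvm_vcpu_io {
        /* I/O request in flight to device model. */
        enum hvm_io_state      io_state;
        unsigned long          io_data;
        unsigned int           io_size;
        ...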

  Paul

> Jan

