
Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug controller implementation into Xen


  • To: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Thu, 30 Jan 2014 16:06:39 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 30 Jan 2014 16:07:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPHcZafuHlqvfYM0ajpdfz4KchSJqdWzcAgAARQVA=
  • Thread-topic: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug controller implementation into Xen

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
[snip]
> > +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> > +                              domid_t domid,
> > +                              uint32_t slot)
> 
> Take enable as a parameter and save having 2 almost identical functions?
> 

I was in two minds about that. Internally it's a single HVMOP with an enable/disable
parameter (as can be seen below), but I thought it was neater to keep the
separation at the API level.
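
For illustration, the combined version would just be the same body with
enable passed through; something like this (a sketch only, the name
xc_hvm_pci_hotplug is illustrative):

    int xc_hvm_pci_hotplug(xc_interface *xch,
                           domid_t domid,
                           uint32_t slot,
                           uint8_t enable)
    {
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
        int rc;

        arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
        if ( arg == NULL )
            return -1;

        hypercall.op     = __HYPERVISOR_hvm_op;
        hypercall.arg[0] = HVMOP_pci_hotplug;
        hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

        arg->domid = domid;
        arg->enable = enable;   /* caller decides enable vs. disable */
        arg->slot = slot;

        rc = do_xen_hypercall(xch, &hypercall);
        xc_hypercall_buffer_free(xch, arg);

        return rc;
    }

The separate enable/disable entry points could then remain as thin wrappers
around it if keeping the split API turns out to be preferable.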

> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_pci_hotplug;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +    arg->domid = domid;
> > +    arg->enable = 1;
> > +    arg->slot = slot;
> > +    rc = do_xen_hypercall(xch, &hypercall);
> > +    xc_hypercall_buffer_free(xch, arg);
> > +    return rc;
> > +}
> > +
> > +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> > +                               domid_t domid,
> > +                               uint32_t slot)
> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_pci_hotplug;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +    arg->domid = domid;
> > +    arg->enable = 0;
> > +    arg->slot = slot;
> > +    rc = do_xen_hypercall(xch, &hypercall);
> > +    xc_hypercall_buffer_free(xch, arg);
> > +    return rc;
> > +}
> > +
[snip]
> > +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> > +                          domid_t domid,
> > +                          uint32_t slot);
> > +
> 
> tabs/spaces
> 

Yep.

[snip]
> > +int gpe_init(struct domain *d)
> > +{
> > +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> > +
> > +    hp->domain = d;
> > +
> > +    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);
> 
> This size is known at compile time - what about arrays inside
> hvm_hotplug and forgo the small memory allocations?
> 

Yes, that seems reasonable.
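
i.e. something like the following in the structure itself (sketch only,
keeping GPE_LEN and the field names from the patch):

    /* in struct hvm_hotplug */
    uint8_t gpe_sts[GPE_LEN / 2];   /* embedded rather than xzalloc'd */
    uint8_t gpe_en[GPE_LEN / 2];

which also means gpe_init() loses the allocation failure paths and
gpe_deinit() has nothing left to free.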

[snip]
> > +struct hvm_hotplug {
> > +    struct domain   *domain;
> 
> This appears to be found by using container_of(), which will help keep
> the size of struct domain down.
> 

Sure.
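
Something along these lines should work for recovering the domain, assuming
the usual nesting of hvm_domain inside arch_domain inside struct domain
(sketch; the helper name is illustrative):

    static struct domain *hotplug_to_domain(struct hvm_hotplug *hp)
    {
        struct hvm_domain *hvm =
            container_of(hp, struct hvm_domain, hotplug);
        struct arch_domain *arch =
            container_of(hvm, struct arch_domain, hvm_domain);

        return container_of(arch, struct domain, arch);
    }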

> > +    uint8_t         *gpe_sts;
> > +    uint8_t         *gpe_en;
> > +
> > +    /* PCI hotplug */
> > +    uint32_t        slot_up;
> > +    uint32_t        slot_down;
> > +};
> > +
> >  struct hvm_domain {
> >      struct list_head        ioreq_server_list;
> >      spinlock_t              ioreq_server_lock;
> > @@ -73,6 +83,8 @@ struct hvm_domain {
> >      uint32_t                pci_cf8;
> >      spinlock_t              pci_lock;
> >
> > +    struct hvm_hotplug      hotplug;
> > +
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
> > index 86db58d..072bfe7 100644
> > --- a/xen/include/asm-x86/hvm/io.h
> > +++ b/xen/include/asm-x86/hvm/io.h
> > @@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
> >  void stdvga_deinit(struct domain *d);
> >
> >  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> > +
> > +int gpe_init(struct domain *d);
> > +void gpe_deinit(struct domain *d);
> > +
> > +void pci_hotplug(struct domain *d, int slot, bool_t enable);
> > +
> >  #endif /* __ASM_X86_HVM_IO_H__ */
> >
> > diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> > index 6b31189..20a53ab 100644
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
> >  typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
> >
> > +#define HVMOP_pci_hotplug 24
> > +struct xen_hvm_pci_hotplug {
> > +    domid_t domid;          /* IN - domain to be serviced */
> > +    uint8_t enable;         /* IN - enable or disable? */
> > +    uint32_t slot;          /* IN - slot to enable/disable */
> 
> Reordering these two will make the structure smaller.
> 

It will indeed.
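
As a quick sanity check on the layout, a scratch program with
offsetof()/sizeof() shows where the padding ends up for each ordering
(illustrative only; domid_t is uint16_t in the public headers):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t domid_t;   /* as in the Xen public headers */

    struct layout_a { domid_t domid; uint8_t enable; uint32_t slot; };  /* as posted */
    struct layout_b { domid_t domid; uint32_t slot; uint8_t enable; };  /* reordered */

    int main(void)
    {
        printf("as posted: %zu bytes, slot at offset %zu\n",
               sizeof(struct layout_a), offsetof(struct layout_a, slot));
        printf("reordered: %zu bytes, slot at offset %zu\n",
               sizeof(struct layout_b), offsetof(struct layout_b, slot));
        return 0;
    }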

   Paul

> ~Andrew
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel