
Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug controller implementation into Xen


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Fri, 14 Mar 2014 13:25:24 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Fri, 14 Mar 2014 13:25:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPOIH8Enzh1gHLXECN/hU+s7vX4prgd2kAgAAnWnA=
  • Thread-topic: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug controller implementation into Xen

> -----Original Message-----
> From: Ian Campbell
> Sent: 14 March 2014 11:58
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug
> controller implementation into Xen
> 
> On Wed, 2014-03-05 at 14:48 +0000, Paul Durrant wrote:
> > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > index 2e52470..4176440 100644
> > --- a/tools/libxl/libxl_pci.c
> > +++ b/tools/libxl/libxl_pci.c
> > @@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t
> domid, libxl_device_pci *pcidev, i
> >          }
> >          if ( rc )
> >              return ERROR_FAIL;
> > +
> > +        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
> > +        if (rc < 0) {
> > +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error:
> xc_hvm_pci_hotplug_enable failed");
> > +            return ERROR_FAIL;
> > +        }
> 
> Perhaps I'm misreading this but does this imply that you cannot hotplug
> PCI devices into an HVM guest which wasn't started with a PCI device?
> That doesn't sound right/desirable.
> 

I don't think that is the case. The extra code here is needed because we're 
intercepting the hotplug controller I/O space in Xen: QEMU may well still 
manipulate its own hotplug controller device model, but the guest will never 
see it.

> > diff --git a/xen/include/public/hvm/ioreq.h
> b/xen/include/public/hvm/ioreq.h
> > index e84fa75..40bfa61 100644
> > --- a/xen/include/public/hvm/ioreq.h
> > +++ b/xen/include/public/hvm/ioreq.h
> > @@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
> >  #define ACPI_PM_TMR_BLK_ADDRESS_V1
> (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
> >  #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
> >  #define ACPI_GPE0_BLK_LEN_V1         0x04
> > +#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
> > +#define ACPI_PCI_HOTPLUG_LEN_V1      0x10
> 
> This section is to do with qemu, perhaps having moved this to Xen these
> should be in their own new section?
> 

That sounds reasonable.

> Is there no problem with the availability of the i/o space for the
> different versions of qemu (i.e. they are both the same today?) The AML
> looked like it poked a different thing in the trad case -- so is 0xae00
> unused there?
> 

QEMU will still emulate a PCI hotplug controller, but the guest will no longer 
see it. In the upstream case that I/O range is now handled by Xen, so the guest 
really cannot get to it. If trad is used then the hotplug controller would 
still be visible if the guest talked to the old I/O ranges, but since they are 
no longer specified in the ACPI tables it shouldn't have anything to do with 
them. If you think that's a problem then I could hook those I/O ranges in Xen 
too and stop the I/O getting through.

  Paul

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

