
Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug controller implementation into Xen



On Wed, 2014-03-05 at 14:48 +0000, Paul Durrant wrote:
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index 2e52470..4176440 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
>          }
>          if ( rc )
>              return ERROR_FAIL;
> +
> +        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
> +        if (rc < 0) {
> +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
> +            return ERROR_FAIL;
> +        }

Perhaps I'm misreading this, but does it imply that you cannot hotplug
PCI devices into an HVM guest which wasn't started with a PCI device?
That doesn't sound right or desirable.
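To put the concern concretely: something like the untested sketch below,
which assumes the xc_hvm_pci_hotplug_enable() libxc call this series adds
(the wrapper and its domid/slot plumbing are purely illustrative), is
what I would expect to succeed even for a guest that booted with no PCI
devices at all:

    /* Untested sketch: enable hotplug for a slot in a guest which was
     * started without any PCI devices. Relies on the
     * xc_hvm_pci_hotplug_enable() call introduced by this series; the
     * wrapper and its parameters are illustrative only. */
    #include <stdio.h>
    #include <xenctrl.h>

    static int try_hotplug_enable(xc_interface *xch, uint32_t domid,
                                  uint32_t slot)
    {
        int rc = xc_hvm_pci_hotplug_enable(xch, domid, slot);

        if (rc < 0)
            fprintf(stderr, "slot %u: hotplug enable failed (rc=%d)\n",
                    slot, rc);

        return rc;
    }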

> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index e84fa75..40bfa61 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
>  #define ACPI_PM_TMR_BLK_ADDRESS_V1   (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
>  #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
>  #define ACPI_GPE0_BLK_LEN_V1         0x04
> +#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
> +#define ACPI_PCI_HOTPLUG_LEN_V1      0x10

This part of the header relates to qemu; now that the implementation has
moved into Xen, perhaps these defines should go in their own new section?
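i.e. something along the lines of the sketch below -- just an
illustration of the layout, reusing the values from the hunk above; the
comment wording is mine:

    /*
     * PCI hotplug controller, implemented in Xen rather than qemu as of
     * this series (sketch of a possible standalone section).
     */
    #define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
    #define ACPI_PCI_HOTPLUG_LEN_V1      0x10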

Is there no problem with the availability of this I/O space for the
different versions of qemu (i.e. are they both the same today)? The AML
looked like it poked a different location in the qemu-trad case -- so is
0xae00 unused there?

Ian.




 

