
Re: [Xen-devel] [PATCH] ioreq-server: handle IOREQ_TYPE_PCI_CONFIG in assist function



On 27/01/15 19:06, Wei Liu wrote:
> QEMU stubdom will read PCI config space when enumerating PCI devices.
> Xen should return ~0 when there is no suitable ioreq server to dispatch
> the request.
>
> Without this patch, QEMU stubdom will fail to start because hvmloader
> fails the following assertion:
>
> 118         ASSERT((devfn != PCI_ISA_DEVFN) ||
> 119                ((vendor_id == 0x8086) && (device_id == 0x7000)));
>
> because vendor_id and device_id are 0.
>
> This fixes a regression for QEMU stubdom. It should be backported to 4.5
> as well.
>
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

The patch is clearly a good bugfix, but I am not sure the commit message
is accurate.

A Qemu stubdom is a PV guest, not an HVM one, so it will not trigger
this path in Xen.  It is HVMLoader that scans the PCI bus.

I presume, given the description, that when a Qemu stubdom is used (as
opposed to a dom0 qemu), it is not registered as the default ioreq
server, causing Xen to complete the config cycles itself (and
incorrectly return 0 instead of ~0)?

~Andrew

> ---
>  xen/arch/x86/hvm/hvm.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c7984d1..c826ac5 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2577,6 +2577,7 @@ static bool_t hvm_complete_assist_req(ioreq_t *p)
>      {
>      case IOREQ_TYPE_COPY:
>      case IOREQ_TYPE_PIO:
> +    case IOREQ_TYPE_PCI_CONFIG:
>          if ( p->dir == IOREQ_READ )
>          {
>              if ( !p->data_is_ptr )



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
