
Re: [Xen-devel] Multiple IRQ's in HVM for Windows


  • To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Sat, 27 Dec 2008 10:39:19 +0000
  • Cc:
  • Delivery-date: Sat, 27 Dec 2008 02:39:47 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclnCjPkeAYvm5WbTaqM+PmGGNFiHQAIdYQ3AAU6BeAAATbLsQAADYFAAADLT9kALf9OEAABLL7gAAAL4JAAAN6snAAABoKQAADccJAAAAwiEAAAhgyJ
  • Thread-topic: [Xen-devel] Multiple IRQ's in HVM for Windows

On 27/12/2008 10:28, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:

> Well... the 'old' way would probably still have to work (or would it?),
> so we could just keep allocating IRQs until we run out, and any leftover
> devices would just have to use the old way.

Yes, that did occur to me. It might be a nice fallback while still allowing up
to 16 or so devices to have their interrupts distributed across VCPUs. The old
mechanism does still need to work anyway, so making it the fallback in the new
mechanism would probably not be too difficult.

> I've mentioned the possibility of using MSI before... would that work?
> I'm not yet sure whether it's supported across all Windows versions, but
> we'd get lots more 'interrupt channels'...

Well, would Windows need to see more fake PCI devices (from which these MSIs
would emanate) for this to work? It would be nice, though perhaps not
essential, to avoid that, since it would need backwards-compatible changes to
qemu-dm, and possibly to our vBIOS as well.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

