
RE: [Xen-devel] trigger an interrupt in HVM


  • To: "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Thu, 21 Aug 2008 12:32:21 +1000
  • Cc:
  • Delivery-date: Wed, 20 Aug 2008 19:32:44 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AckDI/dJ8Vz6w8p3QGimRDvwpEL7eQADR9aQAAERcdA=
  • Thread-topic: [Xen-devel] trigger an interrupt in HVM

> What's the reason to try this? :-)
> The interrupt injection into HVM guest doesn't use
> evtchn_upcall_pending -- that's for PV guest.
> Maybe you can refer to tools/ioemu/hw/rtl8139.c to see how the rtl8139
> device model injects interrupt into hvm guest (pls see pci_set_irq()).

I should have been clearer. This is from inside the DomU, with the
GPLPV drivers.

The various subsystems (xennet, xenvbd) hook onto the same IRQ as the
PCI device. When the PCI device receives an IRQ signalling that an event
channel is pending, and that event channel belongs to one of the
subsystem devices, its ISR tells Windows that the IRQ wasn't handled, so
Windows then tries the other handlers sharing the line.

Windows makes it very tricky to get 'into' the scsiport context, and an
interrupt is the easiest way to do it. So if the PCI device driver wants
to tell the vbd driver to prepare for a suspend, setting a flag in some
shared data and triggering an interrupt is a good way to do it. The
alternative is a scsiport timer that polls the shared data; that works,
but is a bit ugly.

Previously, I was attaching the subsystem drivers to fake IRQs (e.g. in
the range 31-47) and using the asm 'int x' instruction to call them.
This is really bad for performance and caused a few other problems too.

Thanks

James


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

