
Re: [Xen-devel] PCI passthrough problems after legacy update of xen 4.1



>>> On 02.05.13 at 23:07, Andreas Falck <falck.andreas.lists@xxxxxxxxx> wrote:
> Ok, I have some progress. I also tried with the device I had always
> managed to get through, the Radeon 7790 GPU. This worked equally well with
> both versions of pciif.py. However, it turned out that with the changed
> version, if I pass the GPU first in the pci = [ ... ] list, the other
> devices also get through. This was not the case with the original version
> of pciif.py.
> 
> If (and only if) I order the passthrough list in the config file so that it
> says
> 
> pci = [ '41:00.0', '41:00.1', '04:00.0' ]
> 
> (this corresponds to the GPU, HDMI audio, and USB at IRQs 16, 17, and 19),
> then passthrough of all devices works with the new version of pciif.py ("if
> dev.irq:"), but not with the old version ("if not self.vm.info.is_hvm() and
> dev.irq:"). So the second failure seemingly has to do with some property
> that is set or checked only for the first passed-through device. Logs follow:

Sending xend logs here is only marginally useful, as the errors
almost certainly originate in the hypervisor. Especially since the
ordering of devices matters (which is quite irritating to me), and
since the logs here now show the -EEXIST error that your earlier
mail mentioned, we have to rely on you to help track down the root
cause, by instrumenting the affected hypervisor paths, i.e.
extending the debugging patch that Andrew sent. And without you
explicitly saying so, we can't even be sure whether the hypervisor
log (when run at maximum log level) already contains messages that
might provide some further insight.
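For reference, "maximum log level" here means booting the hypervisor
with its log thresholds raised and then reading its ring buffer. A
sketch, with the option names as commonly used on Xen of this era
(verify against your tree's documentation before relying on them):

    # on the xen.gz line of the bootloader entry:
    loglvl=all guest_loglvl=all

    # after rebooting, dump the hypervisor log (xm matches the
    # xend toolstack in use here):
    xm dmesg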

Also, please don't cross-post - pick either xen-devel or
xen-users, but not both.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

