
[Xen-devel] Re: xenstore pci global parameters



On Wed, 2 Dec 2009, Qing He wrote:
> Hi Stefano,
>       I noticed that MSI-INTx translation doesn't work in recent
> xen-unstable. After some investigation, I tracked it down to the
> change in changeset 20348. With the fix from 20397, the pci devclass
> is now created at the time of the first pci device assignment,
> whether the device is assigned at boot or hotplugged.
> 
>       However, one of the problems is that the pci devclass provides
> pci global parameters, for example pci_msitranslate and
> pci_power_mgmt. The device model used to read the value of these
> global parameters at init time, but now it won't see anything, since
> the pci block in xenstore doesn't exist yet at that point. So the
> problem is that these parameters are effectively ignored and the
> guest can't benefit from them. I could add some flags in qemu
> similar to first_dev, but that looks a little weird.

If I am not mistaken, xend keeps an internal record of those flags, so it
could use that information to set per-device default flags:

xenstore-write opts-$DEVNUM msitranslate=$MSITRANS,power_mgmt=$POWERMGM
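To make that concrete, here is a minimal sketch of what such a
per-device write and read could look like. It assumes the conventional
PCI backend path /local/domain/0/backend/pci/$DOMID/0; the exact path,
key layout and variable names are illustrative assumptions, not a
settled interface:

# example values, not taken from a real configuration
DOMID=7        # guest domain id
DEVNUM=0       # index of the passed-through device in the pci backend
MSITRANS=1
POWERMGMT=0

# xend writes the options next to the device entry it already creates
xenstore-write \
    /local/domain/0/backend/pci/$DOMID/0/opts-$DEVNUM \
    "msitranslate=$MSITRANS,power_mgmt=$POWERMGMT"

# the device model can then read the options back when the device is
# actually plugged, instead of relying on a global pci block at init time
xenstore-read /local/domain/0/backend/pci/$DOMID/0/opts-$DEVNUM

That way the options travel with the device itself, so it no longer
matters that the pci devclass only appears at the time of the first
assignment.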


>       I'm not very familiar with stubdom initialization, so I'd like
> to know how the pci devclass is used in the stubdom, and why there is
> a circular dependency?
> 

The problem was introduced by 19679, "xend: hot-plug PCI devices at
boot-time": 

- xend is creating the guest domain
- as part of this process xend forks and execs stubdom-dm
- afterward xend hotplugs a pci device into the guest
- then xend waits for the device model to say it is done
  (signalDeviceModel), but this never happens: stubdom-dm is still
  trying to create the stubdom and is itself waiting for xend, which
  hasn't replied yet because it is busy with the guest domain.

This problem doesn't exist anymore.
