
Re: libxl / xen-pciback interaction



On 16.03.21 15:27, Jan Beulich wrote:
All,

while trying to test a pciback fix for how SR-IOV VFs get placed in the
guest PCI topology I noticed that my test VM would only ever see the 1st
out of 3 VFs that I passed to it. As it turns out, the driver watches
its own (backend) node, and upon first receiving notification it
evaluates state found in xenstore to set up the backend device.
Subsequently it switches the device to Initialised. After this switch,
no further instances of the watch triggering do anything.

In all instances I observed the watch event getting processed when the
"num_devs" node still reported 1. Trying to deal with this in libxl, by
delaying the writing of the "num_devs" node, led to a fatal error
("num_devs" not being available to read) in the driver, causing the
device to move to Closing state. Therefore I decided that the issue has
to be addressed in the driver, resulting in a patch (reproduced below)
that I'm not overly happy with. I think the present libxl behavior is
wrong - it shouldn't trigger driver initialization without having fully
populated the information the driver is supposed to consume for its
device initialization. The only solution that I can think of, however,
doesn't look very appealing either: Instead of putting all pieces of the
data for one device in a transaction, make a single transaction cover
all devices collectively.

Any reason why you don't like this solution? It's not as if a large
problem would be expected from using a single transaction for all PCI
devices passed through (assuming you didn't mean to pack really all of
the guest's devices into that single transaction).
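
For illustration, a rough userspace sketch of what "write all devices
plus num_devs under one xenstore transaction" could look like, using
the plain libxenstore client API rather than libxl's internal helpers.
The backend path (dom0 backend, devid 0) and the dev-N / num_devs node
names follow the conventional pciback layout; permission setup and
proper error reporting are omitted, so treat this as an assumption-laden
sketch, not actual libxl code:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

/* Write all dev-N nodes and num_devs for one guest in one transaction,
 * so the backend driver never sees a partially populated directory. */
static bool write_all_pci_devs(struct xs_handle *xsh, int domid,
                               const char **sbdfs, unsigned int num)
{
    char path[256], val[32];
    xs_transaction_t t;
    unsigned int i;

again:
    t = xs_transaction_start(xsh);
    if (t == XBT_NULL)
        return false;

    for (i = 0; i < num; i++) {
        /* Value is the SBDF string (plus options) of the device. */
        snprintf(path, sizeof(path),
                 "/local/domain/0/backend/pci/%d/0/dev-%u", domid, i);
        if (!xs_write(xsh, t, path, sbdfs[i], strlen(sbdfs[i])))
            goto abort;
    }

    /* Expose the device count only in the same transaction. */
    snprintf(path, sizeof(path),
             "/local/domain/0/backend/pci/%d/0/num_devs", domid);
    snprintf(val, sizeof(val), "%u", num);
    if (!xs_write(xsh, t, path, val, strlen(val)))
        goto abort;

    if (!xs_transaction_end(xsh, t, false)) {
        if (errno == EAGAIN)
            goto again;    /* conflicting update, retry the transaction */
        return false;
    }
    return true;

abort:
    xs_transaction_end(xsh, t, true);
    return false;
}

With this arrangement the driver's watch can fire as often as it likes:
either it sees nothing yet, or it sees the complete set of devices.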


Juergen
