
Re: [Xen-devel] Test for osstest, features used in Qubes OS



Marek / Ian,

Nice to see PCI-passthrough getting some attention again.

On 17/05/18 17:12, Ian Jackson wrote:
> Marek Marczykowski-Górecki writes ("Re: Test for osstest, features used in 
> Qubes OS"):
>> On Thu, May 17, 2018 at 01:26:30PM +0100, Ian Jackson wrote:
>>> Is it likely that this will depend on non-buggy host firmware ?  If so
>>> then we need to make arrangements to test it and only do it on hosts
>>> which are not buggy.  In practice this probably means wiring it up to
>>> the automatic host examiner.
>>
>> Yes, probably.
> 
> That's not entirely trivial then, especially for you, unless you want
> to set up your own osstest production instance.  However, I can
> probably do the osstest-machinery work if you will help debug it,
> review logs, tell me what to do next, etc. :-).
> 
>>> Is there some kind of cheap USB HID, that is interactable-with, which
>>> we could plug into each machine's USB port ?  I'm slightly concerned
>>> that plugging in a storage device, or connecting the other NIC, might
>>> interfere with booting.
>>
>> I use mass storage for tests... But if you use network boot, it
>> shouldn't really interfere, no?
> 
> We do both network boot and disk boot.  I think the BIOS disk boot has
> to continue to work and boot the HDD.

As a user of pci-passthrough for quite some time, and having reported some 
pci-passthrough bugs in the past, I have a few comments:
- First of all, it would be very nice to get some autotesting :).
- But if you want to test pci-passthrough thoroughly, it will be far from
  easy, since there is quite a multi-dimensional support matrix.
  (I'm not implying that everything should be done, or that it won't be
   valuable if any of it is missing; the list below is only meant for
   reference):
  1) Guest-side implementation:
     - PV guest (pcifront)
     - HVM (qemu-traditional)
     - HVM (qemu-xen)
     - HVM (qemu-upstream)
     - and perhaps PVH support for pci passthrough coming around the corner.

  2) (Un)binding method to pciback:
     - binding pci devices to pciback on host boot (kernel command line)
     - unbinding/rebinding devices from/to dom0 while the host is running
 
  3) (Un)binding to guest:
     - on guest start (guest.cfg pci=[...])
     - after the guest has been started, with the 'xl pci-*' commands
       (a small sketch of (2) and (3) follows after this list)
  4) Device interrupts: legacy versus MSI versus MSI-X
  5) Other pci device features: ROMs, BAR sizes, etc.
  6) AMD versus Intel IOMMU
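
For reference, a minimal sketch of what exercising (2) and (3) could look
like; the BDF 03:00.0 and the domain name are just placeholders, and the
host needs a working IOMMU (see 'virt_caps' in 'xl info'):

    # (2) boot-time binding: add xen-pciback.hide=(03:00.0) to the dom0
    #     kernel command line, or do it at runtime:
    xl pci-assignable-add 03:00.0
    xl pci-assignable-list

    # (3) assign at guest creation time via the guest .cfg:
    #     pci = [ '03:00.0' ]
    # or hot-(un)plug into/from a running guest:
    xl pci-attach guestname 03:00.0
    xl pci-detach guestname 03:00.0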

From past reports, I know that (1) and (3) did matter (with problems being
isolated to only one of those variants).


As for restarting guests and reassigning pci devices to other guests: the
current pciback reset support lacks the bus-reset patches, which are not
present in upstream Linux kernels. Without them, passthrough of AMD Radeon
graphics adapters works only once (if you stop and restart a guest it no
longer works and you need to reboot the host). The bus-reset patches have
been posted to the list and seem to be present in some form in both Qubes
and XenServer, but not in upstream Linux. Someone from Oracle picked them
up to get them upstream some time ago, but that effort seems to have
stalled.
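
As a point of reference, upstream Linux does already expose a generic
function-level reset through sysfs, but that only helps for devices that
support an FLR-style reset in the first place (which the Radeon adapters
above presumably don't, hence the need for the bus-reset patches). A
sketch, with 03:00.0 again just a placeholder:

    # only present if the kernel knows a reset method for this device
    echo 1 > /sys/bus/pci/devices/0000:03:00.0/reset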

The pci-passthrough code in libxl seems to be quite messy, especially the
handling of all the guest-side implementations (1) and the xenstore
interactions that go with them (or don't, for qemu).
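
For anyone poking at that code: for the PV (pcifront/pciback) case the
xenstore nodes that libxl and pciback negotiate over can be inspected with
something like the following (domid 5 is just an example; for HVM guests
with qemu most of this simply isn't there):

    # backend nodes written for the PV pci protocol
    xenstore-ls /local/domain/0/backend/pci/5/0
    # frontend side, as seen by the guest
    xenstore-ls /local/domain/5/device/pci/0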

--
Sander

 
>>> If you want to get pci passthrough tests working I would suggest
>>> testing it with non-stubdom first.  I assume the config etc. is the
>>> same, so having got that working, osstest would be able to test it for
>>> the stubdom tests too.
>>
>> Oh, I thought there are already tests for that...
> 
> There are no PCI passthrough tests at all.  For a while we had some
> SRIOV NIC tests which were requested by Intel.  But they always failed
> giving kernel stack dumps.  We kept poking Intel to get them to fix
> them, or tell us how the tests were wrong, but to no avail.  So we
> dropped them.
> 
> So any work in this area would be greatly appreciated!
> 
> Ian.
> 
> 

