
Re: [Xen-devel] PCIe devices that are hotplugged after MMIO has been setup fail due to _CRS not covering 64-bit area



On Wed, Sep 28, 2016 at 03:21:08AM -0600, Jan Beulich wrote:
> >>> On 27.09.16 at 16:43, <konrad.wilk@xxxxxxxxxx> wrote:
> > If the guest is booted with 'pci' we nicely expand the MMIO region below
> > 4GB and try to fit the BARs in there. If that fails (not enough
> > space) we move them above the memory (64-bit). And throughout all of
> > this we also update the _CRS field to cover these ranges.
> > 
> > (Note, I need to check if the 64-bit area is also set, I think it is).
> > 
> > But the situation is different if we hot-plug a device that has too big
> > a BAR to fit in the MMIO region. We move it into the 64-bit area but we
> > don't update the _CRS. Which means that Linux will complain (unless
> > booted with pci=nocrs). Not sure about Windows but I would assume so
> > too.
> > 
> > I was wondering what would be a good way to solve this? I looked at some
> > Dell machines to see how they deal with hot-plugged PCIe devices and they
> > just declare all the memory in the _CRS (including RAM).
> > 
> > We could do a hybrid - during bootup make the _CRS region have an
> > entry from the end of RAM to .. the end of memory?
> 
> End of physical address space you mean? Generally yes, but we
> need to be a little careful there: For one, on AMD we'd better not
> overlap with the HT area. And then there's this MTRR related
> comment next to the setting of pci_hi_mem_end (albeit both HT
> area start and end of PA space should be aligned well enough).
> 
> > Or perhaps add some extra logic between QEMU and ACPI AML to expand (or
> > perhaps modify the last _CRS entry) when PCIe devices are hotplugged?
> 
> While that would be the most flexible variant, I'd be afraid of this
> getting rather complicated. Or have you already got some
> reasonable layout of how this would look?

I did this, and all the plumbing works great: I can see that
pci_hi_len gets incremented by the size of the 64-bit BARs of the
new device (and decremented again if it is hot-unplugged).
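To make that concrete, here is a minimal sketch of the accounting
(every name below is made up for illustration; the real plumbing
sits between QEMU and the AML-visible fields):

    #include <stdint.h>

    /* Hypothetical stand-ins for the real QEMU <-> AML plumbing. */
    struct pci_dev;
    extern uint64_t bar_sz_64bit_total(const struct pci_dev *d);
    extern void update_crs_hi_window(uint64_t len);

    static uint64_t pci_hi_len;  /* length of the 64-bit window in _CRS */

    void on_device_hotplug(const struct pci_dev *d)
    {
        pci_hi_len += bar_sz_64bit_total(d);  /* grow by its 64-bit BARs */
        update_crs_hi_window(pci_hi_len);
    }

    void on_device_hotunplug(const struct pci_dev *d)
    {
        pci_hi_len -= bar_sz_64bit_total(d);  /* and shrink again */
        update_crs_hi_window(pci_hi_len);
    }

But then I hit a snag: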

Linux evaluates this only once (actually twice, but only during bootup).

That is, if I do the hotplug while the guest is still in GRUB and
then boot, Linux is quite happy. But if I do it after Linux has
booted, the PNP0A03 _CRS is not evaluated again.

The only way I can see it evaluating this again is if a new host
bridge is added and DMAR hotplug support ("Remapping Hardware Unit
Hot Plug") is exposed to the guest. See acpi_pci_root_add in the
Linux code, in particular: if (hotadd && dmar_device_add(handle))
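For reference, the path looks roughly like this (trimmed from memory
out of drivers/acpi/pci_root.c, circa v4.8 - check the real source
for the exact details):

    static int acpi_pci_root_add(struct acpi_device *device,
                                 const struct acpi_device_id *not_used)
    {
            acpi_handle handle = device->handle;
            ...
            /* hotadd is only true when the root arrives after boot */
            if (hotadd && dmar_device_add(handle)) {
                    result = -ENXIO;
                    goto end;
            }
            ...
            root->bus = pci_acpi_scan_root(root);  /* walks _CRS */
            ...
    }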

This means:
- adding bridge support in QEMU for each newly hotplugged device,
- and Intel VT-d support in the guest.

That I think will take a bit of time to get right.

For now let me go with the "simpler" solution of just hardcoding
the end of the physical address space and see how that works out.
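A sketch of what I mean (hedged: cpuid() here just stands in for
hvmloader's helper, and the HT carve-out is per Jan's note about AMD):

    #include <stdint.h>

    extern void cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                      uint32_t *ecx, uint32_t *edx);

    /* AMD HyperTransport decode area: 0xFD00000000 - 0xFFFFFFFFFF. */
    #define HT_AREA_START 0xFD00000000ULL

    static uint64_t pci_hi_window_end(void)
    {
        uint32_t eax, ebx, ecx, edx;

        cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
        /* EAX bits 7:0 = physical address width in bits. */
        uint64_t pa_end = 1ULL << (eax & 0xff);

        /* Stay below the HT area so we never overlap it on AMD. */
        return pa_end > HT_AREA_START ? HT_AREA_START : pa_end;
    }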

> 
> Jan
> 
