
[Xen-devel] PCIe devices that are hotplugged after MMIO has been setup fail due to _CRS not covering 64-bit area



Hey!

If the guest is booted with 'pci' we nicely expand the MMIO region below
4GB and try to fit the BARs in there. If that fails (not enough
space) we move them above the memory (64-bit). And throughout all of
this we also update the _CRS field to cover these ranges.
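
For reference, what ends up in the _CRS for the 64-bit range is a QWORD
Address Space Descriptor. Here is a minimal sketch of encoding one by
hand per the ACPI spec - the function name is mine, and a little-endian
host is assumed:

    #include <stdint.h>
    #include <string.h>

    /* Encode an ACPI QWORD Address Space Descriptor (tag 0x8A) for a
     * memory range, as it would appear inside a _CRS buffer. */
    static size_t encode_qword_memory(uint8_t *buf,
                                      uint64_t base, uint64_t limit)
    {
        uint64_t fields[5] = {
            0,                /* address granularity */
            base,             /* range minimum */
            limit,            /* range maximum (inclusive) */
            0,                /* translation offset */
            limit - base + 1, /* range length */
        };

        buf[0] = 0x8A;        /* large resource: QWORD address space */
        buf[1] = 0x2B;        /* body length: 43 bytes, little endian */
        buf[2] = 0x00;
        buf[3] = 0x00;        /* resource type: memory range */
        buf[4] = 0x0C;        /* general flags: _MIF | _MAF (fixed) */
        buf[5] = 0x01;        /* type-specific flags: read/write */
        memcpy(&buf[6], fields, sizeof(fields));
        return 46;            /* total descriptor size */
    }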

(Note: I need to check whether the 64-bit area is also set; I think it is.)

But the situation is different if we hot-plug a device whose BAR is
too big to fit in the MMIO region. We move it into the 64-bit area but
we don't update the _CRS. Which means that Linux will complain (unless
booted with pci=nocrs). Not sure about Windows but I would assume it
does too.
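
The complaint comes from the OS checking each BAR assignment against
the host bridge windows declared in _CRS; a resource that falls outside
every window can't be claimed. Roughly this check, in a simplified
sketch (the names are mine, not Linux's):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct window {            /* one memory range declared in _CRS */
        uint64_t base, limit;  /* inclusive bounds */
    };

    /* A BAR must fall entirely inside one of the declared windows,
     * otherwise the OS refuses to claim it. */
    static bool bar_claimable(uint64_t bar_base, uint64_t bar_size,
                              const struct window *wins, size_t nwins)
    {
        uint64_t bar_limit = bar_base + bar_size - 1;

        for (size_t i = 0; i < nwins; i++) {
            if (bar_base >= wins[i].base && bar_limit <= wins[i].limit)
                return true;
        }
        return false;  /* cf. Linux's "no compatible bridge window" */
    }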

I was wondering what would be a good way to solve this? I looked at
some Dell machines to see how they deal with hotplugged PCIe devices
and they just declared all the memory in the _CRS (including RAM).

We could do a hybrid - during bootup make the _CRS have an entry from
the end of RAM to .. the end of the guest physical address space?
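
A sketch of what that boot-time computation could look like; the 4GB
clamp and the address-width parameter are my assumptions (a real
implementation would take the width from CPUID or the toolstack):

    #include <stdint.h>

    #define GB(x) ((uint64_t)(x) << 30)

    /* Hypothetical: one catch-all window from the end of guest RAM to
     * the top of the guest physical address space, so any hotplugged
     * 64-bit BAR lands inside a declared _CRS range. */
    static void hybrid_crs_window(uint64_t ram_end, unsigned phys_bits,
                                  uint64_t *base, uint64_t *limit)
    {
        /* Don't start below 4GB; the 32-bit hole has its own entry. */
        *base  = ram_end > GB(4) ? ram_end : GB(4);
        *limit = (UINT64_C(1) << phys_bits) - 1;
    }

Unlike the Dell approach this leaves RAM out of the window, which seems
safer than declaring ranges the OS already knows as memory.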

Or perhaps add some extra logic between QEMU and the ACPI AML to
expand the window (or perhaps modify the last _CRS entry) when PCIe
devices are hotplugged?
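
One hypothetical shape for that: QEMU keeps the advertised 64-bit
window in a pair of registers that the _CRS method reads back at
evaluation time (via an operation region), and grows them on hotplug
before raising the usual notify. None of these names are existing QEMU
interfaces; this is just the idea:

    #include <stdint.h>

    /* Hypothetical registers the _CRS AML would read back, so the
     * window can grow after boot without rebuilding the DSDT. */
    struct pci_window_regs {
        uint64_t base;
        uint64_t limit;
    };

    /* Hypothetical hotplug hook: widen the advertised window to cover
     * a BAR placed above it. */
    static void grow_window_for_bar(struct pci_window_regs *regs,
                                    uint64_t bar_base, uint64_t bar_size)
    {
        uint64_t bar_limit = bar_base + bar_size - 1;

        if (bar_base < regs->base)
            regs->base = bar_base;
        if (bar_limit > regs->limit)
            regs->limit = bar_limit;
        /* ...then the existing ACPI hotplug notify; whether the OS
         * re-evaluates _CRS at that point is part of the question. */
    }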

I am wondering what folks think is the best way forward?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

