
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu



On Tue, 25 Jul 2017 15:13:17 +0100
Igor Druzhinin <igor.druzhinin@xxxxxxxxxx> wrote:
> >> The algorithm implemented in hvmloader for that is not complicated and
> >> can be moved to libxl easily. What we can do is provision a hole big
> >> enough to include all the initially assigned PCI devices. We can also
> >> account for emulated MMIO regions if necessary. But, to be honest, it
> >> doesn't really matter, since if there is not enough space in the lower
> >> MMIO hole for some BARs, they can easily be relocated to the upper MMIO
> >> hole by hvmloader or by the guest itself (dynamically).
> >>
> >> Igor  
> > [Zhang, Xiong Y] Yes, if we could supply a big enough MMIO hole and
> > not allow hvmloader to relocate BARs, things would be easier. But how
> > could we supply a big enough MMIO hole? a. Statically set the base
> > address of the MMIO hole to 2G/3G. b. Probe all the PCI devices, as
> > hvmloader does, and calculate the MMIO size. But this runs prior to
> > QEMU, so how do we probe the PCI devices?
> 
> It's true that we don't know the space occupied by emulated devices
> before QEMU is started. But QEMU needs to be started with some lower
> MMIO hole size statically assigned.
> 
> One of the possible solutions is to calculate the hole size required to
> include all the assigned pass-through devices and round it up to the
> nearest GB boundary, but not larger than 2GB total. If that's not enough
> to also include all the emulated devices, then some of the PCI devices
> will simply be relocated to the upper MMIO hole.
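
For illustration only, a rough sketch of how such a precalculation could
look on the toolstack side, assuming the assigned devices' BAR sizes are
read from their sysfs "resource" files in dom0. The helper names are made
up for this sketch, it is not actual libxl code, and it ignores per-BAR
alignment:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define GB            (1ULL << 30)
#define MAX_LOW_HOLE  (2ULL * GB)

/* Sum the memory BAR sizes of one device from its sysfs "resource"
 * file; flag bit 0x200 is IORESOURCE_MEM in Linux. */
static uint64_t device_mmio_size(const char *sbdf)
{
    char path[128];
    uint64_t start, end, flags, total = 0;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/resource", sbdf);
    f = fopen(path, "r");
    if (!f)
        return 0;

    while (fscanf(f, "%" SCNx64 " %" SCNx64 " %" SCNx64,
                  &start, &end, &flags) == 3) {
        if ((flags & 0x200) && end >= start)
            total += end - start + 1;
    }
    fclose(f);
    return total;
}

/* Sum the assigned devices' MMIO sizes, round up to the nearest GB and
 * cap at 2GB; anything that doesn't fit is left for relocation to the
 * upper MMIO hole. */
static uint64_t lower_hole_size(const char **sbdfs, unsigned count)
{
    uint64_t total = 0;

    for (unsigned i = 0; i < count; i++)
        total += device_mmio_size(sbdfs[i]);

    total = (total + GB - 1) & ~(GB - 1);
    return total > MAX_LOW_HOLE ? MAX_LOW_HOLE : total;
}

A real implementation would also need to respect BAR alignment and the
RMRR regions that started this thread.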

Not all devices are BAR64-capable, and even those which are may have Option
ROM BARs (mem32 only). There are also 32-bit guests which will find 64-bit
BARs placed above 4GB unacceptable. The low MMIO hole is a precious
resource. One also needs to consider the implications of PCI device
hotplug for the 'static' precalculation approach.
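
For reference, whether a memory BAR can be placed above 4GB at all is
encoded in its low type bits (PCI Local Bus spec, header type 0). A
minimal check might look like this; it is a sketch, not actual hvmloader
code:

#include <stdint.h>
#include <stdbool.h>

/* bar_val is the raw 32-bit BAR value read from PCI config space. */
static bool bar_is_64bit_mmio(uint32_t bar_val)
{
    if (bar_val & 0x1)              /* bit 0 set: I/O space BAR */
        return false;
    return (bar_val & 0x6) == 0x4;  /* bits 2:1 == 10b: 64-bit memory BAR */
}

/* Expansion ROM BARs (config offset 0x30) have no 64-bit form, so they
 * always need space in the low MMIO hole. */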
