Re: [Xen-devel] GPU passthrough performance regression in >4GB vms due to XSA-60 changes
On 05/15/2014 02:32 PM, Jan Beulich wrote:

Yes, indeed it is very possible it has only been working properly by accident before. Thanks, I wasn't aware of the further work in this area in 4.5; will look.

>>>> On 15.05.14 at 11:11, <tomasz.wroblewski@xxxxxxxxx> wrote:
>> Note that I'm not talking about slow performance during the window in
>> which CR0 has caching disabled; it stays slow even after the guest
>> re-enables caching shortly afterwards, since the problem seems to be a
>> side effect of the removed loop that set default EPT memory types on
>> all pfns. Reintroducing the removed loop fixes the problem.
> Doing so is clearly not going to be an option.

Yes, was merely stating how things are.

> To at least harden your suspicion, did you look at the D debug key
> output with and without the change (for that to be useful here you may
> want to add memory type dumping, as was added in -unstable)?
>
> Jan

Yeah, I have dumped the EPT memory types on the affected ranges before and
after the change. Before the change we were getting write-back, and from
debugging, that value originated in mtrr.c:get_mtrr_type() (called by
epte_get_entry_emt()), specifically the "return m->def_type" statement
there, so it seems it was just going off the default because the range was
not covered by any MTRR. After the change, it stays UC.

Not really sure why it only affects 64-bit VMs, but I've just noticed the
PCI BARs for the card are being relocated by hvmloader, per these logs:

(XEN) HVM3: Relocating guest memory for lowmem MMIO space enabled
(XEN) HVM3: Relocating 0xffff pages from 0e0001000 to 14dc00000 for lowmem MMIO hole
(XEN) HVM3: Relocating 0x1 pages from 0e0000000 to 15dbff000 for lowmem MMIO hole

So it might also be related to that.
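For scale, those relocation messages describe 0xffff pages, i.e. 0xffff x 4 KiB, roughly 256 MiB of guest RAM, being moved from just above 0xe0001000 (inside the would-be low-memory MMIO hole) up to 0x14dc00000, which is above the 4 GiB boundary (0x100000000). RAM relocated above 4 GiB in this way only exists in sufficiently large guests, which would fit the observation that the regression shows up only in >4GB VMs.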
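To make the get_mtrr_type() fallback described above concrete, here is a minimal, self-contained user-space sketch of that kind of lookup. It is a hypothetical simplification, not Xen's actual mtrr.c code: the struct layout, helper names, and range values are invented for illustration, and only the def_type fallback behaviour is modelled.

/*
 * Hypothetical, simplified model of an MTRR type lookup -- NOT Xen's
 * actual mtrr.c:get_mtrr_type().  It only illustrates the fallback to
 * the default type ("return m->def_type") discussed above.
 */
#include <stdio.h>
#include <stdint.h>

#define MTRR_TYPE_UNCACHABLE 0x00  /* UC */
#define MTRR_TYPE_WRBACK     0x06  /* WB */

struct var_range {
    uint64_t base, size;
    uint8_t type;
};

struct mtrr_state {
    struct var_range ranges[8];
    unsigned int nr_ranges;
    uint8_t def_type;              /* the fallback in question */
};

static uint8_t get_type(const struct mtrr_state *m, uint64_t pa)
{
    for (unsigned int i = 0; i < m->nr_ranges; i++)
        if (pa >= m->ranges[i].base &&
            pa - m->ranges[i].base < m->ranges[i].size)
            return m->ranges[i].type;
    return m->def_type;            /* range covered by no MTRR */
}

int main(void)
{
    struct mtrr_state m = {
        /* One invented variable range covering RAM below the MMIO hole. */
        .ranges    = { { 0x0, 0xe0000000ULL, MTRR_TYPE_WRBACK } },
        .nr_ranges = 1,
        .def_type  = MTRR_TYPE_WRBACK,
    };

    /* 0x14dc00000 (from the relocation log above) is covered by no
     * variable range, so the lookup falls through to def_type (WB). */
    printf("type(0x14dc00000) = %#x\n",
           (unsigned int)get_type(&m, 0x14dc00000ULL));
    return 0;
}

The behaviour the email reports matches this shape: before the XSA-60 change, epte_get_entry_emt() ended up picking up the write-back default via this kind of def_type fallback for ranges not covered by any MTRR, whereas after the change the affected ranges remain UC.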