Re: Limitations for Running Xen on KVM Arm64
On 31/10/2025 00:20, Mohamed Mediouni wrote:

On 31. Oct 2025, at 00:55, Julien Grall <julien@xxxxxxx> wrote:

Hi Mohamed,

On 30/10/2025 18:33, Mohamed Mediouni wrote:

On 30. Oct 2025, at 14:41, haseeb.ashraf@xxxxxxxxxxx wrote:

Adding @julien@xxxxxxx and replying to the questions he asked over #XenDevel:matrix.org.

> Can you add some details why the implementation cannot be optimized in KVM? Asking because I have never seen such an issue when running Xen on QEMU (without nested virt enabled).

AFAIK, when Xen is run on QEMU without virtualization, the instructions are emulated in QEMU, whereas with KVM they ideally run directly on hardware except in some special cases (those trapped by FGT/CGT). This is one such case: KVM maintains shadow page tables for each VM, so it traps these instructions and emulates them with a callback such as handle_vmalls12e1is(). The way this callback is implemented, it has to iterate over the whole address space and clean up the page tables, which is a costly operation. Regardless of this, it should still be optimized in Xen, as invalidating a selective range would be much better than invalidating the whole 48-bit address space.

> Some details about your platform and use case would be helpful. I am interested to know whether you are using all the features for nested virt.

I am using AWS G4. My use case is to run Xen as a guest hypervisor. Yes, most of the features are enabled, except VHE or those which are disabled by KVM.

Hello,

You mean Graviton4 (for reference to others, from a bare metal instance)? Interesting to see people caring about nested virt there :) - and hopefully using it wasn't too much of a pain for you to deal with.

Xen already sets HCR_EL2.FB. But I believe this only solves the problem where a vCPU is moved to another pCPU. It doesn't solve the problem where two vCPUs from the same VM are sharing the same pCPU. Per the Arm ARM, each CPU has its own private TLBs, so we have to flush between vCPUs of the same domain to avoid translations from vCPU 1 "leaking" to vCPU 2 (they may have conflicting page-tables). KVM has similar logic; see "last_vcpu_ran" and "__kvm_flush_cpu_context()".

That said... they are using "vmalle1" whereas we are using "vmalls12e1". So maybe we can relax it. Not sure if this would make any difference for the performance, though.

Cheers,

--
Julien Grall
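
To illustrate the selective-range point raised earlier in the thread, here is a minimal sketch, in C with AArch64 inline assembly, of what a bounded stage-2 invalidation could look like instead of invalidating the whole address space. This is not code from Xen or KVM: the helper names and the RANGE_FLUSH_MAX cut-off are invented for the example. The sequence is TLBI IPAS2E1IS per page to drop stage-2 entries by IPA, then TLBI VMALLE1IS to drop stage-1 entries derived from them, with a fall-back to the full VMALLS12E1IS when the range is too large to walk page by page.

/*
 * Hedged sketch (not Xen or KVM code) of a bounded stage-2 TLB
 * invalidation on AArch64: invalidate only the pages covering a given
 * IPA range instead of the whole address space. Helper names and the
 * RANGE_FLUSH_MAX cut-off are made up for the example.
 */
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT      12
#define RANGE_FLUSH_MAX (64UL << PAGE_SHIFT)    /* arbitrary cut-off */

static inline void dsb_ish(void) { asm volatile("dsb ish" ::: "memory"); }
static inline void isb(void)     { asm volatile("isb" ::: "memory"); }

/* Flush stage-2 (and derived stage-1) TLB entries for [ipa, ipa + size). */
static void flush_stage2_range(uint64_t ipa, size_t size)
{
    if (size > RANGE_FLUSH_MAX) {
        /* Too large to walk page by page: fall back to the big hammer. */
        dsb_ish();
        asm volatile("tlbi vmalls12e1is" ::: "memory");
        dsb_ish();
        isb();
        return;
    }

    dsb_ish();
    for (uint64_t addr = ipa & ~((1UL << PAGE_SHIFT) - 1);
         addr < ipa + size;
         addr += 1UL << PAGE_SHIFT) {
        /* Operand encodes IPA[47:12] in bits [35:0] (4KB granule). */
        uint64_t arg = addr >> PAGE_SHIFT;
        asm volatile("tlbi ipas2e1is, %0" :: "r"(arg) : "memory");
    }
    dsb_ish();
    /* Drop stage-1 entries derived from the now-stale stage-2 mappings. */
    asm volatile("tlbi vmalle1is" ::: "memory");
    dsb_ish();
    isb();
}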
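
Similarly, a hedged sketch of the "last_vcpu_ran" pattern mentioned above: each pCPU remembers which vCPU of a VM ran on it last and flushes the local guest TLB entries when a different vCPU of the same VM is scheduled in, so translations from the previous vCPU cannot leak into the incoming one. The data layout and function names below are invented; in KVM the equivalent state lives per stage-2 MMU and the flush is done by __kvm_flush_cpu_context().

/*
 * Hedged sketch of the "last vCPU ran" logic described above. Not the
 * KVM or Xen code: the globals and names here are invented, and the
 * real implementations keep this state per VM / per stage-2 MMU.
 */
#define NR_PCPUS 64

/* Which vCPU index last ran on each pCPU; -1 means "none yet". */
static int last_vcpu_ran[NR_PCPUS] = { [0 ... NR_PCPUS - 1] = -1 };

static void local_flush_guest_tlb(void)
{
    /*
     * In the real flow this runs with the guest's VMID/VTTBR installed,
     * so the flush only targets that VM's entries. As noted in the
     * thread, KVM's __kvm_flush_cpu_context() uses the local,
     * stage-1-only VMALLE1, whereas Xen uses VMALLS12E1.
     */
    asm volatile("dsb nshst" ::: "memory");
    asm volatile("tlbi vmalle1" ::: "memory");
    asm volatile("dsb nsh" ::: "memory");
    asm volatile("isb" ::: "memory");
}

/* Called when vCPU vcpu_idx of this VM is about to run on pcpu_id. */
void vcpu_load(int vcpu_idx, int pcpu_id)
{
    if (last_vcpu_ran[pcpu_id] != vcpu_idx) {
        /*
         * A different vCPU of this VM ran here before: flush so its
         * stage-1 translations cannot leak into the incoming vCPU.
         */
        if (last_vcpu_ran[pcpu_id] != -1)
            local_flush_guest_tlb();
        last_vcpu_ran[pcpu_id] = vcpu_idx;
    }
}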