
Re: [Xen-devel] [RFC] xen/arm: Handling cache maintenance instructions by set/way



On 12/11/2017 11:10 AM, Andre Przywara wrote:
> Hi,

Hi Andre,

> But on the other hand we had PoD naturally already in KVM, so this came
> at no cost.
> So I believe it would be worth to investigate what the actual impact is
> on booting a 32-bit kernel, with emulating s/w ops like KVM does (see
> below), but cleaning the *whole VA space*. If this is somewhat
> acceptable (I assume we have no more than 2GB for a typical ARM32
> guest), it might be worth to ignore PoD, at least for now and to solve
> this problem (and the IOMMU consequences).

I am fairly surprised you think I came up with this solution without any investigation. I clearly stated in my first e-mail that Linux is not able to bring up a secondary CPU when the "whole VA space" is flushed.

At the moment, 32-bit Linux has a 1 second timeout to bring up a secondary CPU. Within that second we need to do at least one full flush (I think there may even be a second one). In the case of Xen on Arm32, the domain heap (where domain memory lives) is not permanently mapped in the hypervisor, so you end up creating a temporary mapping for every page-table page and for the final memory. To that, you add the cost of the cache maintenance itself, and then the potential cost of preemption (the vCPU might be scheduled out).
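
To make the cost concrete, here is a rough sketch (not the actual Xen implementation) of what a by-VA clean of a range of guest frames looks like on Arm32. p2m_lookup_mfn() is a made-up placeholder for a real p2m walk; the other helpers are modelled on Xen's map_domain_page()/unmap_domain_page() and clean_and_invalidate_dcache_va_range():

/*
 * Rough sketch only: clean & invalidate the data cache for a range of
 * guest frames by VA. p2m_lookup_mfn() is a hypothetical placeholder.
 */
static void flush_guest_dcache(struct domain *d, gfn_t start, unsigned long nr)
{
    unsigned long i;

    for ( i = 0; i < nr; i++ )
    {
        mfn_t mfn = p2m_lookup_mfn(d, gfn_add(start, i)); /* placeholder */
        void *va;

        if ( mfn_eq(mfn, INVALID_MFN) )
            continue;

        /*
         * On Arm32 the domheap is not permanently mapped, so every
         * single page needs a temporary mapping before it can be
         * cleaned, and an unmap afterwards.
         */
        va = map_domain_page(mfn);
        clean_and_invalidate_dcache_va_range(va, PAGE_SIZE);
        unmap_domain_page(va);

        /*
         * The vCPU may also be scheduled out at any point, eating
         * further into Linux's 1s secondary CPU bring-up timeout.
         */
    }
}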

During my initial investigation, I was not able to boot Dom0 with 512MB of RAM. I tried to optimize the mapping path, but it did not show much improvement overall.

Regarding the IOMMU consequences: S/W ops are not easily virtualizable. If a guest uses them, then that is the price to pay. It is still better than not being able to boot current kernels, or crashing randomly.
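
For reference, the general shape of the trap-and-emulate approach (what KVM does today, not a statement about the final Xen patches) is to set HCR.TSW so that the guest's set/way operations (DC ISW/CSW/CISW) trap to the hypervisor, which can then perform equivalent by-VA maintenance instead. A minimal sketch using Xen's sysreg accessors, error/feature handling omitted:

static void enable_sw_trap(void)
{
    register_t hcr = READ_SYSREG(HCR_EL2);

    /* Trap data cache maintenance by set/way to the hypervisor. */
    WRITE_SYSREG(hcr | HCR_TSW, HCR_EL2);
    isb();
}

Each trapped instruction then ends up on the expensive path sketched above, which is why the cost of flushing the whole guest matters so much for the 32-bit boot path.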

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

