
Re: [Xen-devel] [RFC] xen/arm: Handling cache maintenance instructions by set/way



>>> On 05.12.17 at 19:39, <julien.grall@xxxxxxxxxx> wrote:
> The suggested policy is based on the KVM one:
>       - If we trap an S/W instruction, we enable VM trapping (e.g. 
> HCR_EL2.TVM) to detect the cache being turned on/off, and do a full clean.
>       - We flush the caches both when they are turned on and when they 
> are turned off.
>       - Once the caches are enabled, we stop trapping VM register accesses.
> 
> Doing a full clean will require going through the P2M and flushing the 
> entries one by one. At the moment, all the memory is mapped. As you can 
> imagine, flushing a guest with hundreds of MB of RAM will take a very 
> long time (Linux times out during CPU bring-up).
> 
> Therefore, we need a way to limit the number of entries we need to 
> flush. The suggested solution here is to introduce Populate On Demand 
> (PoD) on Arm.
> 
> The guest would boot with no RAM mapped in the stage-2 page-table. At 
> every prefetch/data abort, the faulting RAM would be mapped, preferably 
> in 2MB chunks, falling back to 4KB. This means that by the time S/W 
> instructions are used, the number of entries mapped would be very 
> limited. However, for safety, the flush should be preemptible.
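The granularity choice described above might look roughly like the following sketch (hypothetical helper and parameter names, not actual Xen code): on an abort, use a 2MB superpage when the aligned 2MB block around the faulting IPA fits entirely inside the guest RAM region, else a single 4KB page.

```c
#include <stdint.h>

#define PAGE_4K  ((uint64_t)4 << 10)
#define PAGE_2M  ((uint64_t)2 << 20)

/* Hypothetical PoD fault-handler helper: pick the stage-2 mapping size
 * for a faulting intermediate physical address (IPA). Prefer a 2MB
 * superpage when the 2MB-aligned block containing the fault lies
 * entirely within the guest RAM region; otherwise fall back to 4KB. */
uint64_t pod_map_size(uint64_t fault_ipa, uint64_t ram_base,
                      uint64_t ram_end)
{
    uint64_t block = fault_ipa & ~(PAGE_2M - 1);

    if (block >= ram_base && block + PAGE_2M <= ram_end)
        return PAGE_2M;
    return PAGE_4K;
}
```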

For my own understanding: Here you suggest using PoD in order
to deal with S/W insn interception.
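The quoted KVM-style policy amounts to a small state machine per vCPU; a minimal sketch, with illustrative names that are not actual Xen or KVM identifiers:

```c
#include <stdbool.h>

/* Hypothetical per-vCPU state for the S/W trapping policy. */
struct vcpu_sw_state {
    bool trap_vm_regs;   /* HCR_EL2.TVM set: trap VM register writes */
    bool caches_on;      /* guest cache enable state (SCTLR_EL1.C) */
    int  full_cleans;    /* number of full cache cleans performed */
};

/* An S/W instruction trapped: start trapping VM register writes and
 * do a full clean (walking the P2M entry by entry). */
void on_sw_trap(struct vcpu_sw_state *v)
{
    v->trap_vm_regs = true;
    v->full_cleans++;
}

/* A trapped SCTLR write: flush when the cache enable bit toggles in
 * either direction; once caches are on, stop trapping VM registers. */
void on_sctlr_write(struct vcpu_sw_state *v, bool caches_on)
{
    if (v->caches_on != caches_on)
        v->full_cleans++;
    v->caches_on = caches_on;
    if (caches_on)
        v->trap_vm_regs = false;
}
```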

> To limit the performance impact, we could introduce a guest option to 
> tell whether the guest will use S/W. If it does plan to use S/W, PoD 
> will be disabled.

Therefore I'm wondering if here you mean "If it doesn't plan to ..."

Independent of this I'm pretty unclear about your conclusion that
there will be only a very limited number of P2M entries at the time
S/W insns would be used by the guest. Are you ignoring potentially
malicious guests for the moment? OTOH you admit that things would
need to be preemptible, so perhaps the argument is that you
simply expect well-behaved guests to only have such a limited
number of P2M entries.
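The preemptible flush mentioned above could be structured roughly as below: clean a bounded batch of entries, then bail out with a restart indication so the caller can handle pending work before resuming. All names and the batch size are illustrative assumptions, not actual Xen code.

```c
#define FLUSH_BATCH 64
#define ERESTART    85   /* illustrative errno-style value */

/* Minimal stand-in for the stage-2 page-table state. */
struct p2m {
    unsigned int next_entry;  /* resume point across preemptions */
    unsigned int nr_entries;
    unsigned int cleaned;
};

/* Stand-in for cleaning the cache for one mapped P2M entry. */
void clean_dcache_entry(struct p2m *p2m, unsigned int idx)
{
    (void)idx;
    p2m->cleaned++;
}

/* Preemptible full clean: process at most FLUSH_BATCH entries, then
 * return -ERESTART so the caller can check for pending work (e.g.
 * softirqs) and restart from where the walk left off. */
int p2m_clean_preemptible(struct p2m *p2m)
{
    unsigned int done = 0;

    while (p2m->next_entry < p2m->nr_entries) {
        clean_dcache_entry(p2m, p2m->next_entry++);
        if (++done >= FLUSH_BATCH)
            return -ERESTART;
    }
    return 0;
}
```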

Am I, btw, understanding correctly that other than on x86 you
intend PoD to not be used for maxmem > memory scenarios, at
least for the time being?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

