
Re: [Xen-devel] [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address





On 07/03/2019 15:17, Julien Grall wrote:
Hi Andrii,

On 07/03/2019 14:34, Andrii Anisov wrote:
On 07.03.19 16:02, Julien Grall wrote:
So I assume you are saying that you prefer not to have the runstate area mapped because it consumes vmap space on arm64. Also, along that thread you mentioned that the guest might change the GVA mapping, which is irrelevant to registration with a physical address.

My reasons for keeping the runstate mapped are the following:
  - Introducing a new interface, we are not burdened with legacy, so we are in a position to impose requirements. In this case: that the runstate area does not cross a page boundary (a minimal check is sketched below).
  - The global mapping used here does not consume vmap space on arm64. It seems to me the x86 folks are OK with mapping as well; at least Roger suggested it from the beginning, so it should be fine for them too.
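For illustration only, here is a minimal, self-contained sketch (plain C, not Xen code) of the kind of page-boundary check such a new interface could impose on the registered guest physical address. PAGE_SIZE and the area size below are illustrative assumptions, not values taken from the tree.

/* Standalone illustration of the "must not cross a page boundary" rule. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
/* Illustrative size; the real interface would use sizeof(struct vcpu_runstate_info). */
#define RUNSTATE_AREA_SIZE 56u

/* Accept the guest physical address only if the whole area fits in one page. */
static int runstate_area_valid(uint64_t gpa)
{
    uint64_t first_page = gpa / PAGE_SIZE;
    uint64_t last_page  = (gpa + RUNSTATE_AREA_SIZE - 1) / PAGE_SIZE;

    return first_page == last_page;
}

int main(void)
{
    printf("%d\n", runstate_area_valid(0x1000));      /* 1: fits in one page */
    printf("%d\n", runstate_area_valid(0x2000 - 8));  /* 0: crosses a boundary */
    return 0;
}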
You left arm32 out of your equations here...
Yes, I left arm32 aside.

Why? Arm32 is equally supported alongside Arm64.


  - If the domain maps the runstate with a physical address, it cannot change the mapping.

This is not entirely correct. The domain cannot change the mapping under our feet, but it can still change it via the hypercall. There is nothing preventing that with either the current hypercall or the one you propose.
Could you please describe the scenario with more details and the interface used for it?

What scenario? You just need to read the implementation of the current hypercall and see that there is nothing preventing the call from being made twice.

When you are designing a new hypercall you have to think about how a guest can misuse it (yes, I said misuse, not use!). Think about a guest with two vCPUs: vCPU A is constantly modifying the runstate for vCPU B. What could happen if the hypervisor is in the middle of context-switching vCPU B?

As I pointed

Hmmm, this is a left-over of a sentence I was thinking of adding but dropped.

Another use case to think about is multiple vCPUs trying to register a runstate concurrently for the same vCPU.
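To make the failure mode concrete, here is a minimal, self-contained sketch (plain C with pthreads, not Xen code) of the serialisation such a registration hypercall would need: re-registration must not race with the context-switch path that writes the area. The struct, lock, and function names below are purely illustrative.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins; none of these names come from the Xen tree. */
struct demo_vcpu {
    pthread_mutex_t runstate_lock;    /* serialises register vs. update */
    volatile uint64_t *runstate_map;  /* currently registered area */
};

/* "Hypercall" path: (re)register the runstate area of a vCPU. */
static void register_runstate(struct demo_vcpu *v, volatile uint64_t *area)
{
    pthread_mutex_lock(&v->runstate_lock);
    v->runstate_map = area;           /* cannot change under the updater's feet */
    pthread_mutex_unlock(&v->runstate_lock);
}

/* "Context switch" path: update whatever area is currently registered. */
static void update_runstate(struct demo_vcpu *v, uint64_t value)
{
    pthread_mutex_lock(&v->runstate_lock);
    if ( v->runstate_map )
        *v->runstate_map = value;
    pthread_mutex_unlock(&v->runstate_lock);
}

int main(void)
{
    static uint64_t area_a, area_b;
    struct demo_vcpu v = { .runstate_map = &area_a };

    pthread_mutex_init(&v.runstate_lock, NULL);

    update_runstate(&v, 1);           /* writes area_a */
    register_runstate(&v, &area_b);   /* e.g. vCPU A re-registers for vCPU B */
    update_runstate(&v, 2);           /* writes area_b, never a stale mapping */

    printf("area_a=%llu area_b=%llu\n",
           (unsigned long long)area_a, (unsigned long long)area_b);
    return 0;
}

The same serialisation (or pausing the target vCPU while the area is switched) would also cover both scenarios above.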


Also, vcpu_info needs protection from this. Do you agree?

vcpu_info cannot be registered twice thanks to the following check in map_vcpu_info:
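     /* A second call fails here: vcpu_info_mfn is set by the first successful mapping. */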
     if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
         return -EINVAL;


Well, the numbers you showed in the other thread didn't show any improvement at all... So please explain why we should call map_domain_page_global() here and use more vmap on arm32
I don't expect vmap to be a practical problem for arm32-based systems.

Well vmap is quite small on Arm. So why should we use more of it if...

With the current implementation, the numbers are equal to those I have for runstate mapping on access.

it does not make a real improvement to the context switch? But I recall you said the interrupt latency was worse when keeping the runstate mapped (7000ns vs 7900ns). You also saw a performance drop when using the glmark2 benchmark. So how come you can say they are equal? What changed?

In other words, I don't want to keep things mapped in Xen if we can achieve similar performance with "mapping"/"unmapping" at context switch.

But I'm not sure my test setup is able to distinguish the difference.

Well, there is a lot of stuff happening during a context switch, so the benefit of keeping the mapping is probably lost in the noise.


  - IMHO, this implementation is simpler and cleaner than what I have for runstate mapping on access.

Did you implement it using access_guest_memory_by_ipa?
Not exactly; access_guest_memory_by_ipa() has no implementation for x86, but it is built around that code.

For HVM, the equivalent function is hvm_copy_to_guest_phys. I don't know what the interface would be for PV. Roger, any idea?
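For context, here is a self-contained sketch (plain C, not Xen code) of the page-by-page "copy to a guest physical address on each update" pattern being discussed. lookup_page() and the flat guest_ram[] array are stand-ins for the real per-architecture translation (access_guest_memory_by_ipa on Arm, hvm_copy_to_guest_phys for HVM on x86).

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NR_PAGES  4u

static uint8_t guest_ram[NR_PAGES * PAGE_SIZE];

/* Stand-in for the per-page translation the real helpers perform. */
static uint8_t *lookup_page(uint64_t gpa)
{
    uint64_t pfn = gpa / PAGE_SIZE;
    return pfn < NR_PAGES ? &guest_ram[pfn * PAGE_SIZE] : NULL;
}

/* Copy buf to guest "physical" address gpa, one page at a time. */
static int copy_to_guest_phys(uint64_t gpa, const void *buf, size_t size)
{
    const uint8_t *src = buf;

    while ( size )
    {
        uint8_t *page = lookup_page(gpa);
        size_t off = gpa & (PAGE_SIZE - 1);
        size_t chunk = (PAGE_SIZE - off < size) ? PAGE_SIZE - off : size;

        if ( !page )
            return -1;

        memcpy(page + off, src, chunk);
        gpa += chunk; src += chunk; size -= chunk;
    }
    return 0;
}

int main(void)
{
    uint64_t runstate[7] = { 1, 2, 3, 4, 5, 6, 7 };

    /* Works even if the area crosses a page boundary. */
    if ( copy_to_guest_phys(PAGE_SIZE - 8, runstate, sizeof(runstate)) == 0 )
        printf("copied %zu bytes\n", sizeof(runstate));
    return 0;
}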

Cheers,


--
Julien Grall

