
Re: [Xen-devel] [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address



Hello Julien, Guys,

Sorry for the delayed answer. I caught a pretty nasty flu after the last long 
weekend, which made me completely unavailable last week :(

On 07.03.19 17:17, Julien Grall wrote:
Why? Arm32 is just as supported as Arm64.
Yep, I believe that.
But I do not expect anyone to build arm32-based systems with many vCPUs.
My impression is that arm32 does not target server applications. What's left? 
Embedded with 4, or OK, up to 8 VMs, 8 vCPUs each. How much would the 
runstate mappings cost?
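
A back-of-the-envelope estimate (assuming the guest-supplied area may straddle 
a page boundary, so up to two 4 KiB pages are kept mapped per vCPU):

    8 VMs x 8 vCPUs             = 64 vCPUs
    64 vCPUs x 2 pages per vCPU = 128 pages, worst case
    128 pages x 4 KiB           = 512 KiB of mapped address space

That is the worst case I have in mind for such an embedded setup.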

What scenario? You just need to read the implementation of the current 
hypercall and see there is nothing preventing the call from being done twice.
Ah, OK, you mean that kind of race. Yes, it looks like I've overlooked that 
scenario.

When you are designing a new hypercall you have to think how a guest can misuse 
it (yes I said misuse not use!). Think about a guest with two vCPUs. vCPU A is 
constantly modifying the runstate for vCPU B. What could happen if the 
hypervisor is in the middle of context switch vCPU B?
The effects I can imagine might differ:
 - the new runstate area might be updated on Arm64, maybe partially and 
concurrently (IIRC, we have all of RAM permanently mapped in Xen)
 - a paging fault might happen on Arm32
 - something similar or different might happen on x86 PV or HVM

Yet, all of them are outside the design and quite unexpected.
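
To make the needed protection concrete, a guard along the following lines 
would serialise the registration against a context switch of the target vCPU. 
This is only a minimal sketch of the idea, not the actual patch: 
vcpu_pause()/vcpu_unpause() are existing Xen helpers, map_runstate_area() is a 
made-up name.

    /* Sketch: register a runstate area by guest physical address.
     * Pausing the target vCPU guarantees it is not in the middle of a
     * context switch while its runstate mapping is being replaced. */
    static int register_runstate_phys(struct vcpu *v, paddr_t gpa)
    {
        int rc;

        if ( v != current )
            vcpu_pause(v);                     /* existing Xen helper */

        rc = map_runstate_area(v, gpa);        /* hypothetical helper */

        if ( v != current )
            vcpu_unpause(v);

        return rc;
    }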
As I pointed out before, vcpu_info also needs protection from this. Do you agree?

vcpu_info cannot be registered twice thanks to the following check in map_vcpu_info:
     if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
         return -EINVAL;
Right you are.
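
An analogous guard for the new registration could be as simple as the 
following; this is just to show the idea, and runstate_guest_gfn is an assumed 
field, not the actual patch.

    /* Sketch, mirroring the vcpu_info check above: refuse a second
     * registration of the runstate area. */
    if ( !gfn_eq(v->runstate_guest_gfn, INVALID_GFN) )
        return -EINVAL;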

Well, the numbers you showed in the other thread didn't show any improvement at 
all... So please explain why we should call map_domain_page_global() here and 
use more vmap on arm32
I'm not expecting vmap to be a practical problem for an arm32-based system.
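
For reference, keeping the area mapped boils down to something like the 
following at registration time. It is a sketch under my assumptions: 
map_domain_page_global() is the real Xen interface, runstate_guest_va is a 
made-up field.

    /* Sketch: map the runstate page once at registration time, so the
     * context switch path can write to it directly.  On arm32 every such
     * mapping consumes vmap space for the lifetime of the vCPU. */
    static int map_runstate_global(struct vcpu *v, mfn_t mfn, paddr_t gpa)
    {
        void *va = map_domain_page_global(mfn);   /* real Xen interface */

        if ( !va )
            return -ENOMEM;

        v->runstate_guest_va = va + (gpa & ~PAGE_MASK);   /* assumed field */
        return 0;
    }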

Well vmap is quite small on Arm. So why should we use more of it if...

With the current implementation, the numbers are equal to those I have for 
runstate mapping on access.

it does not make a real improvement to the context switch? But I recall you 
said the interrupt latency was worse with keeping the runstate mapped (7000ns 
vs 7900ns).
Yes, for Roger's patch.

You also saw a performance drop when using the glmark2 benchmark.
Yes, I did see it with Roger's patch. But with mine, the numbers are slightly 
better (~1%) with the runstate kept mapped.
Also, introducing more race-prevention code will have its own impact.

So how come you can say they are equal? What did change?
Nothing seems to have changed in the context switch part, but the numbers with 
TBM and glmark2 differ. I do not understand why that happens, just as I do not 
understand why TBM showed me a latency increase where I expected a noticeable 
reduction.

In other words, I don't want to keep things mapped in Xen if we can achieve similar performance 
with "mapping"/"unmapping" at context switch.
I understand that clearly.

But I'm not sure my test setup is able to distinguish the difference.

Well, there is a lot of stuff happening during a context switch. So the benefit 
of keeping the mapping is probably lost in the noise.
Finally, we received the whole set for the Lauterbach tracer. The last adapter 
to put things together arrived last week. So I really hope I will be able to 
get trustworthy measurements of those subtle things nested in the context switch.

  - IMHO, this implementation is simpler and cleaner than what I have for 
runstate mapping on access.

Did you implement it using access_guest_memory_by_ipa?
Not exactly; access_guest_memory_by_ipa() has no implementation for x86, but it 
is built around that code.

For the HVM, the equivalent function is hvm_copy_to_guest_phys. I don't know 
what would be the interface for PV. Roger, any idea?
I will turn to the map-on-access implementation as well, to have options and be 
able to compare.
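
For illustration, the copy-based update at context switch could look roughly 
like the following. It is a sketch under my assumptions: 
access_guest_memory_by_ipa() and hvm_copy_to_guest_phys() are the existing 
helpers mentioned above, runstate_guest_gpa is a made-up field.

    /* Sketch: push the runstate to the guest-supplied physical address,
     * mapping the target page(s) only for the duration of the copy. */
    static void update_runstate_by_gpa(struct vcpu *v)
    {
    #ifdef CONFIG_ARM
        access_guest_memory_by_ipa(v->domain, v->runstate_guest_gpa,
                                   &v->runstate, sizeof(v->runstate),
                                   true /* is_write */);
    #else /* x86 HVM, per Julien's suggestion */
        hvm_copy_to_guest_phys(v->runstate_guest_gpa, &v->runstate,
                               sizeof(v->runstate), v);
    #endif
    }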

--
Sincerely,
Andrii Anisov.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

