Re: [Xen-devel] performance issue when turning on apicv
Hi,
I'm John's colleague. We looked into the details of the tracing data and found
that the number of MSR_IA32_APICTMICT_MSR write events is much higher when
apicv is enabled (roughly 8x the count seen with apicv disabled).
Below are the details:
EXIT_REASON_MSR_WRITE

apicv on:
MSR= 0x00000838 (MSR_IA32_APICTMICT_MSR)             count= 111480
MSR= 0x00000830 (x2APIC Interrupt Command Register)  count= 350
Total count = 111830

apicv off:
MSR= 0x00000838 (MSR_IA32_APICTMICT_MSR)             count= 13595
MSR= 0x00000830 (x2APIC Interrupt Command Register)  count= 254
MSR= 0x0000080b (MSR_IA32_APICEOI_MSR)               count= 215760
Total count = 229609
If there is anything else you need, please let me know.
Thanks.
Regards,
Yifei Jiang
-----Original Message-----
Date: Thu, 18 Jun 2015 09:20:39 +0100
From: "Jan Beulich" <JBeulich@xxxxxxxx>
To: "Liuqiming (John)" <john.liuqiming@xxxxxxxxxx>
Cc: Yang Z Zhang <yang.z.zhang@xxxxxxxxx>,
"xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>,
"peter.huangpeng@xxxxxxxxxx" <peter.huangpeng@xxxxxxxxxx>
Subject: Re: [Xen-devel] performace issue when turn on apicv
Message-ID: <55829B770200007800086752@xxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=US-ASCII
>>> On 18.06.15 at 10:02, <john.liuqiming@xxxxxxxxxx> wrote:
> When using FIO to test the performance of an SSD passed through to a VM, the
> results show that when apicv is on, each EXIT_REASON_MSR_WRITE event takes
> more time than when apicv is off.
>
> Following is the xentrace result:
>
> apicv on:
>
> VMExitCode    VMExitReason                      VMExitCnt  VMExitTicks  VMExitTicks/VMExitCnt
> 0x0000000001  EXIT_REASON_EXTERNAL_INTERRUPT       270334   2730912532           10101.99432
> 0x0000000012  EXIT_REASON_VMCALL                       20       438736               21936.8
> 0x000000001c  EXIT_REASON_CR_ACCESS                381340   1096174160           2874.532333
> 0x000000001e  EXIT_REASON_IO_INSTRUCTION              413     32958356           79802.31477
> 0x0000000020  EXIT_REASON_MSR_WRITE                111830    818317724           7317.515193
> 0x000000002d  EXIT_REASON_EOI_INDUCED               58944    234914864           3985.390608
> 0x0000000030  EXIT_REASON_EPT_VIOLATION                10       298368               29836.8
>
> Total                                              822891   4914014740
>
> apicv off:
>
> VMExitCode    VMExitReason                      VMExitCnt  VMExitTicks  VMExitTicks/VMExitCnt
> 0x0000000001  EXIT_REASON_EXTERNAL_INTERRUPT       237100   2419717824           10205.47374
> 0x0000000007  EXIT_REASON_PENDING_VIRT_INTR           792      2324824           2935.383838
> 0x0000000012  EXIT_REASON_VMCALL                       19       415168           21850.94737
> 0x000000001c  EXIT_REASON_CR_ACCESS                406848   1075393292           2643.231113
> 0x000000001e  EXIT_REASON_IO_INSTRUCTION              413     39433068           95479.58354
> 0x000000001f  EXIT_REASON_MSR_READ                     48       150528                  3136
> 0x0000000020  EXIT_REASON_MSR_WRITE                229609   1004000084           4372.651264
> 0x0000000030  EXIT_REASON_EPT_VIOLATION                10       249172               24917.2
>
> Total                                              874839   4541683960
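
(The VMExitTicks/VMExitCnt column is simply the per-exit average; a quick
sketch that recomputes it, using the apicv-on MSR_WRITE and EOI_INDUCED rows
from the table above as sample input:)

# Recompute VMExitTicks/VMExitCnt from (count, ticks) pairs; the sample
# figures are the apicv-on rows quoted above.
exits = {
    "EXIT_REASON_MSR_WRITE":   (111830, 818317724),
    "EXIT_REASON_EOI_INDUCED": (58944, 234914864),
}

for reason, (cnt, ticks) in exits.items():
    # e.g. 818317724 / 111830 = 7317.515..., matching the table
    print("%-28s %8d %12d %14.6f" % (reason, cnt, ticks, ticks / cnt))
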
And did you drill into _which_ MSR(s) are requiring this much longer to
have their writes handled? After all, that's the relevant thing, provided
the increase of this indeed has anything to do with the performance
issue you're seeing (the absolute increase of 200M ticks there doesn't
mean much for the performance effect without knowing what the total
execution time was).
Apart from that I notice that the EXIT_REASON_EOI_INDUCED handling
also adds about the same number of ticks...
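
(For illustration, the relevant absolute deltas can be read straight off the
two tables quoted above; this is just arithmetic on those numbers and, as
noted, it still says nothing about impact without the total execution time:)

# Tick deltas taken directly from the two xentrace tables above.
msr_write_on, msr_write_off = 818317724, 1004000084   # EXIT_REASON_MSR_WRITE ticks
eoi_induced_on              = 234914864               # EXIT_REASON_EOI_INDUCED (none with apicv off)
total_on, total_off         = 4914014740, 4541683960  # sum of all exit ticks

print("MSR_WRITE ticks, off - on :", msr_write_off - msr_write_on)   # 185682360
print("EOI_INDUCED ticks, on     :", eoi_induced_on)                 # 234914864
print("Total exit ticks, on - off:", total_on - total_off)           # 372330780
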
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel