
Re: [Xen-devel] lock in vhpet



> -----Original Message-----
> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
> Sent: Wednesday, April 25, 2012 9:40 AM
> To: Zhang, Yang Z
> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
> Subject: RE: [Xen-devel] lock in vhpet
> 
> >> -----Original Message-----
> >> From: Tim Deegan [mailto:tim@xxxxxxx]
> >> Sent: Tuesday, April 24, 2012 5:17 PM
> >> To: Zhang, Yang Z
> >> Cc: andres@xxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir
> >> Fraser
> >> Subject: Re: [Xen-devel] lock in vhpet
> >>
> >> At 08:58 +0000 on 24 Apr (1335257909), Zhang, Yang Z wrote:
> >> > > -----Original Message-----
> >> > > From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
> >> > > Sent: Tuesday, April 24, 2012 1:19 AM
> >> > >
> >> > > Let me know if any of this helps
> >> > No, it does not work.
> >>
> >> Do you mean that it doesn't help with the CPU overhead, or that it's
> >> broken in some other way?
> >>
> > It cannot help with the CPU overhead
> 
> Yang, is there any further information you can provide? A rough idea of where
> vcpus are spending time spinning for the p2m lock would be tremendously
> useful.
> 
I am doing further investigation and hope to get more useful information. 
But actually, the first changeset that introduced this issue is 24770. When win8 
boots with the HPET enabled, it uses the HPET as its time source, which generates 
lots of HPET accesses and therefore EPT violations. In the EPT violation handler, 
we call get_gfn_type_access to get the mfn. Changeset 24770 introduced the 
gfn_lock for p2m lookups, and that is when the issue appears. After I removed 
the gfn_lock, the issue went away. But on the latest xen, even with this lock 
removed, it still shows high cpu utilization.
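
To show the pattern more concretely, here is a minimal user-space sketch
(plain C with pthreads, not Xen code; the names hpet_gfn_lock, lookup_hpet_mfn
and the constants are made up for illustration). Every "vcpu" thread faults on
the same HPET page and so spins on the same per-gfn lock around its lookup,
which is the serialization described above:

/*
 * User-space sketch only -- NOT Xen code.  It mimics the pattern described
 * above: every "vcpu" that faults on the HPET page spins on the same
 * per-gfn lock around its p2m lookup, so the lookups serialize and the
 * waiters burn CPU while spinning.
 */
#include <pthread.h>
#include <stdio.h>

#define VCPUS     8
#define ACCESSES  100000

/* Stand-in for the per-gfn (p2m) lock taken around the lookup. */
static pthread_spinlock_t hpet_gfn_lock;
static volatile unsigned long fake_mfn = 0x1234;

/* Stand-in for the locked lookup done in the EPT violation handler. */
static unsigned long lookup_hpet_mfn(void)
{
    unsigned long mfn;

    pthread_spin_lock(&hpet_gfn_lock);    /* "gfn_lock" */
    mfn = fake_mfn;                       /* the actual p2m walk */
    pthread_spin_unlock(&hpet_gfn_lock);  /* "put_gfn" */

    return mfn;
}

static void *vcpu_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < ACCESSES; i++)
        (void)lookup_hpet_mfn();          /* one fault per HPET register access */
    return NULL;
}

int main(void)
{
    pthread_t t[VCPUS];

    pthread_spin_init(&hpet_gfn_lock, PTHREAD_PROCESS_PRIVATE);

    for (int i = 0; i < VCPUS; i++)
        pthread_create(&t[i], NULL, vcpu_thread, NULL);
    for (int i = 0; i < VCPUS; i++)
        pthread_join(t[i], NULL);

    printf("%d vcpus x %d HPET faults, all serialized on one lock\n",
           VCPUS, ACCESSES);
    return 0;
}

It builds with gcc -pthread; as the vcpu count grows, most of the time goes
into spinning on the single lock rather than into the lookup itself.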

yang



 

