
Re: [Xen-devel] lock in vhpet


  • To: "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Tue, 24 Apr 2012 19:42:05 -0700
  • Cc: Keir Fraser <keir@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Wed, 25 Apr 2012 02:42:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

>> -----Original Message-----
>> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> Sent: Wednesday, April 25, 2012 10:31 AM
>> To: Zhang, Yang Z
>> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> Subject: RE: [Xen-devel] lock in vhpet
>>
>> >
>> >> -----Original Message-----
>> >> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> Sent: Wednesday, April 25, 2012 9:40 AM
>> >> To: Zhang, Yang Z
>> >> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> >> Subject: RE: [Xen-devel] lock in vhpet
>> >>
>> >> >> -----Original Message-----
>> >> >> From: Tim Deegan [mailto:tim@xxxxxxx]
>> >> >> Sent: Tuesday, April 24, 2012 5:17 PM
>> >> >> To: Zhang, Yang Z
>> >> >> Cc: andres@xxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir
>> >> >> Fraser
>> >> >> Subject: Re: [Xen-devel] lock in vhpet
>> >> >>
>> >> >> At 08:58 +0000 on 24 Apr (1335257909), Zhang, Yang Z wrote:
>> >> >> > > -----Original Message-----
>> >> >> > > From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> >> > > Sent: Tuesday, April 24, 2012 1:19 AM
>> >> >> > >
>> >> >> > > Let me know if any of this helps
>> >> >> > No, it does not work.
>> >> >>
>> >> >> Do you mean that it doesn't help with the CPU overhead, or that
>> >> >> it's broken in some other way?
>> >> >>
>> >> > It cannot help with the CPU overhead
>> >>
>> >> Yang, is there any further information you can provide? A rough idea
>> >> of where vcpus are spending time spinning for the p2m lock would be
>> >> tremendously useful.
>> >>
>> > I am doing further investigation, and hope to get more useful
>> > information.
>>
>> Thanks, looking forward to that.
>>
>> > But actually, the first changeset that introduced this issue is
>> > 24770. When Win8 is booting with HPET enabled, it uses the HPET as
>> > its time source, which generates lots of HPET accesses and EPT
>> > violations. In the EPT violation handler, it calls
>> > get_gfn_type_access to get the mfn. Changeset 24770 introduced the
>> > gfn_lock for p2m lookups, and that is when the issue appears. After
>> > I removed the gfn_lock, the issue went away. But in the latest Xen,
>> > even with this lock removed, it still shows high CPU utilization.
>> >
>>
>> It would seem then that even the briefest lock-protected critical
>> section would cause this? In the mmio case, the p2m lock taken in the
>> hap fault handler is held during the actual lookup, and for a couple
>> of branch instructions afterwards.
>>
>> In the latest Xen, with the lock removed from get_gfn, on which lock
>> is the time spent?
> Still the p2m_lock.

How are you removing the lock from get_gfn?

The p2m lock is taken on a few specific code paths outside of get_gfn
(change type of an entry, add a new p2m entry, setup and teardown), and
I'm surprised any of those code paths is being used by the hpet mmio
handler.

Andres

>
> yang
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

