xen-devel

RE: [Xen-devel] Instability with Xen, interrupt routing frozen, HPET broadcast

To: "Langsdorf, Mark" <mark.langsdorf@xxxxxxx>
Subject: RE: [Xen-devel] Instability with Xen, interrupt routing frozen, HPET broadcast
From: "Wei, Gang" <gang.wei@xxxxxxxxx>
Date: Thu, 5 May 2011 14:27:10 +0800
Accept-language: zh-CN, en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Wei, Gang" <gang.wei@xxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Delivery-date: Wed, 04 May 2011 23:29:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C872CBF75AC4BE4093DF425DEA49BA0809C63E5191@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C872CBF75AC4BE4093DF425DEA49BA0809C63E5191@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcwKqwvW3Ga87kRkRM2m+5EkjC6gqQAQQVtA
Thread-topic: [Xen-devel] Instability with Xen, interrupt routing frozen, HPET broadcast
Langsdorf, Mark wrote on 2011-05-05:
> On Thu, 30 Sep 2010 14:02:34 +0800, gang.wei@xxxxxxxxx wrote:
>> I am the original developer of HPET broadcast code.
>> 
>> First of all, to disable HPET broadcast, no additional patch is required.
>> Simply add the option "cpuidle=off" or "max_cstate=1" to the Xen
>> command line in /boot/grub/grub.conf.
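
For reference, a grub.conf stanza with one of these options on the Xen
line might look roughly like the following (the xen.gz, kernel, and
initrd file names are placeholders for whatever the system actually
boots; max_cstate=1 would go in the same place as cpuidle=off):

    title Xen
        root (hd0,0)
        kernel /xen.gz cpuidle=off
        module /vmlinuz-2.6.32-xen ro root=/dev/sda1
        module /initrd-2.6.32-xen.img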
>> 
>> Second, I noticed that the issue only occurs on pre-Nehalem server
>> processors. I will check whether I can reproduce it.
>> 
>> On , xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
>>> Maybe you can try disabling pirq_set_affinity with the following
>>> patch. pirq_set_affinity may trigger IRQ migration in the
>>> hypervisor, and the IRQ migration logic for (especially shared)
>>> level-triggered IOAPIC IRQs is not well tested, because it had no
>>> users before. Since pirq_set_affinity was introduced in #Cset21625,
>>> that logic is exercised frequently whenever vCPU migration occurs,
>>> so I suspect it may be exposing the issue you met.
>>> Besides, there is a bug in the event driver that is fixed in the
>>> latest pv_ops dom0; it seems the dom0 you are using doesn't include
>>> the fix. This bug may result in lost events in dom0 and eventually
>>> hang dom0. To work around it, you can disable irqbalance in dom0.
>>> Good luck!
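
As a sketch of that workaround on a typical dom0 of that era (the
init-script and chkconfig/update-rc.d commands vary by distribution):

    # stop the running daemon now
    /etc/init.d/irqbalance stop
    # keep it from starting at boot (RHEL-style; Debian derivatives
    # would use "update-rc.d irqbalance disable" instead)
    chkconfig irqbalance off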
> 
> Andreas Kinzler reported seeing soft-locks and hard-locks on Xen back
> in September 2010, associated with HPET broadcast.
> 
> I'm seeing similar issues.  If I disable C-states as Jimmy suggests
> above, the problem goes away.  If I set the clocksource to pit, the
> problem also goes away.  It may also go away if I set the clocksource
> to pmtimer/acpi, or if I remove HPET from the list of available
> platform timers.
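
Assuming the clocksource here means Xen's own platform-timer
selection rather than dom0's, which is exactly the ambiguity raised
in the reply below, the corresponding hypervisor boot line would be
something like:

    kernel /xen.gz clocksource=pit

with pit swapped for hpet or acpi to test the other platform timers.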
> 
> Did this issue ever get resolved?  Is there a better solution than
> using pit as a clocksource?  I'd really prefer not to disable
> C-states, as the hardware I'm using gets significant performance and
> performance-per-watt benefits from being able to enter C2.

We could not reproduce it, so there is no specific fix for it so far. Have you 
tried the latest upstream?

When you mention the clocksource, do you mean the clocksource in dom0 or in 
Xen? I do think the HPET broadcast code has nothing to do with the clocksource 
choice.
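
A quick way to tell the two apart, assuming a standard pv_ops dom0 and
the xm toolstack: dom0's clocksource is visible at runtime through
sysfs, while Xen's platform timer is chosen at boot and reported in the
hypervisor log.

    # dom0 (Linux) clocksource
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    # Xen platform timer, as selected by the clocksource= boot option
    xm dmesg | grep -i 'platform timer'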

Jimmy



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
