
RE: [Xen-devel] Instability with Xen, interrupt routing frozen, HPET broadcast



On Thu, 30 Sep 2010 14:02:34 +0800, gang.wei@xxxxxxxxx wrote:
> I am the original developer of HPET broadcast code. 
>
> First of all, to disable HPET broadcast, no additional patch is required. 
> Simply add the option "cpuidle=off" or "max_cstate=1" to the Xen command line 
> in /boot/grub/grub.conf. 
> 
> Second, I noticed that the issue only occurs on pre-Nehalem server processors. 
> I will check whether I can reproduce it. 
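For reference, the workaround above is a one-line change to the Xen stanza in grub.conf. A minimal sketch of a GRUB legacy entry follows; the disk paths and kernel version are placeholders, not taken from any actual setup:

```shell
# /boot/grub/grub.conf (GRUB legacy) -- paths and versions are placeholders
title Xen (HPET broadcast disabled)
    root (hd0,0)
    # Disable cpuidle entirely, so the HPET broadcast path is never used:
    kernel /boot/xen.gz cpuidle=off
    # ...or, alternatively, keep cpuidle but cap the deepest C-state at C1:
    # kernel /boot/xen.gz max_cstate=1
    module /boot/vmlinuz-2.6.32-xen root=/dev/sda1 ro
    module /boot/initrd-2.6.32-xen.img
```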

> On , xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
> > Perhaps you can disable pirq_set_affinity with the following
> > patch and see if that helps. pirq_set_affinity may trigger IRQ
> > migration in the hypervisor, and the IRQ migration logic for
> > (especially shared) level-triggered IOAPIC IRQs is not well
> > tested, because it had no users before. After the introduction
> > of pirq_set_affinity in #Cset21625, that logic is exercised
> > frequently whenever vCPU migration occurs, so I suspect it may
> > expose the issue you met.
> > Besides, there is a bug in the event driver which is fixed in
> > the latest pv_ops dom0; it seems the dom0 you are using doesn't
> > include the fix. This bug may result in lost events in dom0
> > and eventually cause dom0 to hang. To work around it, you
> > can disable irqbalance in dom0. Good luck!
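For anyone who wants to try the irqbalance workaround mentioned above, on a SysV-init dom0 it amounts to something like the following (init-script and chkconfig usage assumed; adjust for your distribution):

```shell
# Stop the running irqbalance daemon in dom0 (SysV init assumed):
/etc/init.d/irqbalance stop
# Prevent it from starting again on the next boot:
chkconfig irqbalance off
```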

Andreas Kinzler reported seeing soft-locks and hard-locks on Xen
back in September 2010, associated with HPET broadcast.

I'm seeing similar issues.  If I disable C-states as Jimmy suggests
above, the problem goes away.  If I set the clocksource to pit, the
problem goes away.  It may also go away if I set the clocksource to
pmtimer/acpi, or if I remove HPET from the list of available platform
timers.
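For comparison, the variants described above all reduce to Xen command-line changes along these lines (a sketch only; the clocksource= and cpuidle= option names are as documented for Xen of this era, and the xen.gz path is a placeholder):

```shell
# Xen command-line alternatives in grub.conf, tried one at a time:
kernel /boot/xen.gz clocksource=pit    # problem goes away
kernel /boot/xen.gz clocksource=acpi   # pmtimer; may also avoid the problem
kernel /boot/xen.gz cpuidle=off        # disables C-states; problem goes away
```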

Did this issue ever get resolved?  Is there a better solution than
using pit as a clocksource?  I'd really prefer not to disable
C-states, as the hardware I'm using gets significant performance
and performance/watt benefits from being able to enter C2.

--Mark Langsdorf
Operating System Research Center
AMD


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

