
Re: [Xen-devel] BUG: scheduling while atomic: xenwatch



On 15 July 2011 01:46, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
>
> For the bugs you outlined, you might want to try 4.1.1 just to double-check.
>
> I've had some strange issues with Xen-Unstable that I haven't tracked down.

I remember trying 4.1.1 before (that's the released version, right?),
but PCI passthrough worked very badly (frequent BSODs, and the
passed-through Intel audio would go crazy after a while). I'll give it
another try on my next reboot, though.

>>
>>
>> There are still some weird quirks around:
>> 1. HVM Windows 7 with PCI passthrough refuses to shut down; somehow
>> qemu treats it as a domain reboot. If I do an xl destroy, the whole
>> system reboots (not sure how I can find out what happens, though,
>> since my mainboard does not have a hardware serial port. Can xen log
>> to a file?)

Just finished some testing. It seems the system reboots whenever that
particular HVM is shut down for a second time. So the system reboot and
the HVM not shutting down are (maybe) two separate problems.

>
> You can buy a PCI/PCIe serial card. That is what I am using for the
> boxes that don't come with a serial port.

Alright, just purchased one off eBay.
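Once it arrives, the plan is to point Xen and Dom0 at it with something
along these lines in /etc/default/grub (just a sketch; for a PCIe card
the port/IRQ may need to be spelled out in the com1= option, and the
exact variable names depend on the grub scripts in use):

GRUB_CMDLINE_XEN="com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all"
GRUB_CMDLINE_LINUX="console=hvc0 console=tty0"

followed by update-grub and a reboot.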

>>
>> 2. With 16G of memory and Dom0 memory set to 1G, trying to start the
>> above 8G Windows 7 HVM while any other VM is running (I tried it with
>> one VM using 1G) triggers a bug trace (haven't had the chance to copy
>> that one down), and qemu starts but does nothing. Doing an xl destroy
>> then causes 1. to occur.
>
> This is with PCI passthrough or without?

With PCI passthrough. But with today's kernel+xen combination, it
seems to have disappeared.
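
For context, the 8G HVM in question is started from a config along
these lines (the BDF and disk path below are placeholders, not my real
ones), and Dom0 is limited to 1G (e.g. dom0_mem=1024M on the Xen
command line):

builder = 'hvm'
memory = 8192
pci = [ '03:00.0' ]
disk = [ 'phy:/dev/vg0/win7,hda,w' ]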

>>
>> 3. Under high (full) CPU and disk utilisation, the whole system will
>> sometimes reboot.
>
> That is .. not good. I think you need to buy that PCI card so
> we can get to the bottom of this.

OK, will provide more details once the card arrives.
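(The idea is to capture the console from another box over a null-modem
cable, e.g.

# screen -L /dev/ttyS0 115200

so the trace survives the reboot; the exact device name depends on the
serial port at the receiving end.)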

>>
>> 4. Somehow, with this new kernel/xen combination, my pfSense domain
>> does not receive DHCP requests sent from other domains, although
>> requests from other computers on the network outside of xen are
>> received. Non-broadcast traffic works fine.

Solved with today's kernel+xen combination as well.

>>
>> 5. Network performance with this kernel/xen combination is almost
>> half of what it was before.
>
> What is "before" ?

That would be the kernel+xen combinations from the original bug report
in this thread and before. Xen:
# hg log|head
changeset:   23632:33717472f37e
tag:         tip
user:        Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
date:        Tue Jun 28 18:15:44 2011 +0100
summary:     libxc: Squash xc_e820.h (and delete) into xenctrl.h

Kernel:
# git log|head
commit 3dc33d5af47bd3e216f71f47f1f4c7ea4578cca4
Merge: bd687c5 bccaeaf
Author: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
Date:   Thu Jun 23 17:10:21 2011 -0700

>>
>> 6. The WAN bridge to my pfSense appliance goes down (pings suddenly
>> stop) after a while. Rebooting the pfSense domain restores it for a
>
> Do you see any messages in Dom0 about the NIC going offline? If you run
> udevadm monitor --kernel --udev --property, do you see anything showing
> up when the bridge goes down?

Nope, nothing happened at all. I'm going to try passing the WAN
interface through to pfSense; that should establish whether xen is
directly involved or not.
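(The passthrough itself would just be the usual hide-the-NIC-from-Dom0
steps, roughly as below; the BDF is a placeholder and the pciback
driver/module naming varies a bit between kernels:

# modprobe xen-pciback
# echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
# echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
# echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

plus pci = [ '03:00.0' ] in the pfSense domain config.)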

>
>> while. Removing the domain's tap interface from the bridge and
>> re-adding it solves it for the rest of the domain session. This has
>> always been a problem; I'm not sure where the bug originates, since
>> different versions and combinations of Debian/kernel/xen/pfSense have
>> never solved it. There is no indication of the problem occurring other
>> than all WAN traffic stopping.
>>
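
For what it's worth, the remove/re-add workaround mentioned above is
nothing fancier than (bridge and tap names here are illustrative, not
my real ones):

# brctl delif xenbr1 tap5.0
# brctl addif xenbr1 tap5.0

after which WAN traffic flows again for the rest of that domain
session.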

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

