
Re: [Xen-devel] [Spice-devel] Vdagent not working on xen linux hvm DomUs



On 11/12/2013 16:38, Wei Liu wrote:
On Wed, Dec 11, 2013 at 02:41:57PM +0100, Fabio Fantoni wrote:
[...]
Thanks for your reply.
Before starting the bisection I tried qemu 1.3.1 from
qemu-upstream-4.3-testing.git.
There is no more crash with virtio net, but it needs pci=nomsi to
work; the same holds for vdagent, so the MSI problem seems to affect
all virtio devices.
So these look like two different problems: in your build virtio
devices work without setting pci=nomsi, and I need to find out what
differs and what the cause is.
Was your test with virtio net working without pci=nomsi done on OVMF only,
or did you also try with SeaBIOS?
I only tried OVMF recently and since you were replying to this thread I
presumed you used OVMF as well.

I have not tried OVMF for this case yet; I will do so.
Based on your post it seems the MSI problem with virtio does not occur on OVMF.
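For that test the plan is just to switch the firmware in the domU config; a minimal sketch of the relevant fragment, assuming an HVM guest and a xen-unstable build with OVMF enabled (all other settings omitted):

    # xl domU config fragment: firmware selection only (sketch)
    builder = "hvm"
    bios    = "ovmf"      # "seabios" for the SeaBIOS runs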


I tested with Ubuntu Saucy and Ubuntu Precise, both with the latest
xen-unstable (based on commit
2f718161bc292bfbdf1aeefd3932b73c0965373d), the latest commit of
If you're using OVMF I suggest you update your branch to the latest
master.

qemu-upstream-4.3-testing.git, and the latest stable seabios from the debian
package 1.7.3-2.
So you're not using the seabios tree from xenbits?

The debian package uses the upstream 1.7.3.2 version, so I don't think it makes a difference.


In both cases pci=nomsi was needed to get virtio net working.
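For reference, pci=nomsi is only a guest-side workaround on the kernel command line; a minimal sketch of how it can be set on a Debian/Ubuntu domU via grub (the file path and the extra "quiet" option are assumptions):

    # /etc/default/grub inside the domU (assumed Debian/Ubuntu layout)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"

    # regenerate the grub config and reboot the guest
    update-grub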

I looked at the PDF of the virtio spec from this post:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg01654.html
However, I was not able to understand the possible cause of the
problem encountered with MSI on virtio devices under Xen.

That's not very relevant. The bug is in the implementation.

About the qemu 1.6.1 crash with virtio net: it is confirmed to be a
regression. It is not critical, because virtio is not implemented in libxl
yet, but I'll do further research.


I tested with qemu 1.4 and 1.5 and they do not have the regression that
shows the xen mapcache error with virtio net.
So you've found out the bug was introduced in 1.6, good.

Looking at the history, there seem to be no commits touching the xen
mapcache between 1.5 and 1.6, but there are many other xen and virtio
changes, and from a quick look I could not identify suspect commits to test.
Can someone suggest the most likely commits to test, please?

How many commits are there between 1.5 and 1.6? If it is not too many I think
doing a bisection would be a good idea.

Wei.
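
To count the commits and drive the bisection, something like the following should work; a minimal sketch, where the v1.5.0/v1.6.0 tags and the good/bad marking after each test are assumptions about how the run would be driven:

    # in the qemu tree: how many commits are in between?
    git rev-list --count v1.5.0..v1.6.0

    # bisect between the known-bad and known-good releases
    git bisect start v1.6.0 v1.5.0
    # rebuild, boot the domU with virtio net, then mark the result:
    git bisect good    # or: git bisect bad
    # repeat until the first bad commit is found, then:
    git bisect reset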

I did other tests, but they took a very long time because of some other bugs that were only fixed later.

The results so far are:

The regression is present at commit c9fea5d701f8fd33f0843728ec264d95cee3ed37 of Mon, 22 Jul 2013 15:14:18 ("Merge remote-tracking branch 'bonzini/iommu-for-anthony'") plus a cherry-pick of commit 755ec4ca0f92188458ad7ca549a75161cbdcf6ff ("pc: Initializing ram_memory under Xen").

Qemu crashes on HVM domU start because of another problem, for which I have not found the fix yet: commit dcb117bfda5af6f6ceb7231778d36d8bce4aee93 of Thu, 4 Jul 2013 15:42:46 +0000 ("ne2000: pass device to ne2000_setup_io, use it as owner") plus a cherry-pick of commit 755ec4ca0f92188458ad7ca549a75161cbdcf6ff ("pc: Initializing ram_memory under Xen"); the xl dmesg output and a sketch of the build procedure for each test point are below.

xl dmesg:
(d1) Multiprocessor initialisation:
(d1)  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1)  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1) Testing HVM environment:
(d1) - REP INSB across page boundaries ... Bad value at 0x00500000: saw 98765400 expected 987654ff
(d1) Bad value at 0x00500ffc: saw 00000000 expected ff000000
(d1) Bad value at 0x005ffffc: saw 00000000 expected ff000000
(d1) Bad value at 0x00601000: saw 00000000 expected 000000ff
(d1) failed
(d1)  - GS base MSRs and SWAPGS ... passed
(d1) Passed 1 of 2 tests
(d1) FAILED 1 of 2 tests
(d1) *** HVMLoader bug at tests.c:242
(d1) *** HVMLoader crashed.
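
For each test point above the build procedure was roughly the following; a minimal sketch, where the branch name and the configure/make invocation are only examples:

    # in the qemu tree: check out the commit under test on a throwaway branch
    git checkout -b test-point c9fea5d701f8fd33f0843728ec264d95cee3ed37
    # cherry-pick "pc: Initializing ram_memory under Xen" on top
    git cherry-pick 755ec4ca0f92188458ad7ca549a75161cbdcf6ff
    # rebuild and retry the domU with virtio net
    ./configure --enable-xen && make -j4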




Thanks for any reply.




 

