
Re: [Xen-devel] Test report of xen 4.2.0-rc4



On 07/09/2012 17:15, Ian Campbell wrote:
> Thanks for testing.
>
> On Fri, 2012-09-07 at 13:44 +0100, Fabio Fantoni wrote:
>> - IMPORTANT - On restore the network is up but not working; tried with
>> W7 Pro 64-bit with the latest GPLPV build (357) on qemu-xen-traditional
> Our automated tests aren't seeing this; might it be a GPLPV issue? Can
> you reproduce it with e.g. PV Linux (or PVHVM Linux, for that matter)?
>
>> - CD-ROM hotswap is not working; tried with W7 Pro 64-bit with the
>> latest GPLPV build (357) on qemu-xen-traditional
>> xl -vvv cd-eject W7 hdb
>> libxl: debug: libxl.c:2143:libxl_cdrom_insert: ao 0x1461980: create:
>> how=(nil) callback=(nil) poller=0x14619e0
>> Segmentation fault
> This doesn't happen for me. Please can you run this one under gdb and,
> when it fails, type "bt" to get a backtrace.
>
>> - VNC works, but only with the parameters supplied as top-level
>> options; supplied as values of the vfb key, it does not work
> Whether or not that works is very much a function of exactly what the
> rest of your guest config looks like (it depends on PV vs. HVM, for one
> thing).
>
> You've reported enough bugs now that I shouldn't need to remind you
> about providing guest config files or a link to
> http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen again.
>
> Ian.






Thanks for the reply; here is the xl configuration file:

-------------------------
W7.cfg
------
name='W7'
builder="hvm"
memory=2048
vcpus=2
vif=['bridge=xenbr0']
#vfb=['vnc=1,vncunused=1,vnclisten="0.0.0.0",keymap=it']
disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/mnt/vm/iso/XPSP3PRO.iso,raw,hdb,ro,cdrom']
#disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
boot='cd'
device_model_version="qemu-xen-traditional"
vnc=1
vncunused=1
vnclisten="0.0.0.0"
keymap="it"
#on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
stdvga=1
-------------------------
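As a side note (my understanding, not stated in the thread): in xl, vfb= configures a PV framebuffer and so only applies to PV (or PV-driver-equipped) guests, which would explain Ian's remark. For an HVM guest like this one, the top-level options are what the built-in VNC server honours, i.e. roughly the form already used above:

```
# HVM guest: top-level VNC options (vfb= is for PV framebuffer devices)
vnc=1
vncunused=1
vnclisten="0.0.0.0"
keymap="it"
```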

I don't know exactly how to use gdbsx; I tried:
gdbsx -a 1 64 9999
Listening on port 9999
but it shows nothing, even after the cd-eject.

I couldn't find a howto for gdbsx; can you tell me how to use it, please?
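Since the segfault is in the xl client itself rather than in the guest, plain gdb in dom0 should be enough (gdbsx is for attaching to a domU). A minimal session, assuming gdb and xl debug symbols are installed, might look like:

```shell
# Run the failing command under gdb; domain/device names taken from this report
gdb --args xl -vvv cd-eject W7 hdb
# (gdb) run
# ... wait for the segfault ...
# (gdb) bt        <- prints the backtrace to paste into the bug report

# Or non-interactively, in one shot:
gdb -batch -ex run -ex bt --args xl -vvv cd-eject W7 hdb
```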

xl -vvv cd-eject W7 hdb
libxl: debug: libxl.c:2143:libxl_cdrom_insert: ao 0x7b4980: create: how=(nil) callback=(nil) poller=0x7b49e0
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdb, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:210:disk_try_backend: Disk vdev=hdb, backend tap unsuitable due to format empty
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdb, using backend qdisk
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x7b4980: complete, rc=0
libxl: debug: libxl.c:2236:libxl_cdrom_insert: ao 0x7b4980: inprogress: poller=0x7b49e0, flags=ic
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x7b4980: destroy
xc: debug: hypercall buffer: total allocations:4 total releases:4
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:1 misses:2 toobig:1

This time the domU also seems to freeze.

-------------------------
qemu-dm-W7.log
----------
domid: 1
Using file /dev/xen/blktap-2/tapdev0 in read-write mode
Using file /dev/xen/blktap-2/tapdev1 in read-only mode
Watching /local/domain/0/device-model/1/logdirty/cmd
Watching /local/domain/0/device-model/1/command
Watching /local/domain/1/cpu
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = dfc1d327-23c4-4cf1-9683-4b92a7ca2eb1
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/1/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/dfc1d327-23c4-4cf1-9683-4b92a7ca2eb1/vncpasswd.
medium change watch on `hdb' (index: 1): /dev/xen/blktap-2/tapdev1
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Log-dirty: no command yet.
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
vcpu-set: watch node error.
xs_read(/local/domain/1/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/1/log-throttling'
medium change watch on `/local/domain/1/log-throttling' - unknown device, ignored
vga s->lfb_addr = f1000000 s->lfb_end = f1800000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
mapping vram to f1000000 - f1800000
Unknown PV product 2 loaded in guest
PV driver build 1
region type 1 at [c100,c200).
region type 0 at [f1800000,f1800100).
squash iomem [f1800000, f1800100).
vga s->lfb_addr = f1000000 s->lfb_end = f1800000
vga s->lfb_addr = f1000000 s->lfb_end = f1800000
xc: error: linux_gnttab_set_max_grants: ioctl SET_MAX_GRANTS failed (22 = Invalid argument): Internal error
xen be: qdisk-832: xc_gnttab_set_max_grants failed: Invalid argument
xen be: qdisk-832: reading backend state failed
xen be: qdisk-832: reading backend state failed
-------------------------

-------------------------
xl-W7.log
----------
Waiting for domain W7 (domid 1) to die [pid 4029]
-------------------------

Can you please post the details and versions of the system (dom0, domU, GPLPV) on which you have cd-eject and network-on-restore working?

Attachment: smime.p7s
Description: S/MIME cryptographic signature

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

