[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Test report of xen 4.2.0-rc4



On 10/09/2012 16:27, Ian Campbell wrote:
On Mon, 2012-09-10 at 15:26 +0100, Fabio Fantoni wrote:
gdb --args xl -vvv cd-eject W7 hdb
GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/xl...done.
(gdb) run
Starting program: /usr/sbin/xl -vvv cd-eject W7 hdb
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
libxl: debug: libxl.c:2143:libxl_cdrom_insert: ao 0x623980: create:
how=(nil) callback=(nil) poller=0x6239e0
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk
vdev=hdb spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdb,
backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:210:disk_try_backend: Disk vdev=hdb,
backend tap unsuitable due to format empty
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk
vdev=hdb, using backend qdisk
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x623980:
complete, rc=0
libxl: debug: libxl.c:2236:libxl_cdrom_insert: ao 0x623980: inprogress:
poller=0x6239e0, flags=ic
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x623980: destroy
xc: debug: hypercall buffer: total allocations:4 total releases:4
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:1 misses:2 toobig:1
[Inferior 1 (process 5581) exited normally]
(gdb) bt
No stack.
That's because it seems to be working for you now... There is no crash
here.

Ian.





After issuing the command:
xl -vvv cd-eject PRECISEHVM hdb
libxl: debug: libxl.c:2143:libxl_cdrom_insert: ao 0x1812980: create: how=(nil) callback=(nil) poller=0x18129e0
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdb, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:210:disk_try_backend: Disk vdev=hdb, backend tap unsuitable due to format empty
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdb, using backend qdisk
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x1812980: complete, rc=0
libxl: debug: libxl.c:2236:libxl_cdrom_insert: ao 0x1812980: inprogress: poller=0x18129e0, flags=ic
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x1812980: destroy
xc: debug: hypercall buffer: total allocations:4 total releases:4
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:1 misses:2 toobig:1

The cdrom remains in the domU and keeps working.

I also tried with insert:
root@vfarm:~# xl -vvv cd-insert PRECISEHVM hdb raw:/mnt/vm/iso/Clonezilla.iso
libxl: debug: libxl.c:2143:libxl_cdrom_insert: ao 0xedd980: create: how=(nil) callback=(nil) poller=0xedd9e0
Segmentation fault

But it gives a segmentation fault, and nothing changes in the domU (the old cdrom remains and keeps working).
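To capture a backtrace for this crash, the same gdb approach used earlier in this thread for cd-eject should work with the cd-insert arguments (domain name and ISO path are just the ones from the test above):

```shell
# Run xl under gdb so the segmentation fault is caught in the debugger,
# same technique as the cd-eject run earlier in this thread.
gdb --args xl -vvv cd-insert PRECISEHVM hdb raw:/mnt/vm/iso/Clonezilla.iso

# Then, at the (gdb) prompt:
#   run        -> reproduce the segmentation fault
#   bt         -> print the backtrace at the crash point
#   bt full    -> same, plus local variables for each frame
```

With the crash reproduced under gdb, `bt full` output posted to the list would show where inside libxl_cdrom_insert the fault occurs.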

Can you tell me details about your dom0 configuration, please?


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
