WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Re: [Xen-devel] tap:qcow causes dom-U to hang in 3.0.3

To: Roland Paterson-Jones <roland@xxxxxxxxxxxx>
Subject: Re: [Xen-users] Re: [Xen-devel] tap:qcow causes dom-U to hang in 3.0.3
From: Julian Chesterfield <jac90@xxxxxxxxx>
Date: Fri, 10 Nov 2006 10:15:04 +0000
Cc: Xen Devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 10 Nov 2006 02:23:36 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4552DF8D.6060600@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4551EEC3.3010308@xxxxxxxxxxxx> <20061108151133.GE3507@xxxxxxxxxxxxxxxxxxxxxx> <4552DF8D.6060600@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Roland,

Can you also verify whether there's an active tapdisk process running in Dom0 for each tap:{aio,qcow} vbd? We are aware of a bug in the qcow implementation and hope to submit a fix for it very soon. It's likely that you are seeing the same issue.
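One way to make that check (a rough sketch, not from the original mail — the xenstore path matches the dump below, and both commands degrade to counts of 0 on a host without Xen):

```shell
#!/bin/sh
# Compare the number of tap: vbds recorded in xenstore against the
# number of tapdisk processes actually running in Dom0. Each
# tap:{aio,qcow} vbd should have a matching tapdisk process.
ndisks=$(xenstore-ls /local/domain/0/backend/tap 2>/dev/null | grep -c 'frontend-id')
nprocs=$(ps -C tapdisk --no-headers 2>/dev/null | wc -l)
echo "tap vbds: $ndisks, tapdisk processes: $nprocs"
```

A mismatch (fewer tapdisk processes than tap vbds) would point at the same missing-tapdisk symptom.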

- Julian

On 9 Nov 2006, at 07:58, Roland Paterson-Jones wrote:

Ewan Mellor wrote:

On Wed, Nov 08, 2006 at 04:50:43PM +0200, Roland Paterson-Jones wrote:


[root@dom0]# xm list
Error: Device 2050 not connected
Usage: xm list [options] [Domain, ...]

Curiously, it seems that device 2050 was a secondary block device exposed using phy:.

So, any advice on how to debug this?


Could we see your /var/log/xen/*  and the output of xenstore-ls?

Logs attached. xenstore-ls below.

I had two domains running: one with its root fs on tap:aio:/mnt/instance_image_store_1/3811, and a second dom-U on tap:qcow:/mnt/instance_image_store_0/3811.qcow, which was set up as a COW overlay of the image file /mnt/instance_image_store_0/3811 (not the same file as the tap:aio: image for the first dom-U; the directories differ in their last digit). Both dom-Us also had/have a swap partition (vbd phy:) on /dev/VolGroupDomU/instance_swap_store_1/0 and a second phy: vbd on /dev/VolGroupDomU/instance_ephemeral_store_1/0.

The xenstore-ls output also shows qcow:/mnt/instance_image_store_2/3811.qcow, which I don't think should be there at all (maybe left over from a previous failed attempt).

Could this be related to tap:qcow: mounts not umounting? I previously set up a test tap:qcow: device in dom0 using 'block-attach 0 ...', then mounted it, which seemed to work OK. However 'umount ...' then hung (seemingly in the kernel - I had to reboot to get rid of it).
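The dom0 test described above might be reproduced along these lines (a sketch only: the mount point is hypothetical, the xm syntax is Xen 3.0.x, and a guard makes it a no-op off a Xen Dom0):

```shell
#!/bin/sh
# Reproduce: attach a tap:qcow device to Dom0 itself, mount it, then
# unmount — the umount step is what was reported to hang in the kernel.
command -v xm >/dev/null 2>&1 || { echo "xm not found; run in a Xen Dom0"; exit 0; }
xm block-attach 0 tap:qcow:/mnt/instance_image_store_0/3811.qcow /dev/xvda1 w
mkdir -p /mnt/qcow-test
mount /dev/xvda1 /mnt/qcow-test   # mounting reportedly works
umount /mnt/qcow-test             # reportedly hangs; machine needed a reboot
xm block-detach 0 /dev/xvda1
```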

Thx Roland

-----------------------------------------
[root@dom0 ~]# xenstore-ls
tool = ""
xenstored = ""
vm = ""
00000000-0000-0000-0000-000000000000 = ""
 shadow_memory = "0"
 uuid = "00000000-0000-0000-0000-000000000000"
 on_reboot = "restart"
 on_poweroff = "destroy"
 name = "Domain-0"
 xend = ""
  restart_count = "0"
 vcpus = "2"
 vcpu_avail = "3"
 memory = "1023"
 on_crash = "restart"
 maxmem = "1023"
00000000-0000-0000-0000-0000ec291715 = ""
image = "(linux (kernel /boot/vmlinuz-2.6.16.29-xen) (ramdisk /boot/initrd-2.6.16.29-xen.img) (..."
  ostype = "linux"
  kernel = "/boot/vmlinuz-2.6.16.29-xen"
  cmdline = " root=/dev/sda1 ro 4"
  ramdisk = "/boot/initrd-2.6.16.29-xen.img"
 shadow_memory = "0"
 uuid = "00000000-0000-0000-0000-0000ec291715"
 on_reboot = "restart"
 start_time = "1162994682.56"
 on_poweroff = "destroy"
 name = "dom_91715"
 xend = ""
  restart_count = "0"
 vcpus = "1"
 vcpu_avail = "1"
 memory = "1700"
 on_crash = "restart"
 maxmem = "1700"
local = ""
domain = ""
 0 = ""
  cpu = ""
   0 = ""
    availability = "online"
   1 = ""
    availability = "online"
  memory = ""
   target = "1047552"
  name = "Domain-0"
  console = ""
   limit = "1048576"
  vm = "/vm/00000000-0000-0000-0000-000000000000"
  domid = "0"
  backend = ""
   tap = ""
    1 = ""
     2049 = ""
      domain = "dom_91715"
      frontend = "/local/domain/1/device/vbd/2049"
      dev = "sda1"
      state = "4"
      params = "aio:/mnt/instance_image_store_1/3811"
      mode = "w"
      online = "1"
      frontend-id = "1"
      type = "tap"
      sectors = "3123200"
      sector-size = "512"
      info = "0"
      hotplug-status = "connected"
    2 = ""
     2049 = ""
      domain = "dom_91721"
      frontend = "/local/domain/2/device/vbd/2049"
      dev = "sda1"
      state = "4"
      params = "qcow:/mnt/instance_image_store_0/3811.qcow"
      mode = "w"
      online = "1"
      frontend-id = "2"
      type = "tap"
      sectors = "3123200"
      sector-size = "512"
      info = "0"
      hotplug-status = "connected"
    3 = ""
     2049 = ""
      domain = "dom_91723"
      frontend = "/local/domain/3/device/vbd/2049"
      dev = "sda1"
      state = "1"
      params = "qcow:/mnt/instance_image_store_2/3811.qcow"
      mode = "w"
      online = "1"
      frontend-id = "3"
      type = "tap"
      sectors = "3123200"
      sector-size = "512"
      info = "0"
   vbd = ""
    1 = ""
     2050 = ""
      domain = "dom_91715"
      frontend = "/local/domain/1/device/vbd/2050"
      dev = "sda2"
      state = "4"
      params = "/dev/VolGroupDomU/instance_ephemeral_store_1"
      mode = "w"
      online = "1"
      frontend-id = "1"
      type = "phy"
      physical-device = "fd:6"
      hotplug-status = "connected"
      sectors = "312737792"
      info = "0"
      sector-size = "512"
     2051 = ""
      domain = "dom_91715"
      frontend = "/local/domain/1/device/vbd/2051"
      dev = "sda3"
      state = "4"
      params = "/dev/VolGroupDomU/instance_swap_store_1"
      mode = "w"
      online = "1"
      frontend-id = "1"
      type = "phy"
      physical-device = "fd:7"
      hotplug-status = "connected"
      sectors = "1835008"
      info = "0"
      sector-size = "512"
    2 = ""
     2050 = ""
      domain = "dom_91721"
      frontend = "/local/domain/2/device/vbd/2050"
      dev = "sda2"
      state = "4"
      params = "/dev/VolGroupDomU/instance_ephemeral_store_0"
      mode = "w"
      online = "1"
      frontend-id = "2"
      type = "phy"
      physical-device = "fd:3"
      hotplug-status = "connected"
      sectors = "312737792"
      info = "0"
      sector-size = "512"
     2051 = ""
      domain = "dom_91721"
      frontend = "/local/domain/2/device/vbd/2051"
      dev = "sda3"
      state = "4"
      params = "/dev/VolGroupDomU/instance_swap_store_0"
      mode = "w"
      online = "1"
      frontend-id = "2"
      type = "phy"
      physical-device = "fd:4"
      hotplug-status = "connected"
      sectors = "1835008"
      info = "0"
      sector-size = "512"
    3 = ""
     2050 = ""
      domain = "dom_91723"
      frontend = "/local/domain/3/device/vbd/2050"
      dev = "sda2"
      state = "1"
      params = "/dev/VolGroupDomU/instance_ephemeral_store_2"
      mode = "w"
      online = "1"
      frontend-id = "3"
      type = "phy"
     2051 = ""
      domain = "dom_91723"
      frontend = "/local/domain/3/device/vbd/2051"
      dev = "sda3"
      state = "1"
      params = "/dev/VolGroupDomU/instance_swap_store_2"
      mode = "w"
      online = "1"
      frontend-id = "3"
      type = "phy"
   vif = ""
    1 = ""
     0 = ""
      domain = "dom_91715"
      handle = "0"
      script = "/etc/xen/scripts/vif-aes"
      state = "4"
      frontend = "/local/domain/1/device/vif/0"
      mac = "12:31:34:00:03:8F"
      online = "1"
      frontend-id = "1"
      feature-sg = "1"
      feature-gso-tcpv4 = "1"
      feature-rx-copy = "1"
      hotplug-status = "connected"
    3 = ""
     0 = ""
      domain = "dom_91723"
      handle = "0"
      script = "/etc/xen/scripts/vif-aes"
      state = "5"
      frontend = "/local/domain/3/device/vif/0"
      mac = "12:31:34:00:03:8D"
      online = "0"
      frontend-id = "3"
  error = ""
   backend = ""
    tap = ""
     1 = ""
      2049 = ""
       error = "2 getting info"
     2 = ""
      2049 = ""
       error = "2 getting info"
 1 = ""
  device = ""
   vbd = ""
    2049 = ""
     backend-id = "0"
     virtual-device = "2049"
     device-type = "disk"
     state = "4"
     backend = "/local/domain/0/backend/tap/1/2049"
     ring-ref = "8"
     event-channel = "6"
    2050 = ""
     backend-id = "0"
     virtual-device = "2050"
     device-type = "disk"
     state = "4"
     backend = "/local/domain/0/backend/vbd/1/2050"
     ring-ref = "9"
     event-channel = "7"
    2051 = ""
     backend-id = "0"
     virtual-device = "2051"
     device-type = "disk"
     state = "4"
     backend = "/local/domain/0/backend/vbd/1/2051"
     ring-ref = "10"
     event-channel = "8"
   vif = ""
    0 = ""
     backend-id = "0"
     mac = "12:31:34:00:03:8F"
     handle = "0"
     state = "4"
     backend = "/local/domain/0/backend/vif/1/0"
     tx-ring-ref = "523"
     rx-ring-ref = "524"
     event-channel = "9"
     request-rx-copy = "0"
     feature-rx-notify = "1"
     feature-sg = "1"
     feature-gso-tcpv4 = "1"
  device-misc = ""
   vif = ""
    nextDeviceID = "1"
  console = ""
   ring-ref = "995310"
   port = "2"
   limit = "1048576"
   tty = "/dev/pts/2"
  name = "dom_91715"
  vm = "/vm/00000000-0000-0000-0000-0000ec291715"
  domid = "1"
  cpu = ""
   0 = ""
    availability = "online"
  memory = ""
   target = "1740800"
  store = ""
   ring-ref = "995311"
   port = "1"


Thanks,

Ewan.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users




<xen-logs.tgz>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
