xen-users

Re: [Xen-users] drbd, xen and disk not accessible..

To: "Ross S. W. Walker" <rwalker@xxxxxxxxxxxxx>
Subject: Re: [Xen-users] drbd, xen and disk not accessible..
From: "Marco Strullato" <marco.strullato@xxxxxxxxx>
Date: Thu, 24 Apr 2008 17:09:48 +0200
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Hi, thanks for the answer.
As you suggested I changed the configuration and the domU starts.


Unluckily, now I want to migrate the domU to the other hypervisor, and that fails.

This is the xend configuration (/etc/xen/xend-config.sxp) of one hypervisor:
(xend-unix-server yes)
(xend-relocation-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
(network-script 'network-bridge')
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')

and this is the configuration of the other one:
(xend-unix-server yes)
(xend-relocation-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')
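
(A note on the empty values: as far as I understand, an empty
xend-relocation-hosts-allow means any host may connect to the
relocation port. If I wanted to restrict it, I believe it takes
space-separated regular expressions, along these lines -- the
hostnames here are just my two hyps:

  (xend-relocation-hosts-allow '^hyp10\\.infolan$ ^hyp11\\.infolan$')

For now I am leaving it wide open so it cannot be the cause.)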

The command I execute is:
[root@hyp10 ~]# xm migrate --live SLSPTEST hyp11
Error: /usr/lib64/xen/bin/xc_save 27 13 0 0 1 failed

As you can see, the migration fails.
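
Before looking at the logs, a basic sanity check (this is a sketch of
what I intend to run; standard CentOS 5 tools, nothing Xen-specific):

  # on the target hyp11: is xend's relocation server listening on 8002?
  netstat -tlnp | grep 8002

  # from the source hyp10: is the port reachable?
  telnet hyp11 8002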

This is the error log on the source hypervisor:
[2008-04-24 17:02:00 8572] ERROR (XendDomainInfo:1950)
XendDomainInfo.resume: xc.domain_resume failed on domain 13.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
line 1944, in resumeDomain
    self._createDevices()
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
line 1506, in _createDevices
    devid = self._createDevice(devclass, config)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
line 1478, in _createDevice
    return self.getDeviceController(deviceClass).createDevice(devConfig)
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py",
line 113, in createDevice
    raise VmError("Device %s is already connected." % dev_str)
VmError: Device xvda (51712, vbd) is already connected.

It seems the domU device xvda is already connected. This error seems
to be due to a drbd lock.

What do you think? What should I check?
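
In the meantime, this is roughly what I plan to check next (a sketch;
I am assuming the stock drbd and xen command-line tools):

  # is r0 still Primary on BOTH nodes? dual-primary is required
  # for live migration over drbd
  drbdadm role r0           # run on hyp10 and on hyp11
  cat /proc/drbd            # should show st:Primary/Primary

  # is a stale vbd backend for the domU left behind in xenstore?
  xenstore-ls /local/domain/0/backend/vbd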




Thanks


Marco

PS: I'm going to write to the drbd mailing list to verify their guide...



2008/4/24, Ross S. W. Walker <rwalker@xxxxxxxxxxxxx>:
> Marco Strullato wrote:
>  >
>  > Hi all,
>  > I set up two systems with CentOS 5 64-bit, Xen 3.2 (rebuilt
>  > from src.rpm), and drbd.
>  > First I installed a CentOS 4.5 32-bit domU using the device
>  > /dev/drbd0 (is it possible to use the drbd resource at this
>  > step?), then I dumped the configuration, changed the driver name
>  > and source dev, and added the kernel, ramdisk and root parameters.
>  >
>  > This is my configuration xml
>  >
>  > <domain type='xen' id='-1'>
>  >   <name>SLSPTEST</name>
>  >   <uuid>10147595b176607d804d0e1dc1d2103d</uuid>
>  >   <bootloader>/usr/bin/pygrub</bootloader>
>  >   <os>
>  >     <type>linux</type>
>  >   </os>
>  >   <memory>2097152</memory>
>  >   <vcpu>1</vcpu>
>  >   <on_poweroff>destroy</on_poweroff>
>  >   <on_reboot>restart</on_reboot>
>  >   <on_crash>restart</on_crash>
>  >   <devices>
>  >     <interface type='bridge'>
>  >       <source bridge='xenbr0'/>
>  >       <mac address='00:16:3e:44:d3:9b'/>
>  >     </interface>
>  >     <disk type='block' device='disk'>
>  >       <driver name='drbd'/>
>  >       <source dev='r0'/>
>  >       <target dev='xvda'/>
>  >     </disk>
>  >   </devices>
>  >   <kernel>/boot/vmlinuz-2.6.9-67.0.7.ELxenU</kernel>
>  >   <ramdisk>/boot/initrd-2.6.9-67.0.7.ELxenU.img</ramdisk>
>  >   <root>ro root=/dev/VolGroup00/LogVol00 console=xvc0 selinux=0</root>
>  > </domain>
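
(Note: the change that made the domU start, following Ross's
suggestion below, was switching this disk stanza to the plain phy
driver. Roughly like this -- I am sketching the stanza from the
pattern above, so the exact form may differ:

  <disk type='block' device='disk'>
    <driver name='phy'/>
    <source dev='/dev/drbd0'/>
    <target dev='xvda'/>
  </disk>
)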
>  >
>  > The drbd configuration is:
>  >
>  > global {
>  >         usage-count yes;
>  >  }
>  > common {
>  >         protocol C;
>  >         disk {
>  >                 on-io-error detach;
>  >         }
>  >         syncer {
>  >                 verify-alg md5;
>  >                 rate 50M;
>  >         }
>  >
>  > }
>  > resource r0 {
>  >         startup {
>  >                 become-primary-on both;
>  >         }
>  >         net {
>  >                 allow-two-primaries;
>  >         }
>  >         on hyp11.infolan {
>  >                 device     /dev/drbd0;
>  >                 disk       /dev/HYP11VM/VMNAME;
>  >                 address    10.100.0.2:7788;
>  >                 meta-disk  internal;
>  >         }
>  >         on hyp10.infolan {
>  >                 device    /dev/drbd0;
>  >                 disk      /dev/HYP10VM/VMNAME;
>  >                 address   10.100.0.1:7788;
>  >                 meta-disk internal;
>  >         }
>  > }
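
(For completeness: if I read the drbd 8.2 guide correctly, getting
both nodes to Primary with this config is roughly -- a sketch, not a
verified sequence:

  drbdadm adjust r0         # on both nodes, after editing drbd.conf
  drbdadm primary r0        # on both nodes, to reach Primary/Primary
)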
>  >
>  >
>  > Everything seems to be ready: I loaded the configuration file
>  > successfully with virsh define SLSPTEST, and the drbd resource is
>  > set up according to the drbd guide (dual-primary mode enabled).
>  >
>  > [root@hyp10 scripts]# cat /proc/drbd
>  > version: 8.2.5 (api:88/proto:86-88)
>  > GIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by
>  > buildsvn@c5-x8664-build, 2008-03-09 10:16:12
>  >  0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
>  >     ns:12539333 nr:0 dw:1005385 dr:11578691 al:558 bm:704
>  > lo:0 pe:0 ua:0 ap:0
>  >         resync: used:0/31 hits:720168 misses:704 starving:0
>  > dirty:0 changed:704
>  >         act_log: used:0/127 hits:272661 misses:558 starving:0 dirty:0
>  > changed:558
>  >
>  > Unluckily, when I execute xm start SLSPTEST I get:
>  >
>  > Error: Disk isn't accessible
>  >
>  > The xend log is:
>  >
>  > [2008-04-24 16:36:56 8572] ERROR (XendBootloader:43) Disk
>  > isn't accessible
>  > [2008-04-24 16:36:56 8572] ERROR (XendDomainInfo:440) VM start failed
>  > Traceback (most recent call last):
>  >   File
>  > "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
>  > line 420, in start
>  >     XendTask.log_progress(31, 60, self._initDomain)
>  >   File "/usr/lib64/python2.4/site-packages/xen/xend/XendTask.py", line
>  > 209, in log_progress
>  >     retval = func(*args, **kwds)
>  >   File
>  > "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
>  > line 1694, in _initDomain
>  >     self._configureBootloader()
>  >   File
>  > "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
>  > line 2050, in _configureBootloader
>  >     bootloader_args, kernel, ramdisk, args)
>  >   File
>  > "/usr/lib64/python2.4/site-packages/xen/xend/XendBootloader.py",
>  > line 44, in bootloader
>  >     raise VmError(msg)
>  > VmError: Disk isn't accessible
>  > [2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1883)
>  > XendDomainInfo.destroy: domid=12
>  > [2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1900)
>  > XendDomainInfo.destroyDomain(12)
>  > [2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1524) No device model
>  > [2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1526)
>  > Releasing devices
>  >
>  >
>  > How can I solve this problem? I want to use the suggested
>  > configuration with the drbd driver, but it doesn't work.
>
>
> Marco,
>
>  When you do an 'rpm -qa | grep xen', does it show both
>  xen-3.2.0 and xen-libs-3.2.0 as installed? They should be,
>  given the dependencies. If so, then I would ask on the
>  drbd list why their drbd disk type doesn't work as shown
>  on their wiki. Maybe it was excluded from Xen, and when
>  they wrote the wiki page they were hoping it would be
>  adopted.
>
>  It doesn't really matter anyway, because listing the
>  device as phy:drbd0 would give you the exact same
>  result, which is to attach xenblk on the backend.
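
(If I understand this correctly, in a plain xm config file that would
be a line like the following -- my guess at the exact syntax:

  disk = [ 'phy:/dev/drbd0,xvda,w' ]
)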
>
>
>  -Ross
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users