This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] DRBD and XEN

To: drbd-user@xxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] DRBD and XEN
From: Lee LIsts <lists@xxxxxxxxxxxxxxx>
Date: Mon, 12 Dec 2005 10:32:12 +0100
Delivery-date: Mon, 12 Dec 2005 09:33:55 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.6-7.1.20060mdk (X11/20050322)

I'm trying to use DRBD with Xen 3.0, and I am using DRBD 0.7.14.
The primary node is an openSUSE 10 Linux system (not virtualized).
The two volumes are LVM logical volumes with internal metadata; their size is 54 GB.

The secondary node is an opensuse 10 also but running over xen domain-0.

When I set up DRBD, synchronisation starts, but the whole system reboots a few seconds later.

I don't know whether the problem is Xen- or DRBD-related. Unfortunately, I have no traces on the Xen node.

I set "echo 60 > /proc/sys/kernel/panic", but it doesn't seem to be a kernel panic; it looks like a "xen" panic instead.
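For what it's worth, when the hypervisor itself panics the trace usually goes to the Xen console rather than the dom0 kernel log, so /proc/sys/kernel/panic won't help capture it. One way to get a trace (a suggestion on my part, not something tested on this setup; the boot parameters are examples) is to enable Xen's serial console and log it from a second machine:

```shell
# In the bootloader entry for Xen (example GRUB stanza), direct the
# hypervisor console to the first serial port:
#   kernel /boot/xen.gz com1=115200,8n1 console=com1
# and capture the output on a second machine over a null-modem cable.

# Recent hypervisor messages can also be inspected from dom0:
xm dmesg
```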

Please help.

 syncer {
   # Limit the bandwidth used by the resynchronisation process.
   # default unit is KB/sec; optional suffixes K,M,G are allowed
   rate 4M;

   # All devices in one group are resynchronized in parallel.
   # Resynchronisation of groups is serialized in ascending order.
   # Put DRBD resources which are on different physical disks in one group.
   # Put DRBD resources on one physical disk in different groups.
   group 1;

   # Configures the size of the active set. Each extent is 4M,
   # 257 Extents ~> 1GB active set size. In case your syncer
   # runs @ 10MB/sec, all resync after a primary's crash will last
   # 1GB / ( 10MB/sec ) ~ 102 seconds ~ One Minute and 42 Seconds.
   # BTW, the hash algorithm works best if the number of al-extents
   # is prime. (To test the worst case performance use a power of 2)
   al-extents 257;
 }

 on teddy {
   device     /dev/drbd0;
   disk       /dev/sysb/wtmp2;
   meta-disk  internal;

   # meta-disk is either 'internal' or '/dev/ice/name [idx]'
   # You can use a single block device to store meta-data
   # of multiple DRBD's.
   # E.g. use meta-disk /dev/hde6[0]; and meta-disk /dev/hde6[1];
   # for two different resources. In this case the meta-disk
   # would need to be at least 256 MB in size.
   # 'internal' means, that the last 128 MB of the lower device
   # are used to store the meta-data.
   # You must not give an index with 'internal'.
 }

 on bear {
   device    /dev/drbd0;
   disk      /dev/xvg/wtmp2;
   meta-disk internal;
 }
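One thing that might help narrow this down (my suggestion, not something from the original post): lower the syncer rate well below 4M to see whether the crash is triggered by resynchronisation I/O load. A sketch, assuming the standard drbdadm tooling from DRBD 0.7:

```shell
# Drop the resync bandwidth in /etc/drbd.conf, e.g.:
#   syncer { rate 1M; ... }
# then tell the running DRBD to pick up the changed configuration:
drbdadm adjust all

# Watch the resync progress and connection state:
cat /proc/drbd
```

If the box survives at a low rate but reboots at 4M, that points toward a load- or driver-related problem under the Xen kernel rather than a DRBD configuration error.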

Xen-users mailing list
