xen-users

Re: [Xen-users] xm migrate headache

To: "Rainer Sokoll" <rainer@xxxxxxxxxx>
Subject: Re: [Xen-users] xm migrate headache
From: "Rustedt, Florian" <Florian.Rustedt@xxxxxxxxxxx>
Date: Tue, 3 Mar 2009 18:35:20 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 03 Mar 2009 09:36:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090303143537.GG3480@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <49ACDB1B020000E500005D72@xxxxxxxxxxxxxxxxxxxxx> <49ACDBD60200009900035774@xxxxxxxxxxxxxxxxxxxxx> <20090303143537.GG3480@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmcDUYj5HUDOnzyT2eeKA1xg0vsrwAFqZVQ
Thread-topic: [Xen-users] xm migrate headache
Hello Rainer,

I've got pretty much the same issue (DRBD 8.3.0):
I've got a huge DRBD resource in primary/primary role, with LVM on top and XFS inside the LVs.
The Xen VM boots from an LV, no problem.
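
Roughly, the stack is assembled like this (a sketch; device and volume names are made up, not my real ones):

----8<----
# DRBD resource already Primary on both nodes (allow-two-primaries)
pvcreate /dev/drbd0
vgcreate vg_xen /dev/drbd0
lvcreate -L 10G -n vm01 vg_xen
mkfs.xfs /dev/vg_xen/vm01   # XFS inside the LV; the VM boots from this
----8<----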

If I now try to migrate, I get the same error as you, but in my case DRBD
logs something like "split-brain detected -> disconnected" in dmesg. Perhaps
you've got the same?

drbd0: Split-Brain detected, dropping connection!
drbd0: self 20D88E3F20F7E8C9:11227E17F1A34EBD:F14436F7DEC14D2E:D51BA840A9E19E2D
drbd0: peer 5664952031DE8E53:11227E17F1A34EBD:F14436F7DEC14D2E:D51BA840A9E19E2D
drbd0: helper command: /sbin/drbdadm split-brain minor-0
drbd0: helper command: /sbin/drbdadm split-brain minor-0 exit code 0 (0x0)
drbd0: conn( WFReportParams -> Disconnecting )
drbd0: error receiving ReportState, l: 4!
drbd0: asender terminated
drbd0: Terminating asender thread
drbd0: Connection closed
drbd0: conn( Disconnecting -> StandAlone )

And on the other node:

drbd0: meta connection shut down by peer.
drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
drbd0: asender terminated
drbd0: Terminating asender thread
drbd0: sock was shut down by peer
drbd0: short read expecting header on sock: r=0
drbd0: Creating new current UUID
drbd0: Connection closed
drbd0: conn( NetworkFailure -> Unconnected )
drbd0: receiver terminated
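After that, both sides stay disconnected until you reconnect them by hand. The usual DRBD 8.3 split-brain recovery goes roughly like this (resource name "r0" is made up; the victim node loses its changes since the split):

----8<----
# on the node whose changes you sacrifice (device must not be in use):
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# on the surviving node, if it also dropped to StandAlone:
drbdadm connect r0

# check that both sides come back to Connected/UpToDate:
cat /proc/drbd
----8<----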

It seems that while migrating there is some overlapping write access to the
DRBD device, which makes DRBD decide it has a split-brain.
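
For reference, what triggers it is just a plain live migration, something like this (VM name "stunnel" and target host "jitxen02" taken from the config you quote below):

----8<----
xm migrate --live stunnel jitxen02
----8<----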

Even if I set

 after-sb-0pri discard-zero-changes;
 after-sb-1pri violently-as0p;
 after-sb-2pri violently-as0p;

I still get these log entries....
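
For context, those handlers sit in the net section of the resource (a sketch; resource name made up):

----8<----
resource r0 {
  net {
    allow-two-primaries;                  # required for live migration
    after-sb-0pri discard-zero-changes;
    after-sb-1pri violently-as0p;         # force the 0pri policy even with primaries involved
    after-sb-2pri violently-as0p;
  }
  [...]
}
----8<----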

So that's the point where a DLM-aware filesystem comes into play.
The question now is how to "mix" drbd, LVM and OCFS2/GFS into a working
playground... ;)

I had previously implemented the drbd -> ocfs2 -> sparse-files approach; that
one failed with a DRBD split-brain, too.

Perhaps drbd -> lvm -> ocfs2, and mounting that volume in the VM, could do the
trick?
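
Roughly like this (all names made up; this assumes the O2CB cluster stack is already configured on both dom0s, and note that plain LVM isn't cluster-aware, so cLVM or at least a manual rescan on the second node would be needed):

----8<----
# on one node: LVM on top of the dual-primary DRBD device
pvcreate /dev/drbd0
vgcreate vg_cluster /dev/drbd0
lvcreate -L 50G -n shared vg_cluster

# OCFS2 with one slot per node; needs /etc/ocfs2/cluster.conf and o2cb running
mkfs.ocfs2 -N 2 -L shared /dev/vg_cluster/shared

# then on both nodes:
mount -t ocfs2 /dev/vg_cluster/shared /mnt/shared
----8<----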

Cheers, Florian

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On behalf of 
> Rainer Sokoll
> Sent: Tuesday, March 3, 2009 15:36
> To: Nick Couchman
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] xm migrate headache
> 
> On Tue, Mar 03, 2009 at 07:27:18AM -0700, Nick Couchman wrote:
> 
> > So, in your Xen configs, are you using phy: to access the 
> DRBD devices 
> > directly?
> 
> Yes, you are correct. I've used a file based storage backend 
> before, but it was too slow.
> 
> drbd.conf:
> ----8<----
> resource stunnel {
>   on jitxen01 {
>     address 192.168.0.1:7794;
>     device /dev/drbd4;
>     disk /dev/XEN/stunnel;
>     meta-disk internal;
>   }
>   on jitxen02 {
>     address 192.168.0.2:7794;
>     device /dev/drbd4;
>     disk /dev/XEN/stunnel;
>     meta-disk internal;
>   }
>   net {
>     allow-two-primaries;
>     after-sb-0pri discard-zero-changes;
>     after-sb-1pri discard-secondary;
>   }
> }
> ----8<----
> 
> machine config in xen:
> 
> ----8<----
> name="stunnel"
> [...]
> disk=[ 'phy:/dev/drbd4,xvda,w', ]
> ----8<----
> 
> > If so, then you should be okay without a cluster-aware filesystem
> 
> Now I feel much better :-)
> 
> > - sorry about that, I was under the impression that you 
> were mounting 
> > the DRBD device on the dom0s and then storing disk files on it.
> 
> This sounds like a not-so-clever idea to me :-)
> 
> Rainer
> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
