Re: [Xen-users] Xen with HA 
Tim Post wrote:
 No, that is not correct. For each of the 6 components (vm{1-6}) I have one 
drbd connection mirroring between 2 LVM logical volumes. Example: vm1 uses 
drbd1, which is attached to /dev/vg/vm1-disk on each host. The LV 
/dev/vg/vm1-disk exists on each host and has the same size. The drbd 
block device is active on only one of host A or B at a time. Suppose vm1 is 
active on host A. If host A crashes, heartbeat on host B detects this, 
activates drbd1, and boots vm1.
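For reference, the drbd.conf resource for vm1 would look roughly like the
sketch below (the hostnames, addresses, and port are placeholders, not our
real values):

    resource vm1 {
      protocol C;                       # fully synchronous replication
      on hosta {                        # placeholder name for host A
        device    /dev/drbd1;
        disk      /dev/vg/vm1-disk;     # the LV backing vm1
        address   192.168.0.1:7789;     # placeholder replication IP:port
        meta-disk internal;
      }
      on hostb {                        # placeholder name for host B
        device    /dev/drbd1;
        disk      /dev/vg/vm1-disk;
        address   192.168.0.2:7789;
        meta-disk internal;
      }
    }

One such resource per VM (vm2 on drbd2, and so on) gives the six
independent mirrors.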
On Mon, 2007-01-08 at 12:18 -0200, Marco Sinhoreli wrote:
 
Hello everybody,
We are considering using virtual machines in a high-availability scenario 
in which the VMs are divided between two hosts. Both hosts will carry the 
same 6 VMs: on host A the VMs vm1, vm3, and vm5 will be active, while on 
host B the VMs vm2, vm4, and vm6 will be active. For each VBD, allocated 
in one LV (logical volume of LVM), we intend to implement the following 
architecture.
    P = primary | S = secondary
------------------------------------
              Host A
------------------------------------
 vm1   vm2   vm3   vm4   vm5   vm6
  |     |     |     |     |     |
drbd1 drbd2 drbd3 drbd4 drbd5 drbd6
  P     S     P     S     P     S
------------------------------------
  |     |     |     |     |     |
  |     |     NETWORK     |     |
  |     |     |     |     |     |
------------------------------------
  S     P     S     P     S     P
drbd1 drbd2 drbd3 drbd4 drbd5 drbd6
  |     |     |     |     |     |
 vm1   vm2   vm3   vm4   vm5   vm6
------------------------------------
              Host B
------------------------------------
Consider that where the block device is primary, the VM is active on that 
host, and where the block device is secondary, the VM is inactive on that 
host.
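Each active VM's VBD then points at its drbd device. A sketch of what
/etc/xen/vm1 could look like (memory size, kernel path, and network
settings are placeholders):

    # /etc/xen/vm1 -- sketch; values other than the disk line are placeholders
    name   = "vm1"
    memory = 256
    kernel = "/boot/vmlinuz-2.6-xen"
    root   = "/dev/sda1 ro"
    disk   = [ 'phy:/dev/drbd1,sda1,w' ]   # the VBD sits on the drbd device
    vif    = [ '' ]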
 
You may find this easier using network-attached (centralized) storage in
lieu of block-level mirroring; however, your explanation is somewhat
confusing.
From your block diagram, it looks as though you plan to live-migrate in
the event of failure or scheduled shutdown, using nbd to maintain the
file system for each of the 4 components making up host A or B, each
having separate file systems. Is this correct?
 
Another possibility is a scheduled shutdown. In this case host A stops 
the services, and the heartbeat service does a live migration of vm1 and 
deactivates drbd1. On host B, the heartbeat service receives the call to 
activate the vm1 resource, activates drbd1, and checks whether vm1 is 
already running: if it is, the script finishes; if not, it boots vm1.
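A heartbeat v1 resource script implementing that logic could look
something like this (names and paths follow the example above; error
handling is left out):

    #!/bin/sh
    # Sketch: bring drbd1/vm1 up or down under heartbeat's control.
    case "$1" in
      start)
        drbdadm primary vm1                  # promote this side
        if ! xm list vm1 >/dev/null 2>&1; then
          xm create /etc/xen/vm1             # boot vm1 only if not running
        fi
        ;;
      stop)
        xm shutdown -w vm1                   # wait for a clean shutdown
        drbdadm secondary vm1                # demote so the peer can promote
        ;;
      status)
        xm list vm1 >/dev/null 2>&1 && echo running || echo stopped
        ;;
    esac

Alternatively, the stock drbddisk agent shipped with heartbeat can handle
the primary/secondary transitions, leaving the script to manage only the
domU.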
Another alternative is to use drbd8, which allows drbd to keep the block 
devices active on both hosts at once, much like a NAS would; but drbd8 is 
still an alpha version (a configuration sketch follows below). vm1 on 
hosts A and B will not access the same file system at the same time: only 
on one host will vm1 access the file system, as I described. Heartbeat 
will have a script responsible for reallocating the VMs to the surviving 
peer in case of a hardware failure on one of the hosts. In case of a 
planned stop, the host will migrate its VMs to its peer without stopping 
the services that are running. A prolonged downtime and a total loss of 
the VMs' activity will occur only in case of an abrupt hardware failure 
of the servers.
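For the drbd8 variant, dual-primary operation is enabled per resource,
which is what a live migration needs while both hosts briefly touch the
device (a sketch; drbd8 only, and hostb is a placeholder peer name):

    # drbd.conf fragment (drbd8 only): let both nodes be primary at once
    resource vm1 {
      net {
        allow-two-primaries;
      }
    }

    # On host A, once both sides are primary:
    #   xm migrate --live vm1 hostb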
    
 
So at no point will two active VMs be accessing the same file system,
correct? This would be easier if the file systems were on a RAID-backed
NAS.
 We would like to hear from the list whether this solution has been 
implemented by anyone; if so, whether there is documentation covering 
such an implementation; and if not, how to implement HA integrated 
with Xen.
    
 
Your solution should work exactly as you described it, if I'm
understanding it correctly. My only recommendation is to ditch
block-level mirroring and go with centralized storage.
 
OK.
 Have you done a similar configuration of HA and Xen using DRBD and 
heartbeat?
You can, of course, use NBD to mirror the storage if a decent RAID
doesn't give you the desired comfort level. What you have, in essence,
is four single-pole double-throw switches (an oversimplified example);
ensure there is no "short" (no two P/S domains accessing the same FS at
once) and you're fine :)
A little over-paranoid considering the I/O costs, but it should work well :)
 
 
Best regards
Marco Sinhoreli
Linux specialist
Samurai Projetos Especiais
São Paulo - Brazil
 
Best,
--Tim
 
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
 