[Xen-users] Re: [Rocks-Discuss] Rocks or Virtual Cluster?

To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Re: [Rocks-Discuss] Rocks or Virtual Cluster?
From: "lists@xxxxxxxxxxxx" <lists@xxxxxxxxxxxx>
Date: Mon, 19 Jan 2009 19:32:50 -0600
Delivery-date: Mon, 19 Jan 2009 17:34:14 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <CFD807D83949B74DA54AB08DF66B333E02324386@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: lists@xxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Oh wait, I think I misunderstood this. So Rocks really isn't something you'd use
for, say, a redundant Xen cluster. The way the article reads, I thought Rocks was
a bit of both worlds: HPC/SSI-type applications AND a distributed, redundant
cluster. Now I'm seeing that Rocks isn't at all what I was thinking, or what the
article alludes to for newbies.

Darn! A clustered, redundant Xen environment sounded too good to be true.
Then again, I'm sure there are other tools out there that do what I'm after.
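
That said, plain Xen does seem to cover the live-migration piece I'm after,
provided the guest's disk sits on shared storage both hosts can see. Here is
roughly what I had in mind, as an untested sketch using the libvirt Python
bindings (the guest name and the node1/node2 hosts are made up):

#!/usr/bin/env python
# Untested sketch: live-migrate a running Xen guest from one dom0 to another
# using the libvirt Python bindings. Guest name, host names and the
# shared-storage assumption are placeholders for my environment.

import libvirt

GUEST   = "lamp-vm1"              # made-up guest name
SRC_URI = "xen+ssh://node1/"      # dom0 the guest runs on now
DST_URI = "xen+ssh://node2/"      # dom0 to move it to

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

dom = src.lookupByName(GUEST)

# VIR_MIGRATE_LIVE copies memory across while the guest keeps running;
# the disk has to live on storage both hosts can reach (FC, iSCSI, NFS).
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("moved %s to %s" % (new_dom.name(), DST_URI))

src.close()
dst.close()

From the shell it should be the same idea with something like
"xm migrate --live <guest> <otherhost>", with the xend relocation server
enabled on the destination, though I haven't actually tried that yet.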

Mike



On Mon, 19 Jan 2009 11:56:06 -0800, Bart Brashers wrote:
>
>
> Rocks is really all about high performance computing (doing math) and not
> redundancy. There is no support for building redundant frontends, for
> example. Rocks can create multiple virtual compute nodes inside a physical
> compute node, but I don't know if Xen can move them from node to node. I
> suspect not. The virtual compute nodes that it creates are also tailored
> to HPC, by default.
>
> You might be better served just using Xen without Rocks. Since you most
> likely would have to create a highly customized virtual machine environment
> to suit your needs, you won't really be taking advantage of the Rocks
> aspect of things.
>
> My $0.02.
>
> Bart
>
>> No one has read the article??? :)
>> Must be someone on here who can give me a little input on this. Trying to
>> find out if the article is talking about creating a virtual environment
>> inside of a cluster or using multiple machines to create a rocks cluster.
>>
>> Mike
>>
>>
>> On Thu, 15 Jan 2009 12:06:32 -0600, lists@xxxxxxxxxxxx wrote:
>>> Hi folks,
>>>
>>> Looking at Rocks as a possible solution for better redundant services.
>>> While my Red Hat GFS cluster has been useful, it is not as useful as what
>>> Rocks appears to potentially be. From what I'm reading, it sounds like
>>> Rocks would give me a great deal more usability. The Linux Magazine
>>> article I read seems to say that I can build a redundant cluster of
>>> VMware/Xen backends and, at the same time, get the benefits of an SSI
>>> cluster? Is this true?
>>>
>>> Based on the article, I'm looking for a little additional information so
>>> that I can get started on my first Rocks cluster. Sorry if I don't have
>>> the terminology correct just yet.
>>>
>>> In my application, I would like to use Rocks as a physical cluster that
>>> would allow me to have redundant VMware and Xen servers.
>>>
>>> So for example:
>>> - head node/controller
>>> - server node 1 - redundant
>>> - server node 2 - redundant
>>> - server node 3 - redundant
>>>
>>> The back-end servers would run redundant VMware/Xen servers. The guests
>>> would be LAMP servers along with other network resources. I also use a
>>> mix of Fibre Channel and Ethernet storage systems. Some is connected
>>> directly to servers, some is connected to filer heads which export CIFS,
>>> NFS, etc.
>>>
>>> I have plenty of physical boxes to start working with.
>>> My very first question, based on the short article I read, is as follows.
>>>
>>> I am assuming the head node requires less powerful resources because it
>>> is not going to host any guests, since it is only the controller. But
>>> while the article mentions CPU/RAM, it's not very clear on what the real
>>> requirements might be. Should I use a very powerful server for the
>>> controller node, or am I wasting resources? I'm guessing it is mostly
>>> just redirecting traffic, so if anything it might need good I/O speeds.
>>> If the traffic doesn't flow through the controller, then I could see
>>> that it might not need much speed, just good availability and
>>> accessibility.
>>>
>>> Finally, one of the problems I am having is that VMware Server doesn't
>>> seem to have redundancy capabilities, and this is what I badly need. I'm
>>> using VMware Server 2.0 for Windows servers, so the problem is that if a
>>> machine needs to go down or whatever, I have to shut down the guests,
>>> move them to another server, and fire them back up; it's simply not
>>> efficient. I think Xen allows for redundancy of Linux guests, but I'm
>>> not sure about Windows machines. The article seems to suggest that using
>>> Rocks I can get redundancy for either.
>>>
>>> Thanks for any help you can provide!
>>>
>>> Mike
>



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
