Hi,
I can answer point 1. GlusterFS 3.3 (still in beta) does finer-grained locking
during self-heal, which is what VM images like.
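On point 2, running bricks and VMs on the same boxes is feasible. A minimal
sketch of such a setup (node names, brick paths, and the mount point are my
assumptions, not from this thread):

```shell
# Hypothetical 4-node layout: each server hosts a brick AND mounts the
# volume locally for its VMs. Names like node1..node4 are placeholders.

# From node1, add the other servers to the trusted storage pool:
gluster peer probe node2
gluster peer probe node3
gluster peer probe node4

# Create a distributed-replicated volume: two replica pairs over four bricks.
gluster volume create vmstore replica 2 \
    node1:/export/brick1 node2:/export/brick1 \
    node3:/export/brick1 node4:/export/brick1
gluster volume start vmstore

# Each node then mounts the volume from itself and runs VM images off it:
mount -t glusterfs localhost:/vmstore /var/lib/xen/images
```

With replica 2, bricks are paired in the order listed, so make sure paired
bricks land on different servers.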
Gerald
----- Original Message -----
> From: "Miles Fidelman" <mfidelman at meetinghouse.net>
> To: gluster-users at gluster.org
> Sent: Tuesday, November 1, 2011 6:31:35 PM
> Subject: [Gluster-users] small cluster question
>
> Hi Folks,
>
> I'm in the process of expanding a 2-node, high-availability cluster
> to 4 nodes. Of course that means that my current approach to mirroring
> data (DRBD) breaks, and I need to use either a SAN or a cluster file
> system - and GlusterFS sure looks like it might fit the bill.
>
> Two questions:
>
> 1. I'm running a hypervisor (Xen) on each node, and my goal is to
> support VM migration for both load leveling and failover. As I
> understand it, earlier versions of Gluster weren't particularly
> friendly
> to VMs - with self-healing hanging client access. Am I correct in
> understanding that newer releases don't have this problem?
>
> 2. It looks like the standard Gluster configuration separates storage
> bricks from client (compute) nodes. Is it feasible to run virtual
> machines on the same servers that are hosting storage? (I'm working
> with 4 multi-core servers, each with 4 large drives attached - I'm not
> really in a position to split things up.)
>
> Any comments, advice, suggestions are most welcome.
>
> Thanks very much,
>
> Miles Fidelman
>
> --
> In theory, there is no difference between theory and practice.
> In practice, there is. .... Yogi Berra
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>