I can offer a little general information. My understanding is this:
It is not like failover with a virtual IP. Instead, the gluster clients
connect to all storage servers at the same time. If one of them becomes
unavailable, the client can still reach the remaining one(s). Locks are
preserved on all remaining nodes. Writes are marked (in the metadata) as
having been completed on the remaining nodes, and NOT completed on
whichever node is down.
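If you want to see those markers, the replicate translator stores them
as extended attributes on each brick, and you can dump them with
getfattr. The brick path and file below are just placeholders for
whatever your setup uses:

  # Run on a storage server, against the brick directory itself,
  # not against the client mount.
  getfattr -d -m trusted.afr -e hex /export/brick1/path/to/file

  # A non-zero trusted.afr.<volume>-client-N value means brick N
  # still has pending writes for this file and needs healing.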
On access, the file will be healed if the downed node has returned. Or
you can force healing of all files when the node comes back, simply by
accessing all files with a 'find' command. See self healing in the wiki
for more information on this.
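The usual trigger looks something like the following (just a sketch;
adjust the mount point to wherever the client mounts the volume):

  # Walk the entire volume from a client mount so every file gets
  # looked up, which kicks off self-heal wherever it is needed.
  find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null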
I am not familiar with OpenQRM, so I don't know if or how that would
have to be tweaked for integration.
Chris
----- "Layer7 Consultancy" <info at layer7.be> wrote:
> Hi all,
>
> I am considering the built-in NFS functionality of Gluster to build a
> virtual server environment. The idea is to have 4 or 5 hosts (KVM or
> Xen) that all contain around 300GB of 15K rpm SAS storage in a RAID5
> array. On each of the host servers I would install a VM with the
> Gluster Platform and expose all of this storage through NFS to my
> OpenQRM installation, which would then host all the other VMs on the
> same servers.
> An alternative idea is to have the storage boxes separate from the VM
> hosts, but the basic idea stays the same I think.
>
> Now, from what I understand, the NFS storage that is exposed to the
> clients is accessed through the management IP of the first Gluster
> Platform server. My biggest question is what exactly happens when the
> first storage node goes down. Does the platform offer some kind of
> VRRP setup that fails over the IP to one of the other nodes? Is the
> lock information preserved and how does this all work internally?
>
> Since I would be using KVM or Xen, it would in theory be possible to
> build the FUSE client on the host servers, though I am still in doubt
> about how OpenQRM will handle this. When choosing local storage, OpenQRM
> expects raw disks (I think) and creates LVM groups on these disks in
> order to allow snapshotting and backups. OpenQRM would also not know
> this is shared storage.
>
> Does anyone have some insight on a setup like this?
>
> Best regards,
> Koen
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users