My understanding is that to handle failures you need to create volumes
with distribute + replicate. Updates are also tricky: since only a
pointer to the file may be saved instead of its entire contents, the
operation can fail if the file does not already exist on the target
brick. For that reason I think the hashing is mostly a static mechanism.
Gluster team, please correct me if I am wrong.
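To illustrate why the write keeps going to the dead server, here is a rough
sketch of static name-based hashing. This is not GlusterFS's actual hash
algorithm (the real DHT translator assigns hash ranges per directory); the
brick names and the use of MD5 are assumptions purely for illustration:

```python
import hashlib

# Hypothetical brick names standing in for the 4 exported bricks.
BRICKS = ["server1", "server2", "server3", "server4"]

def pick_brick(filename):
    # The target brick depends only on the file name, not on which
    # bricks are currently up.
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return BRICKS[h % len(BRICKS)]

# "KAKA" always maps to the same brick. If that brick's process is
# killed, a plain distribute volume cannot redirect the write to
# another brick, so the client sees "Transport endpoint is not
# connected" instead of falling back to one of the remaining servers.
print(pick_brick("KAKA") == pick_brick("KAKA"))  # True: mapping is static
```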
On Wed, Mar 16, 2011 at 8:20 PM, HU Zhong <hz02ruc at gmail.com> wrote:
> hi,
>
> I started 4 glusterfsd processes to export 4 bricks, and 1 glusterfs process
> to mount them in a directory.
> If I write a file named "KAKA" into the mountpoint, it's hashed to
> server 2 and the write returns ok. Then I kill the process of server 2
> and write a file named "KAKA" again into the mountpoint to test
> whether glusterfs can hash the file to one of the remaining 3 servers.
> But the write operation got a "Transport endpoint is not connected"
> error on the command line. The log of the client process shows that it
> also tried to hash the file "KAKA" to server 2, not one of the
> remaining 3 servers. Is this the expected result?
>
> Actually, I expected glusterfs to write the file "KAKA" to one of the
> remaining 3 servers, so that when the process of server 2 is
> restarted, I can test how glusterfs handles the duplicate files.
>
> Can anyone help me? Thanks in advance!
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>