Alexander Voronin
2010-Oct-05 13:11 UTC
[Gluster-users] Failsafe replication between two DC - getting trash in files.
Hi there. I'm trying to create a failsafe distributed FS with GlusterFS,
following the scheme shown in this image:
http://img716.imageshack.us/img716/4896/glusterfs.png
As you can see, any node of the FS may go down and the system should stay stable.
However, I still can't make it work. I'm using the latest git version of
glusterfs. Here are the configs I'm using on the servers and clients. dfs1,
dfs2, dfs3 and dfs4 are the canonical names for FS1, FS2, FS3 and FS4 on the
image, and of course they have DNS addresses. The configs differ from host to
host only in the remote-host options of the brick2 and brick3 volumes (see image).
What is really strange is that the system is stable and looks like it's working,
but I'm getting trash in the files when copying them to the gluster FS.
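(For reference, "trash" here means the copied file no longer matches its source; a quick way to check, assuming the volume is mounted at /mnt/gluster, the path being illustrative:)
md5sum /tmp/testfile
cp /tmp/testfile /mnt/gluster/testfile
md5sum /mnt/gluster/testfile
# the two checksums differ after the copy goes through the mount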
#########################################################################
# server config
#########################################################################
volume posix1
  type storage/posix
  option directory /storage
end-volume

volume brick1
  type features/posix-locks
  subvolumes posix1
end-volume

# --------------------------
volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host dfs3
  option remote-subvolume brick1
end-volume

# ---------------------------
volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host dfs4
  option remote-subvolume brick1
end-volume

# ---------------------------
volume replicate
  type cluster/replicate
  subvolumes brick1 brick2 brick3
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes replicate
  option auth.addr.brick1.allow 192.168.*,127.0.0.1
  option auth.addr.replicate.allow 192.168.*,127.0.0.1
end-volume
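(For reference, one way a volfile like this was typically loaded with the 2010-era CLI, assuming it is saved as /etc/glusterfs/server.vol; the paths are illustrative. Note also that the client volumes below connect to port 6969 while this server volume sets no listen port, so the socket transport would need a matching option:)
# on each server node
glusterfsd -f /etc/glusterfs/server.vol -l /var/log/glusterfs/server.log
# to accept connections on the non-default port the clients use, the server
# volume would also need: option transport.socket.listen-port 6969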
#########################################################################
# client config
#########################################################################
volume fs1-1
  type protocol/client
  option transport-type tcp
  option remote-host dfs1
  option transport.socket.nodelay on
  option transport.socket.remote-port 6969
  option remote-subvolume replicate
end-volume

volume fs2-1
  type protocol/client
  option transport-type tcp
  option remote-host dfs3
  option transport.socket.nodelay on
  option transport.socket.remote-port 6969
  option remote-subvolume replicate
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes fs1-1 fs2-1
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes mirror-0
end-volume

volume iocache
  type performance/io-cache
  option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]
  option cache-timeout 1
  subvolumes readahead
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 32MB
  subvolumes iocache #quickread
end-volume

volume statprefetch
  type performance/stat-prefetch
  subvolumes writebehind
end-volume
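(And the matching client side, mounting through the client volfile; the mountpoint is illustrative:)
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster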
Burnash, James
2010-Oct-05 13:17 UTC
[Gluster-users] Failsafe replication between two DC - getting trash in files.
I'm just guessing, but I don't think you can have the replicate volume in
both the client and the server configs ...
James Burnash, Unix Engineering
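A minimal sketch of that suggestion, with replication moved entirely to the client side so that each server only exports its locked brick (host and volume names follow the original post; this is an untested illustration, not a verified fix):
# server.vol -- identical on dfs1..dfs4
volume posix1
  type storage/posix
  option directory /storage
end-volume

volume brick1
  type features/posix-locks
  subvolumes posix1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.brick1.allow 192.168.*,127.0.0.1
  subvolumes brick1
end-volume

# client.vol -- one protocol/client per server, one replicate over all four
volume fs1
  type protocol/client
  option transport-type tcp
  option remote-host dfs1
  option remote-subvolume brick1
end-volume

volume fs2
  type protocol/client
  option transport-type tcp
  option remote-host dfs2
  option remote-subvolume brick1
end-volume

volume fs3
  type protocol/client
  option transport-type tcp
  option remote-host dfs3
  option remote-subvolume brick1
end-volume

volume fs4
  type protocol/client
  option transport-type tcp
  option remote-host dfs4
  option remote-subvolume brick1
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes fs1 fs2 fs3 fs4
end-volume
With this layout each write is replicated once, by the client, to all four bricks, rather than passing through two stacked replicate translators whose member bricks overlap.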
Alexander Voronin
2010-Oct-05 13:29 UTC
[Gluster-users] Failsafe replication between two DC - getting trash in files.
The mailing list seems to be very slow; I still haven't received an answer from James Burnash. So why not have replication on both the client and the server side? It seems logical to me: I've just extended this sample, http://www.gluster.com/community/documentation/index.php/Setting_up_AFR_on_two_servers_with_server_side_replication