It will mount the server1 (192.168.1.1) exported directory on the client machine's mount point.

volume brick2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.2
  option remote-subvolume brick
end-volume

It will mount the server2 (192.168.1.2) exported directory on the client machine's mount point.

volume brick3
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.3
  option remote-subvolume brick
end-volume

It will mount the server3 (192.168.1.3) exported directory on the client machine's mount point.

volume brick4
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.4
  option remote-subvolume brick
end-volume

It will mount the server4 (192.168.1.4) exported directory on the client machine's mount point.

volume brick5
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.5
  option remote-subvolume brick
end-volume

It will mount the server5 (192.168.1.5) exported directory on the client machine's mount point.

volume brick6
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.6
  option remote-subvolume brick
end-volume

It will mount the server6 (192.168.1.6) exported directory on the client machine's mount point.

volume brick-ns1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.1
  option remote-subvolume brick-ns   # Note the different remote volume name.
end-volume

It will mount the server1 (192.168.1.1) exported directory (/home/export-ns/) on the client machine's mount point.

volume brick-ns2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.2
  option remote-subvolume brick-ns   # Note the different remote volume name.
end-volume

It will mount the server2 (192.168.1.2) exported directory (/home/export-ns/) on the client machine's mount point.

volume afr1
  type cluster/afr
  subvolumes brick1 brick4
end-volume

Here brick1 replicates all files to brick4; is that correct?

volume afr2
  type cluster/afr
  subvolumes brick2 brick5
end-volume

volume afr3
  type cluster/afr
  subvolumes brick3 brick6
end-volume

volume afr-ns
  type cluster/afr
  subvolumes brick-ns1 brick-ns2
end-volume

Here the namespace information is replicated; is that correct?

volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1 afr2 afr3
end-volume

What does unify actually do here?
What is the meaning of "namespace" in GlusterFS?
What about storage scalability in this design, on both the server and the client side? Can you please give an example?
How can I do an HA + unify design with multiple servers and multiple clients, for example 8 servers and two clients? My own guesses at the matching server-side volfile and at an 8-server layout are included below; please correct them.

Can anyone please help me understand these points and correct me where I am wrong?
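For reference, here is what I think the matching server-side volfile looks like on server1 and server2 (the two machines that also export the namespace directory), based on my reading of the wiki page. This is only my sketch; the directory paths and the auth option name are my assumptions, so please correct it if it is wrong:

# My sketch of the server1 / server2 volfile.
# Servers 3-6 would only have the "brick" and "server" volumes,
# since they do not export a namespace directory.
volume brick
  type storage/posix
  option directory /home/export        # the exported data directory
end-volume

volume brick-ns
  type storage/posix
  option directory /home/export-ns     # the namespace directory
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *       # I assume "*" is too open for production use
  option auth.addr.brick-ns.allow *
  subvolumes brick brick-ns
end-volume

If I understand correctly, the volume names here ("brick" and "brick-ns") are what the remote-subvolume option on the client side refers to.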
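To make my last question more concrete, here is my guess at how the client volfile would grow for 8 servers. I am assuming IP addresses 192.168.1.1 through 192.168.1.8, and I have arbitrarily placed the namespace on server1 and server5; I assume both clients would simply use an identical copy of this volfile:

# brick1 .. brick8: one protocol/client block per server, exactly like the
# blocks above, with remote-host 192.168.1.1 ... 192.168.1.8 and
# remote-subvolume brick

volume brick-ns1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.1
  option remote-subvolume brick-ns
end-volume

volume brick-ns2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.5
  option remote-subvolume brick-ns
end-volume

# four replicated pairs instead of three
volume afr1
  type cluster/afr
  subvolumes brick1 brick5
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick6
end-volume

volume afr3
  type cluster/afr
  subvolumes brick3 brick7
end-volume

volume afr4
  type cluster/afr
  subvolumes brick4 brick8
end-volume

volume afr-ns
  type cluster/afr
  subvolumes brick-ns1 brick-ns2
end-volume

volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1 afr2 afr3 afr4
end-volume

Is this the right way to extend the design, or does the second client need anything different?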
Thanks for your time.
L. Mohan
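P.S. On each client I was planning to mount the volume like this, assuming I save the client volfile as /etc/glusterfs/client.vol (just my chosen path):

  glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

Please tell me if that is not the right way to mount with this kind of volfile.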