Hello all,

I'm trying GlusterFS replication for my HA cluster, but there is one thing I haven't understood. To automatically replicate files across all nodes, which directory should I work in? This is my setup:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[root@node1 ~]# df -H
Filesystem      Size  Used  Avail Use% Mounted on
rootfs           21G  850M    19G   5% /
/dev/root        21G  850M    19G   5% /
none            1.1G  316k   1.1G   1% /dev
tmpfs           1.1G     0   1.1G   0% /dev/shm
/dev/sda2       479G  298M   479G   1% /export/brick1
localhost:/gv0  479G  298M   479G   1% /mnt

[root@node2 ~]# df -H
Filesystem      Size  Used  Avail Use% Mounted on
rootfs           21G  849M    19G   5% /
/dev/root        21G  849M    19G   5% /
none            1.1G  308k   1.1G   1% /dev
tmpfs           1.1G     0   1.1G   0% /dev/shm
/dev/sda2       479G  297M   479G   1% /export/brick1
localhost:/gv0  479G  298M   479G   1% /mnt

[root@node1 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 9967e4bb-48bd-43e0-ae25-d985df935ea3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node2.xxx.com:/export/brick1
Brick2: node1.xxx.com:/export/brick1

[root@node2 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 9967e4bb-48bd-43e0-ae25-d985df935ea3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node2.xxx.com:/export/brick1
Brick2: node1.xxx.com:/export/brick1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Should I work in the /mnt mount or in /export/brick1? If I work in /export/brick1 I get an "I/O error".
Is my setup correct?

Thank you so much.
Regards
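For reference, a 1 x 2 replicated volume like gv0 above is normally created along these lines (a sketch using the hostnames and brick paths from the output above, and assuming the two peers have already been probed into the trusted pool):

    # run once, from node1, after 'gluster peer probe node2.xxx.com'
    gluster volume create gv0 replica 2 \
        node2.xxx.com:/export/brick1 node1.xxx.com:/export/brick1
    gluster volume start gv0

    # mount the volume locally on each node
    # (this is the localhost:/gv0 line shown in df above)
    mount -t glusterfs localhost:/gv0 /mnt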
Yes, everything must go through the mount point. Everything will automatically replicate to both bricks.

VHosting Solution <service@vhosting-it.com> wrote:
> I'm trying GlusterFS replication for my HA cluster, but there is one thing
> I haven't understood. To automatically replicate files across all nodes,
> which directory should I work in?
> [...]
> Should I work in the /mnt mount or in /export/brick1? If I work in
> /export/brick1 I get an "I/O error". Is my setup correct?
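A minimal way to see the replication in action (a sketch; 'testfile' is just an example name, not something from your setup):

    # on node1: create a file through the client mount, never inside the brick
    [root@node1 ~]# echo test > /mnt/testfile

    # GlusterFS itself places the copy in the brick directory on both nodes
    [root@node1 ~]# ls /export/brick1
    testfile
    [root@node2 ~]# ls /export/brick1
    testfile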
On Fri, Feb 08, 2013 at 12:33:16AM +0100, VHosting Solution wrote:
> To automatically replicate files across all nodes, which directory
> should I work in?

You must mount the volume on a client, and work with it there.
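For example, from any machine with the GlusterFS FUSE client installed (a sketch; either node name works as the mount server, /mnt is just the mount point from your setup, and the copied file path is illustrative):

    mount -t glusterfs node1.xxx.com:/gv0 /mnt
    cp /path/to/somefile /mnt/    # the client replicates the write to both bricks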