Hello everyone,
I'm using Glusterfs 3.3 and I'm having some difficulties setting up
geo-replication over ssh.
# gluster volume geo-replication test status
MASTER    SLAVE                                   STATUS
--------------------------------------------------------------------------------
test      ssh://sshux at yval1020:/users/geo-rep    faulty
test      file:///users/geo-rep                   OK
As you can see, the one in a local folder works fine.
This is my config :
Volume Name: test
Type: Replicate
Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: yval1010:/users/exp
Brick2: yval1020:/users/exp
Options Reconfigured:
geo-replication.indexing: on
cluster.eager-lock: on
performance.cache-refresh-timeout: 60
network.ping-timeout: 10
performance.cache-size: 512MB
performance.write-behind-window-size: 256MB
features.quota-timeout: 30
features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB
features.quota: on
nfs.port: 2049
This is the log :
[2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] <top>: syncing: gluster://localhost:test -> ssh://sshux at yval1020:/users/geo-rep
[2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 2f0b0eff-6166-4601-8667-6530561eea1c
[2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master with volume id 2f0b0eff-6166-4601-8667-6530561eea1c ...
[2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError
[2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
    main_i()
  File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in service_loop
    GMaster(self, args[0]).crawl_loop()
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 143, in crawl_loop
    self.crawl()
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 308, in crawl
    xtr0 = self.xtime(path, self.slave)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 74, in xtime
    xt = rsc.server.xtime(path, self.uuid)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 95] Operation not supported
Apparently there are some errors with xtime, and yet I have extended
attributes activated.
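For what it's worth, Errno 95 (EOPNOTSUPP) is what a filesystem returns when it refuses an extended-attribute operation, so one way to narrow this down is to probe the slave directory directly. The sketch below is only a rough check (the path and attribute name are mine, not gsyncd's): it tests user.* xattrs, while gsyncd's xtime marker lives in the trusted.* namespace, which only root can write — so a passing probe as a normal user still doesn't prove geo-rep's own xattr calls will succeed.

```python
import os
import tempfile

def xattr_supported(directory):
    """Return True if the filesystem holding `directory` accepts
    user-namespace extended attributes (rough proxy for xattr support)."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        # A refusal here raises OSError with errno 95 (EOPNOTSUPP),
        # the same errno as in the geo-rep log above.
        os.setxattr(path, b"user.georep.probe", b"1")
        return os.getxattr(path, b"user.georep.probe") == b"1"
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(path)

# e.g. run on the slave host: xattr_supported("/users/geo-rep")
```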
Any help will be gladly appreciated.
Anthony
Hi Vijay,

I used the tarball here : http://download.gluster.org/pub/gluster/glusterfs/LATEST/

> Date: Tue, 31 Jul 2012 07:39:51 -0400
> From: vkoppad at redhat.com
> To: sokar6012 at hotmail.com
> CC: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Geo rep fail
>
> Hi anthony,
>
> By Glusterfs-3.3 version, you mean this rpm
> http://bits.gluster.com/pub/gluster/glusterfs/3.3.0/.
> or If you are working with git repo, can you give me branch and Head.
>
> -Vijaykumar
Hi Vijay,
Some complementary info :
* SLES 11.2
* 3.0.26-0.7-xen
* glusterfs 3.3.0 built on Jul 16 2012 14:28:16
* Python 2.6.8
* rsync version 3.0.4
* OpenSSH_4.3p2, OpenSSL 0.9.8a 11 Oct 2005

ssh command used : ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem  <= key of user sshux
I also changed one line in gconf.py because I was having difficulties with
the ControlMaster option and the -S option:
# cls.ssh_ctl_args = ["-oControlMaster=auto", "-S", os.path.join(ctld, "gsycnd-ssh-%r@%h:%p")]
cls.ssh_ctl_args = ["-oControlMaster=no"]
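To spell out what that one-line change does, here is a small sketch (the wrapper function and the `ctld` default are mine, not gsyncd's). With ControlMaster=auto, every gsyncd ssh session is multiplexed over one control socket (`-S`); with ControlMaster=no, each session opens its own connection, bypassing the multiplexing code path entirely — presumably why it sidesteps trouble on an old OpenSSH 4.3.

```python
import os

def ssh_ctl_args(use_control_master, ctld="/tmp/gsyncd-ctl"):
    """The two variants of gconf.py's cls.ssh_ctl_args shown above."""
    if use_control_master:
        # upstream default: multiplex sessions over one control socket
        return ["-oControlMaster=auto",
                "-S", os.path.join(ctld, "gsycnd-ssh-%r@%h:%p")]
    # workaround: one plain connection per session, no multiplexing
    return ["-oControlMaster=no"]
```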
Gluster cmd :
# gluster volume geo-replication test ssh://sshux at yval1020:/users/geo-rep start
Thx for your help.
Anthony
> Date: Tue, 31 Jul 2012 08:18:52 -0400
> From: vkoppad at redhat.com
> To: sokar6012 at hotmail.com
> CC: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Geo rep fail
>
> Thanks anthony, I'll try to reproduce that.
>
> -Vijaykumar
>
> ----- Original Message -----
> From: "anthony garnier" <sokar6012 at hotmail.com>
> To: vkoppad at redhat.com
> Cc: gluster-users at gluster.org
> Sent: Tuesday, July 31, 2012 5:13:13 PM
> Subject: Re: [Gluster-users] Geo rep fail
>
>
>
> Hi Vijay,
>
> I used the tarball here : http://download.gluster.org/pub/gluster/glusterfs/LATEST/
>
>
>
>
> > Date: Tue, 31 Jul 2012 07:39:51 -0400
> > From: vkoppad at redhat.com
> > To: sokar6012 at hotmail.com
> > CC: gluster-users at gluster.org
> > Subject: Re: [Gluster-users] Geo rep fail
> >
> > Hi anthony,
> >
> > By Glusterfs-3.3 version, you mean this rpm
> > http://bits.gluster.com/pub/gluster/glusterfs/3.3.0/.
> > or If you are working with git repo, can you give me branch and Head.
> >
> > -Vijaykumar
> >
> > ----- Original Message -----
> > From: "anthony garnier" <sokar6012 at hotmail.com>
> > To: gluster-users at gluster.org
> > Sent: Tuesday, July 31, 2012 2:47:40 PM
> > Subject: [Gluster-users] Geo rep fail
> >
> >
> >
> > Hello everyone,
> >
> > I'm using Glusterfs 3.3 and I have some difficulties to setup geo-replication over ssh.
> >
> > # gluster volume geo-replication test status
> > MASTER    SLAVE                                   STATUS
> > --------------------------------------------------------------------------------
> > test      ssh://sshux at yval1020:/users/geo-rep    faulty
> > test      file:///users/geo-rep                   OK
> >
> > As you can see, the one in a local folder works fine.
> >
> > This is my config :
> >
> > Volume Name: test
> > Type: Replicate
> > Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c
> > Status: Started
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: yval1010:/users/exp
> > Brick2: yval1020:/users/exp
> > Options Reconfigured:
> > geo-replication.indexing: on
> > cluster.eager-lock: on
> > performance.cache-refresh-timeout: 60
> > network.ping-timeout: 10
> > performance.cache-size: 512MB
> > performance.write-behind-window-size: 256MB
> > features.quota-timeout: 30
> > features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB
> > features.quota: on
> > nfs.port: 2049
> >
> >
> > This is the log :
> >
> > [2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
> > [2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] <top>: syncing: gluster://localhost:test -> ssh://sshux at yval1020:/users/geo-rep
> > [2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 2f0b0eff-6166-4601-8667-6530561eea1c
> > [2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master with volume id 2f0b0eff-6166-4601-8667-6530561eea1c ...
> > [2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError
> > [2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] <top>: FAIL:
> > Traceback (most recent call last):
> >   File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
> >     main_i()
> >   File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
> >     local.service_loop(*[r for r in [remote] if r])
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in service_loop
> >     GMaster(self, args[0]).crawl_loop()
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 143, in crawl_loop
> >     self.crawl()
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 308, in crawl
> >     xtr0 = self.xtime(path, self.slave)
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 74, in xtime
> >     xt = rsc.server.xtime(path, self.uuid)
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
> >     return self.ins(self.meth, *a)
> >   File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
> >     raise res
> > OSError: [Errno 95] Operation not supported
> >
> >
> > Apparently there is some errors with xtime and yet I have extended attribute activated.
> > Any help will be gladly appreciated.
> >
> > Anthony
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Hi Vijay,

Thx for your help, I tried with root and it worked !

Many thanks,
Anthony

> Date: Thu, 2 Aug 2012 02:28:27 -0400
> From: vkoppad at redhat.com
> To: sokar6012 at hotmail.com
> CC: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Geo rep fail
>
> Hi anthony,
>
> What I understood from your invocation of geo-rep session is,
> you are trying to start geo-rep with slave as a normal-user.
> To successfully start geo-rep session, the slave needs to be a super user.
> Otherwise, if you really want to have the slave as a normal user, you should
> set up geo-rep through Mount-broker, the details of which you can get here:
>
> http://docs.redhat.com/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Slave.html
>
> Thanks,
> Vijaykumar
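For anyone finding this thread later: as far as I can tell from the Red Hat Storage guide linked above, the Mount-broker route boils down to a root-created broker directory plus a few options in the slave's glusterd.vol. A rough outline, reusing this thread's user name (sshux) and a hypothetical slave volume name — treat it as a pointer to the linked doc, not a tested recipe:

```
# on the slave, as root:
#   mkdir -p -m 0711 /var/mountbroker-root
# then in the slave's glusterd.vol (volume management section):
option mountbroker-root /var/mountbroker-root
option mountbroker-geo-replication.sshux slavevol
option geo-replication-log-group geogroup
```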