Which version of GlusterFS and Python are you running?
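
That NameError points at the Python version: the next() builtin was only added in Python 2.6, so on 2.5 or older gsyncd's tailer will fail exactly the way your traceback shows. If upgrading Python isn't an option, a shim along these lines (a minimal sketch, not an official patch) near the top of resource.py should restore the missing builtin:

    # Minimal compatibility sketch: next() is a builtin only on Python >= 2.6.
    # On older interpreters, fall back to the Python 2 iterator protocol,
    # where the same operation is spelled as the iterator's .next() method.
    try:
        next
    except NameError:
        def next(it):
            return it.next()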
Thanks,
-Venky
----- Original Message -----
> From: "Loc Luu" <apdx_tanloc2k5 at yahoo.com>
> To: gluster-users at gluster.org
> Sent: Friday, October 5, 2012 9:39:43 AM
> Subject: [Gluster-users] Geo-replication fail
>
> Dear all,
> I set up geo-replication manually:
> [root at localhost ~]# gluster volume geo-replication apache-replicate file:///data/export start
> Starting geo-replication session between apache-replicate & file:///data/export has been successful
> [root at localhost ~]# gluster volume geo-replication apache-replicate file:///data/export status
> MASTER               SLAVE                  STATUS
> --------------------------------------------------
> apache-replicate     file:///data/export    faulty
> [root at localhost ~]# gluster volume geo-replication apache-replicate file:///data/export config
> log_level: DEBUG
> gluster_log_file: /usr/local/var/log/glusterfs/geo-replication/apache-replicate/file%3A%2F%2F%2Fdata%2Fexport.gluster.log
> ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> session_owner: e12b5e8c-7191-415e-8e44-3aa92c5bab32
> remote_gsyncd: /usr/local/libexec/glusterfs/gsyncd
> state_file: /var/lib/glusterd/geo-replication/apache-replicate/file%3A%2F%2F%2Fdata%2Fexport.status
> gluster_command_dir: /usr/local/sbin/
> pid_file: /var/lib/glusterd/geo-replication/apache-replicate/file%3A%2F%2F%2Fdata%2Fexport.pid
> log_file: /usr/local/var/log/glusterfs/geo-replication/apache-replicate/file%3A%2F%2F%2Fdata%2Fexport.log
> gluster_params: xlator-option=*-dht.assert-no-child-down=true
>
> The log file has the following entries:
>
> [2012-10-05 10:41:24.454405] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
> [2012-10-05 10:41:24.454719] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
> [2012-10-05 10:41:24.556736] I [gsyncd:354:main_i] <top>: syncing: gluster://localhost:apache-replicate -> file:///data/export
> [2012-10-05 10:41:24.576204] D [repce:175:push] RepceClient: call 28550:47285057499504:1349408484.58 __repce_version__() ...
> [2012-10-05 10:41:24.802949] D [repce:190:__call__] RepceClient: call 28550:47285057499504:1349408484.58 __repce_version__ -> 1.0
> [2012-10-05 10:41:24.803200] D [repce:175:push] RepceClient: call 28550:47285057499504:1349408484.8 version() ...
> [2012-10-05 10:41:24.805274] D [repce:190:__call__] RepceClient: call 28550:47285057499504:1349408484.8 version -> 1.0
> [2012-10-05 10:41:24.879362] D [resource:667:inhibit] DirectMounter: auxiliary glusterfs mount in place
> [2012-10-05 10:41:26.560688] E [syncdutils:184:exception] <top>: FAIL:
> Traceback (most recent call last):
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 210, in twrap
>     tf(*aa)
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 127, in tailer
>     next
> NameError: global name 'next' is not defined
> [2012-10-05 10:41:26.563252] I [syncdutils:142:finalize] <top>: exiting.
> [2012-10-05 10:41:26.566737] D [monitor(monitor):94:monitor] Monitor: worker died before establishing connection
>
> I have tried everything I could think of, including checking the gsyncd location, but it still doesn't work.
>
> Do you have any suggestions? Any help is appreciated. Many thanks!
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>