Matthew Benstead
2019-Jun-28 18:19 UTC
[Gluster-users] Geo-Replication Changelog Error - is a directory
Hello,

I'm having some issues establishing a geo-replication session between a 7-server distribute cluster as the primary volume and a 2-server distribute cluster as the secondary volume. Both are running the same version of Gluster on CentOS 7: glusterfs-5.3-2.el7.x86_64.

I was able to set up the replication keys, user, groups, etc. and establish the session, but it goes faulty quickly after starting. The error from gsyncd.log is:

    Changelog register failed error=[Errno 21] Is a directory

We made an attempt about two years ago to configure geo-replication but abandoned it. Now, with a new cluster, I wanted to get it set up again, but it looks like changelogs have been accumulating since then:

[root@gluster07 .glusterfs]# ls -lh changelogs > /var/tmp/changelogs.txt
[root@gluster07 ~]# head /var/tmp/changelogs.txt
total 11G
-rw-r--r--. 1 root root  130 Jun 27 13:48 CHANGELOG
-rw-r--r--. 1 root root 2.6K Jun 19  2017 CHANGELOG.1497891971
-rw-r--r--. 1 root root  470 Jun 19  2017 CHANGELOG.1497892055
-rw-r--r--. 1 root root  186 Jun 19  2017 CHANGELOG.1497892195
-rw-r--r--. 1 root root  458 Jun 19  2017 CHANGELOG.1497892308
-rw-r--r--. 1 root root  188 Jun 19  2017 CHANGELOG.1497892491
-rw-r--r--. 1 root root  862 Jun 19  2017 CHANGELOG.1497892828
-rw-r--r--. 1 root root  11K Jun 19  2017 CHANGELOG.1497892927
-rw-r--r--. 1 root root 4.4K Jun 19  2017 CHANGELOG.1497892941
[root@gluster07 ~]# tail /var/tmp/changelogs.txt
-rw-r--r--. 1 root root 130 Jun 27 13:47 CHANGELOG.1561668463
-rw-r--r--. 1 root root 130 Jun 27 13:47 CHANGELOG.1561668477
-rw-r--r--. 1 root root 130 Jun 27 13:48 CHANGELOG.1561668491
-rw-r--r--. 1 root root 130 Jun 27 13:48 CHANGELOG.1561668506
-rw-r--r--. 1 root root 130 Jun 27 13:48 CHANGELOG.1561668521
-rw-r--r--. 1 root root 130 Jun 27 13:48 CHANGELOG.1561668536
-rw-r--r--. 1 root root 130 Jun 27 13:49 CHANGELOG.1561668550
-rw-r--r--. 1 root root 130 Jun 27 13:49 CHANGELOG.1561668565
drw-------. 2 root root  10 Jun 19  2017 csnap
drw-------. 2 root root  37 Jun 19  2017 htime

Could this be related? When deleting the old replication session I made sure to try the 'delete reset-sync-time' option, but it failed with:

    gsyncd failed to delete session info for storage and 10.0.231.81::pcic-backup peers
    geo-replication command failed

Here is the volume info:

[root@gluster07 ~]# gluster volume info storage

Volume Name: storage
Type: Distribute
Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
Status: Started
Snapshot Count: 0
Number of Bricks: 7
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
Brick3: 10.0.231.52:/mnt/raid6-storage/storage
Brick4: 10.0.231.53:/mnt/raid6-storage/storage
Brick5: 10.0.231.54:/mnt/raid6-storage/storage
Brick6: 10.0.231.55:/mnt/raid6-storage/storage
Brick7: 10.0.231.56:/mnt/raid6-storage/storage
Options Reconfigured:
features.quota-deem-statfs: on
features.read-only: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on

Any ideas?

Thanks,
-Matthew
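
P.S. For anyone wanting to gauge how many rotated changelogs have piled up on a brick before touching anything: the snippet below is a minimal, self-contained sketch. It builds a throwaway temp directory with fake entries standing in for the real brick path (on this cluster that would be /mnt/raid6-storage/storage/.glusterfs/changelogs), since pointing find at a live brick is the only change needed. Only the rotated CHANGELOG.<epoch> files are counted, not the live CHANGELOG file or the csnap/htime directories.

```shell
# Sketch: count rotated CHANGELOG.<epoch> files in a brick's changelog
# directory. A throwaway temp dir with fake entries stands in for the
# real path (/mnt/raid6-storage/storage/.glusterfs/changelogs).
dir=$(mktemp -d)
mkdir "$dir/csnap" "$dir/htime"    # directories, matching the real layout
touch "$dir/CHANGELOG"             # the live changelog (no epoch suffix)
touch "$dir/CHANGELOG.1497891971" "$dir/CHANGELOG.1561668565"
# -type f excludes the csnap/htime directories; the 'CHANGELOG.*' pattern
# excludes the live CHANGELOG (which has no dot-suffix):
stale=$(find "$dir" -maxdepth 1 -type f -name 'CHANGELOG.*' | wc -l)
echo "$stale rotated changelog files"   # prints: 2 rotated changelog files
rm -rf "$dir"
```

On the real brick, swapping the mktemp directory for the brick's .glusterfs/changelogs path (and dropping the touch/mkdir/rm lines) gives the count without modifying anything.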