Gilberto Ferreira
2025-Nov-22 19:17 UTC
[Gluster-users] Failed to Establish Geo-replication Session Please check gsync config file. Unable to get statefile's name
Hi
I had succeeded in creating the session with the other side.
But I got a Faulty status
In the gsyncd.log I got this:
Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
--ignore-missing-args . -e ssh -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
-p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-2j2yeofa/5afb71218219138854b3c5a8eab300a4.sock
-caes128-ctr gluster3:/proc/2662/cwd}, {error=12}]
This also happens with Gluster 12dev.
After compiling Gluster 12dev, I successfully created a geo-rep session
but got the error above.
Any clue?
Best Regards
On Sat, Nov 22, 2025 at 15:12, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> Hi Gilberto,
>
> It should as long as it's the same problem.
>
> It will be nice to share your experience in the mailing list.
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Nov 22, 2025 at 18:13, Gilberto Ferreira
> <gilberto.nunes32 at gmail.com> wrote:
> Hi
>
> Should it work with Debian Trixie (13) as well?
>
> I will try it
>
>
>
> ---
>
>
> Gilberto Nunes Ferreira
> +55 (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Sat, Nov 22, 2025 at 12:37, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> Hi Gilberto,
>
> I think debian12 packages don't have
>
https://github.com/gluster/glusterfs/pull/4404/commits/c433a178e8208e1771fea4d61d0a22a95b8bc74b
>
> Run on source and destination this command and try again:
> sed -i 's/readfp/read_file/g'
> /usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncdconfig.py
>
> In my test setup, where source and destination are each a single Debian 12
> node with a hackishly created gluster_shared_storage volume, after executing
> the sed, I got:
>
>
> # gluster volume geo-replication vol1 geoaccount at gluster2::georep create push-pem
> Creating geo-replication session between vol1 & geoaccount at gluster2::georep
> has been successful
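For context on the sed in Strahil's message: `ConfigParser.readfp()` was deprecated since Python 3.2 and removed entirely in Python 3.12, which newer Debian releases ship, so gsyncd code still calling it crashes before the config (and hence the statefile name) can be read. A minimal sketch of the rename, using a hypothetical config snippet rather than gsyncd's real file:

```python
import configparser
import io

# Hypothetical snippet standing in for gsyncd's actual config file.
conf_text = "[vars]\nstatus-file = /tmp/example.status\n"

parser = configparser.ConfigParser()
# Old code called parser.readfp(...); read_file() is the drop-in
# replacement, which is exactly what the sed substitutes.
parser.read_file(io.StringIO(conf_text))
print(parser.get("vars", "status-file"))  # prints /tmp/example.status
```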
>
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, 22 November 2025 at 16:34:30 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
>
> here the story about gluster 12dev faulty error:
>
> https://github.com/gluster/glusterfs/issues/4632
>
>
>
>
>
> On Sat, Nov 22, 2025 at 10:59, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
> Hello there
> My testing was with Proxmox 9, which is based on Debian 13.
> I tried Gluster 11.1 from the Debian repo and then version 11.2 from the
> git repo.
> I got the statefile's name issue with both.
> Then I compiled version 12dev and could create the geo-replication session
> successfully, but got Faulty status.
>
> So that's it.
>
> ---
> Gilberto Nunes Ferreira
> +55 (47) 99676-7530
> Proxmox VE
>
> On Sat, Nov 22, 2025 at 10:14, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> Hi Gilberto,
>
> What version of OS and Gluster do you use exactly?
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, 21 November 2025 at 14:08:19 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
>
> Hello there
>
> If there is anything else I can help with, please let me know.
>
> Thanks
>
> Best Regards
>
>
>
>
>
>
> On Wed, Nov 19, 2025 at 15:21, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
> Hi there
>
> So there is no special script.
> First I tried using this:
> https://github.com/aravindavk/gluster-georep-tools, and then I noticed the
> issue.
> But after trying to do it myself, I called for help.
> I tried:
>
> gluster volume geo-replication MASTERVOL root at SLAVENODE::slavevol create push-pem
>
> gluster volume geo-replication MASTERVOL root at SLAVENODE::slavevol start
>
>
> And got the issue.
>
>
> Thanks
>
>
> ---
>
>
> Gilberto Nunes Ferreira
> +55 (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Wed, Nov 19, 2025 at 15:09, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> Hi Gilberto,
>
> I have no idea why my previous message was not sent (sorry about that).
> I suspect it's a bug. If you have some script or ansible playbook for the
> setup, it could help me reproduce it locally.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 11 November 2025 at 16:55:59 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
>
> Any clue about this issue?
>
>
>
>
>
>
> On Mon, Nov 10, 2025 at 14:47, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
> I still don't get it, because with Debian Bookworm and Gluster 10, geo-rep
> works perfectly.
> It's something about Trixie and Gluster 11.x.
>
>
>
>
>
> On Mon, Nov 10, 2025 at 14:45, Karl Kleinpaste <karl at kleinpaste.org> wrote:
>
> On 11/10/25 12:21 PM, Gilberto Ferreira wrote:
>
> And yes. With gluster 11.2 from the github repo, the very same error:
> gluster vol geo VMS gluster3::VMS-REP create push-pem
> Please check gsync config file. Unable to get statefile's name
> geo-replication command failed
>
>
> I had this problem a year ago, Aug 2024
> <https://lists.gluster.org/pipermail/gluster-users/2024-August/040625.html>.
> I went rounds and rounds with Strahil for a week, trying to find why I
> couldn't cross the finish line of successful georep. It always ends in:
>
> Please check gsync config file. Unable to get statefile's name
> geo-replication command failed
>
> The volumes were set up properly, the commands for georep were done
> correctly, per guidelines, but georep was left forever in a state of
> Created, never Active.
>
> Finally I just gave up. I can't use gluster if it won't work with me. I
> found that gluster would not give adequate diagnostics to provide a
> (*useful*!) explanation of what is actually wrong.
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
Gilberto Ferreira
2025-Nov-22 19:20 UTC
[Gluster-users] Failed to Establish Geo-replication Session Please check gsync config file. Unable to get statefile's name
Here is the log about the Faulty status:
[2025-11-22 19:18:56.478297] I [gsyncdstatus(worker
/mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status
Change [{status=History Crawl}]
[2025-11-22 19:18:56.478521] I [primary(worker
/mnt/pve/data1/vms):1572:crawl] _GPrimary: starting history crawl
[{turns=1}, {stime=(1763838427, 0)}, {etime=1763839136},
{entry_stime=(1763838802, 0)}]
[2025-11-22 19:18:57.479278] I [primary(worker
/mnt/pve/data1/vms):1604:crawl] _GPrimary: secondary's time
[{stime=(1763838427, 0)}]
[2025-11-22 19:18:57.922752] I [primary(worker
/mnt/pve/data1/vms):2009:syncjob] Syncer: Sync Time Taken [{job=1},
{num_files=2}, {return_code=12}, {duration=0.0272}]
[2025-11-22 19:18:57.922921] E [syncdutils(worker
/mnt/pve/data1/vms):845:errlog] Popen: command returned error [{cmd=rsync
-aR0 --inplace --files-from=- --super --stats --numeric-ids
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-c9h5okjo/5afb71218219138854b3c5a8eab300a4.sock
-caes128-ctr gluster3:/proc/4004/cwd}, {error=12}]
[2025-11-22 19:18:58.394410] I [monitor(monitor):227:monitor] Monitor:
worker died in startup phase [{brick=/mnt/pve/data1/vms}]
[2025-11-22 19:18:58.406208] I
[gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status
Change [{status=Faulty}]
[2025-11-22 19:19:08.408871] I
[gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status
Change [{status=Initializing...}]
[2025-11-22 19:19:08.408998] I [monitor(monitor):158:monitor] Monitor:
starting gsyncd worker [{brick=/mnt/pve/data1/vms},
{secondary_node=gluster3}]
[2025-11-22 19:19:08.475625] I [resource(worker
/mnt/pve/data1/vms):1388:connect_remote] SSH: Initializing SSH connection
between primary and secondary...
[2025-11-22 19:19:09.719658] I [resource(worker
/mnt/pve/data1/vms):1436:connect_remote] SSH: SSH connection between
primary and secondary established. [{duration=1.2439}]
[2025-11-22 19:19:09.719800] I [resource(worker
/mnt/pve/data1/vms):1117:connect] GLUSTER: Mounting gluster volume
locally...
[2025-11-22 19:19:10.740213] I [resource(worker
/mnt/pve/data1/vms):1139:connect] GLUSTER: Mounted gluster volume
[{duration=1.0203}]
[2025-11-22 19:19:10.740427] I [subcmds(worker
/mnt/pve/data1/vms):84:subcmd_worker] <top>: Worker spawn successful.
Acknowledging back to monitor
[2025-11-22 19:19:12.756579] I [primary(worker
/mnt/pve/data1/vms):1661:register] _GPrimary: Working dir
[{path=/var/lib/misc/gluster/gsyncd/VMS_gluster3_VMS-REP/mnt-pve-data1-vms}]
[2025-11-22 19:19:12.756854] I [resource(worker
/mnt/pve/data1/vms):1292:service_loop] GLUSTER: Register time
[{time=1763839152}]
[2025-11-22 19:19:12.771767] I [gsyncdstatus(worker
/mnt/pve/data1/vms):280:set_active] GeorepStatus: Worker Status Change
[{status=Active}]
[2025-11-22 19:19:12.834163] I [gsyncdstatus(worker
/mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status
Change [{status=History Crawl}]
[2025-11-22 19:19:12.834344] I [primary(worker
/mnt/pve/data1/vms):1572:crawl] _GPrimary: starting history crawl
[{turns=1}, {stime=(1763838427, 0)}, {etime=1763839152},
{entry_stime=(1763838802, 0)}]
[2025-11-22 19:19:13.835162] I [primary(worker
/mnt/pve/data1/vms):1604:crawl] _GPrimary: secondary's time
[{stime=(1763838427, 0)}]
[2025-11-22 19:19:14.270295] I [primary(worker
/mnt/pve/data1/vms):2009:syncjob] Syncer: Sync Time Taken [{job=1},
{num_files=2}, {return_code=12}, {duration=0.0274}]
[2025-11-22 19:19:14.270466] E [syncdutils(worker
/mnt/pve/data1/vms):845:errlog] Popen: command returned error [{cmd=rsync
-aR0 --inplace --files-from=- --super --stats --numeric-ids
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-1retjpoh/5afb71218219138854b3c5a8eab300a4.sock
-caes128-ctr gluster3:/proc/4076/cwd}, {error=12}]
[2025-11-22 19:19:14.741245] I [monitor(monitor):227:monitor] Monitor:
worker died in startup phase [{brick=/mnt/pve/data1/vms}]
[2025-11-22 19:19:14.752452] I
[gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status
Change [{status=Faulty}]
It seems to me that it failed to open something via the SSH session.
I don't know... something like that.
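For what it's worth, rsync exit status 12 means "error in rsync protocol data stream" (per the rsync(1) man page), which usually indicates the remote rsync died or never started properly rather than an authentication failure. A small lookup sketch, with the code table abridged from the man page:

```python
# Abridged table of rsync exit codes, taken from the rsync(1) man page.
RSYNC_EXIT_CODES = {
    0: "Success",
    1: "Syntax or usage error",
    10: "Error in socket I/O",
    11: "Error in file I/O",
    12: "Error in rsync protocol data stream",
    23: "Partial transfer due to error",
}

def explain_rsync_exit(code):
    """Map an rsync exit status to its man-page description."""
    return RSYNC_EXIT_CODES.get(code, "Unknown exit code %d" % code)

print(explain_rsync_exit(12))  # the {error=12} seen in gsyncd.log
```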