Hi,
Yeah, shared storage is needed only when there are more than 2 nodes, to sync
the geo-rep status.
If I have some time, I can try to reproduce it if you could provide the
gluster version, operating system, and volume options.
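For reference, roughly this should cover it (a sketch; 'j' is the volume name
from your earlier mail):
gluster --version          # Gluster package/CLI version
cat /etc/os-release        # operating system
gluster volume info j      # volume layout and bricks
gluster volume get j all   # full list of effective volume options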
Best Regards,
Strahil Nikolov
On Mon, Aug 19, 2024 at 4:45, Karl Kleinpaste <karl at kleinpaste.org> wrote:

On 8/18/24 16:41, Strahil Nikolov wrote:
I don't see anything mentioning shared storage in the docs, and I assume
it's now automatic, but can you check 'gluster volume get all
cluster.enable-shared-storage'? I would give RH's documentation a try;
despite being old, it has some steps (like the shared volume) that might
be needed.
I appreciate the reply. For the first item:
Option                                   Value
------                                   -----
cluster.enable-shared-storage            disable (DEFAULT)
I tried turning this on, but apparently it's inapplicable for my test
configuration of 1-brick volumes.
gluster volume set j cluster.enable-shared-storage on
volume set: failed: Not a valid option for single volume
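The error message makes me think this option is cluster-wide rather than
per-volume, so (going by the upstream docs, untested here) it would be set
against "all":
gluster volume set all cluster.enable-shared-storage enable
which is supposed to create and mount the gluster_shared_storage volume on
the peers.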
I looked through that documentation. It comes down to the same geo-rep create
command, with the same result.
In any event, I re-ran
gluster-georep-sshkey generate
on both nodes, which worked fine. Then, just as before:
gluster volume geo-replication j geoacct@pms::n create ssh-port 6247 push-pem
Please check gsync config file. Unable to get statefile's name
geo-replication command failed
I don't yet see what else I could be doing with this.
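The one other thing I might try is to check whether the template config
gsyncd seems to want is present at all. I'm assuming it lives under
/var/lib/glusterd/geo-replication/ (as gsyncd_template.conf), so something
like:
ls -l /var/lib/glusterd/geo-replication/
rpm -ql glusterfs-geo-replication | grep -i conf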
On 8/22/24 14:08, Strahil Nikolov wrote:
> I can try to reproduce it if you could provide the gluster version,
> operating system and volume options.

Most kind. Fedora 39. Packages:

$ grep gluster /var/log/rpmpkgs
gluster-block-0.5-11.fc39.x86_64.rpm
glusterfs-11.1-1.fc39.x86_64.rpm
glusterfs-cli-11.1-1.fc39.x86_64.rpm
glusterfs-client-xlators-11.1-1.fc39.x86_64.rpm
glusterfs-cloudsync-plugins-11.1-1.fc39.x86_64.rpm
glusterfs-coreutils-0.3.2-1.fc39.x86_64.rpm
glusterfs-events-11.1-1.fc39.x86_64.rpm
glusterfs-extra-xlators-11.1-1.fc39.x86_64.rpm
glusterfs-fuse-11.1-1.fc39.x86_64.rpm
glusterfs-geo-replication-11.1-1.fc39.x86_64.rpm
glusterfs-resource-agents-11.1-1.fc39.noarch.rpm
glusterfs-server-11.1-1.fc39.x86_64.rpm
glusterfs-thin-arbiter-11.1-1.fc39.x86_64.rpm
libglusterfs0-11.1-1.fc39.x86_64.rpm
libvirt-daemon-driver-storage-gluster-9.7.0-4.fc39.x86_64.rpm
python3-gluster-11.1-1.fc39.x86_64.rpm
qemu-block-gluster-8.1.3-5.fc39.x86_64.rpm

(Somewhere along the way, I'm sure I just did "dnf install *gluster*".)

The two volumes were created using the quick start guide:
https://docs.gluster.org/en/main/Quick-Start-Guide/Quickstart/
which means that, after establishing peering, I used these simple commands:

(on pjs) gluster volume create j pjs:/xx/brick
(on pms) gluster volume create n pms:/xx/brick

where /xx on these 2 systems are small, spare, otherwise empty, identical
filesystems of about 40G, formatted ext4. No other options were used in
creation.

As I said in my initial note, it seems that the underlying problem (from
logged complaints) is the lack of a geo-rep template configuration from which
to set up, and I simply don't know where/how/when that should have been
created. But this is just a surmise on my part.
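If that surmise is right, I suppose one way to test it would be to verify that
the geo-replication package actually put its files in place, and to look for
the complaint in glusterd's log (I'm guessing at the log location here):
rpm -V glusterfs-geo-replication
grep -i template /var/log/glusterfs/glusterd.log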