Displaying 20 results from an estimated 500 matches similar to: "geo-rep will not initialize"
2024 Aug 18
1
geo-rep will not initialize
Hi Karl,
I don't see anything mentioning shared storage in the docs and I assume it's now automatic, but can you check 'gluster volume get all cluster.enable-shared-storage'?
I would give it a try with RH's documentation; although it's old, it has some steps (like the shared volume) that might be needed:
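Separately from that document, checking and (if needed) enabling the shared-storage volume takes only a couple of commands on any node of the trusted pool; a minimal sketch, assuming a reasonably current glusterfs release:
# Check whether the shared-storage meta-volume is enabled for this pool
gluster volume get all cluster.enable-shared-storage
# If it reports 'disable', this creates the gluster_shared_storage volume and mounts it on all nodes
gluster volume set all cluster.enable-shared-storage enable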
2024 Aug 30
1
geo-rep will not initialize
On 8/30/24 04:17, Strahil Nikolov wrote:
> Have you done the following setup on the receiving gluster volume:
Yes. For completeness' sake:
grep geoacct /etc/passwd /etc/group
/etc/passwd:geoacct:x:5273:5273:gluster geo-replication:/var/lib/glusterd/geoacct:/bin/bash
/etc/group:geoacct:x:5273:
gluster-mountbroker status
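For context, a typical unprivileged (mountbroker) setup on the secondary side looks roughly like the sketch below; 'secvol' is a hypothetical secondary volume name and geoacct is the account shown above:
# Register the mountbroker root and the unprivileged group on the secondary nodes
gluster-mountbroker setup /var/mountbroker-root geoacct
# Allow the unprivileged user to mount the secondary volume
gluster-mountbroker add secvol geoacct
# Verify, then restart glusterd on the secondary nodes for the change to take effect
gluster-mountbroker status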
2024 Aug 19
1
geo-rep will not initialize
On 8/18/24 16:41, Strahil Nikolov wrote:
> I don't see anything mentioning shared storage in the docs and I
> assume it's now automatic, but can you check 'gluster volume get all
> cluster.enable-shared-storage'?
> I would give it a try with RH's documentation; although it's old, it
> has some steps (like the shared volume) that might be needed
I appreciate the
2024 Aug 22
1
geo-rep will not initialize
Hi,
Yeah, shared storage is needed only when there are more than 2 nodes, to sync the geo-rep status.
If I have some time, I can try to reproduce it if you could provide the gluster version, operating system and volume options.
Best Regards,
Strahil Nikolov
On Mon, Aug 19, 2024 at 4:45, Karl Kleinpaste <karl at kleinpaste.org> wrote: On 8/18/24 16:41, Strahil Nikolov wrote:
I don't see
2024 Sep 01
1
geo-rep will not initialize
FYI, I will be traveling for the next week, and may not see email much
until then.
Your questions...
On 8/31/24 04:59, Strahil Nikolov wrote:
> One silly question: Did you try adding some files on the source volume
> after the georep was created ?
Yes. I wondered that, too, whether geo-rep would not start simply
because there was nothing to do. But yes, there are a few files created
2008 Dec 09
1
Run rsync through intermediary server with SSH
I'm using rsync, ssh, and cron glued together with Python as a
push-based synchronization system. From a single location, I push
content out to various offices. I log stdout/stderr on the master
server to make sure everything is running smoothly.
I would now like for some of our "regional hubs" to take on some of the
load (bandwidth-wise), while still retaining my centralized
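One way to relay such a push through an intermediate hub with current OpenSSH (7.3 or later) is the ProxyJump option; 'hub.example.com' and 'office1.example.com' are hypothetical hosts:
# Route the rsync-over-ssh session via the regional hub (data is relayed through it, not stored there)
rsync -avz -e "ssh -J user@hub.example.com" /srv/content/ user@office1.example.com:/srv/content/
Older OpenSSH can achieve the same with a ProxyCommand stanza in ~/.ssh/config for the office hosts.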
2017 Jul 29
1
Not possible to stop geo-rep after adding arbiter to replica 2
I managed to force-stop geo-replication using the "force" parameter after the "stop", but there are still other issues related to the fact that my geo-replication setup was created before I added the additional arbiter node to my replica.
For example, when I would like to stop my volume I simply can't, and I get the following error:
volume stop: myvolume: failed: Staging
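For reference, the forced stop mentioned above generally takes this form ('myvolume', 'secondaryhost' and 'secvol' are placeholders):
# Force-stop a geo-replication session that refuses to stop cleanly
gluster volume geo-replication myvolume secondaryhost::secvol stop force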
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version, 4.1.1, on both master and slave.
All glusterd daemons are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the
2024 Aug 22
1
geo-rep will not initialize
On 8/22/24 14:08, Strahil Nikolov wrote:
> I can try to reproduce it if you could provide the gluster version,
> operating system and volume options.
Most kind.
Fedora 39. Packages:
$ grep gluster /var/log/rpmpkgs
gluster-block-0.5-11.fc39.x86_64.rpm
glusterfs-11.1-1.fc39.x86_64.rpm
glusterfs-cli-11.1-1.fc39.x86_64.rpm
glusterfs-client-xlators-11.1-1.fc39.x86_64.rpm
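A convenient way to capture the volume options being asked for, assuming the affected volume is called 'myvol' (hypothetical name):
# Volume layout plus any explicitly reconfigured options
gluster volume info myvol
# Full effective option set, including defaults
gluster volume get myvol all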
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in extras/geo-rep that were provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
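For anyone following along, a file's GFID can be read either through the mount or directly on a brick; the paths below are hypothetical:
# Via a FUSE mount of the volume (virtual xattr exposed by glusterfs)
getfattr -n glusterfs.gfid.string /mnt/myvol/path/to/file
# Directly on a brick, as the hex-encoded trusted.gfid xattr (run as root)
getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file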
2018 Jan 22
1
geo-replication initial setup with existing data
2007 Jul 12
0
[LLVMdev] Atomic Operation and Synchronization Proposal v2
On Thursday 12 July 2007 13:08, Chandler Carruth wrote:
> > > Right. For example, the Cray X1 has a much richer set of memory
> > > ordering instructions than anything on the commodity micros:
> > >
> > > http://tinyurl.com/3agjjn
> > >
> > > The memory ordering intrinsics in the current llvm proposal can't take
> > > advantage
2007 Jul 12
2
[LLVMdev] Atomic Operation and Synchronization Proposal v2
On 7/12/07, Dan Gohman <djg at cray.com> wrote:
> On Thu, Jul 12, 2007 at 10:06:04AM -0500, David Greene wrote:
> > On Thursday 12 July 2007 07:23, Torvald Riegel wrote:
> >
> > > > The single instruction constraints can, at their most flexible, constrain
> > > > any set of possible pairings of loads from memory and stores to memory
> > >
>
2017 Jun 23
2
seeding my georeplication
I have a ~600tb distributed gluster volume that I want to start using geo
replication on.
The current volume is on six 100tb bricks on 2 servers.
My plan is:
1) copy each of the bricks to a new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start georeplication (which should be
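Step 5 of the plan above usually comes down to something like the following sketch, with 'myvol', 'secondaryhost' and 'secvol' as placeholders:
# Create the session, pushing SSH keys from the primary to the secondary nodes
gluster volume geo-replication myvol secondaryhost::secvol create push-pem
# Start it and keep an eye on the session state
gluster volume geo-replication myvol secondaryhost::secvol start
gluster volume geo-replication myvol secondaryhost::secvol status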
2017 Jun 10
1
AD Azure Connect
Hi everyone,
I'd like to connect Azure AD to my samba4 installation via AD Azure Sync
(with password passthrough authentication).
Passthrough authentication and user synchronization are working with a Windows
2016 domain member server without any problem.
But group membership synchronization always fails because AD Azure Sync
cannot parse the member attribute (reference-value-not-ldap-conformant).
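One way to see what Samba is actually returning for a group's member attribute (and what the sync tool may be choking on) is ldbsearch against the local sam.ldb on the DC; 'MyGroup' and the database path are assumptions for a default install:
# Dump the member attribute of one group on the Samba AD DC (run as root)
ldbsearch -H /var/lib/samba/private/sam.ldb '(&(objectClass=group)(cn=MyGroup))' member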
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume; the brick daemon is present but does
not appear to be functioning.
Peculiar behaviour: if I kill the glusterfs brick daemon and restart glusterd,
then the brick becomes available - but one of my other volumes' bricks on the
same server goes down in the same way. It's like whack-a-mole.
Any ideas?
[root at gluster-2 bricks]# glv status digitalcorpora
> Status
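A common way out of that state, without bouncing glusterd and risking the other bricks, is to ask glusterd to respawn only the brick processes that are down:
# Restarts just the missing brick processes; running bricks are left alone
gluster volume start digitalcorpora force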
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume; the brick daemon is present but
> does not appear to be functioning.
>
> Peculiar behaviour: if I kill the glusterfs brick daemon and restart
> glusterd, then the brick becomes available - but one of my other
> volumes' bricks on the same server goes down in
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained.
Ah, this was what I suspected. Understood. I'll be happy with "shard".
Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of which also acts as a client.
I looked into logs. I paste lengthy logs below with
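For reference, moving from stripe to shard on a volume is just a couple of volume options; 'myvolume' is a placeholder and the block size shown is the default:
# Enable sharding (applies to files created after the option is set)
gluster volume set myvolume features.shard on
# Shard size; 64MB is the default, larger values are common for VM images
gluster volume set myvolume features.shard-block-size 64MB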
2011 Jul 02
2
Associating a statefile with an image
Hi,
I am trying to perform offline migration, i.e., create an incremental image using the qcow format, transfer the VM memory state to a state file, and use the image and statefile together as a template. Now create a new VM using the template. I can successfully do it using the following commands:
Save phase:
stop
migrate "exec:gzip -c > STATEFILE.gz"
qemu-img
qemu-img create -b
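The restore side of that exec trick is symmetric; a rough sketch, with image and state file names made up for illustration:
# New overlay image backed by the template
qemu-img create -f qcow2 -b template.qcow2 -F qcow2 newvm.qcow2
# Boot the new VM and feed it the saved memory state (machine/memory config must match the saved guest)
qemu-system-x86_64 -hda newvm.qcow2 -m 1024 -incoming "exec: gzip -c -d STATEFILE.gz"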
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi,
I've set up Gluster Geo Replication according to the manual:
# sudo gluster volume geo-replication flvol
ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave
config log-level DEBUG
# sudo gluster volume geo-replication flvol
ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start
# sudo gluster volume geo-replication flvol
ssh://root at
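When a session sits in Faulty, the usual next steps are the status output and the gsyncd log on the primary; the command below reuses the volume and secondary URL from this thread, and the log path assumes a default install:
# Session state as glusterd sees it
gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave status
# The detailed reason for Faulty ends up in the gsyncd log on the primary
less /var/log/glusterfs/geo-replication/flvol*/*.log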