similar to: Geo-replication status is getting Faulty after few seconds

Displaying 20 results from an estimated 400 matches similar to: "Geo-replication status is getting Faulty after few seconds"

2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All, I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
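A minimal way to verify a session like this after starting it (volume and slave names taken from the message; the log path is an assumption and varies by version):

    # Check the session state and worker details
    gluster volume geo-replication tier1data drtier1data::drtier1data status detail
    # The gsyncd log on the master usually explains why a worker turns Faulty
    tail -f /var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/gsyncd.log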
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root. Best Regards, Strahil Nikolov On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi All, I have run the following commands on master3,
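A rough sketch of that check, assuming root is the geo-replication user and drtier1data is the slave host (both from the thread); it must work from every master without a password prompt:

    # Run from each master node
    ssh root@drtier1data uptime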
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
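For example, a hedged check using that key (slave host assumed from the thread):

    # Geo-replication uses its own key pair, so test with it explicitly
    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@drtier1data uptime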
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner. Is there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service
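One common sequence is to stop glusterd and then run the helper script that ships with the server package; the script path below is an assumption (it is where the glusterfs-server package installs it on most distributions):

    systemctl stop glusterd
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
    # Confirm that no brick, self-heal, or fuse processes are left
    pgrep -a gluster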
2014 Apr 10
1
replication + attachment sis + zlib bug ? (HEAD version from xi.rename-it.nl)
Hi, I have a setup with mail_attachment single instance store + replication + zlib and got this bug when I try to replicate one test mailbox: On master1 in mail.log: Apr 10 13:25:22 master1 dovecot: dsync-local(zzz at blabla666.sk): Error: read(/nfsmnt/mailnfs1/attachments1/6b/57/6b57ad34cf6c414662233d833a7801fde4e1cdcb-92b5052558774653a728000013e2b982[base64:18 b/l]) failed: Stream is larger than
2012 Dec 17
1
multiple puppet masters
Hi, I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows:
[main]
...
ca = false
ca_server = puppet-master1.test.net
I am using passenger so I am a bit confused how the
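For comparison, a hypothetical agent-side puppet.conf fragment for this layout would point certificate traffic at the CA master while using the new master for catalogs (the second master's hostname here is an assumption):

    [agent]
        server = puppet-master2.test.net
        ca_server = puppet-master1.test.net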
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hello Everyone, We have a replicated Gluster volume with three nodes, and we face a strange issue whenever we need to restart one of the nodes in this cluster. As per my understanding, if we shut down one node, the Gluster mount should smoothly connect to another remaining Gluster server and shouldn't create any issues. In our setup, when we stop Gluster on any of the nodes, we mostly get
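If the clients use the FUSE mount, one thing worth checking is whether backup volfile servers are listed, so the mount can fetch its volfile from a surviving node; a sketch with hypothetical host and volume names:

    mount -t glusterfs -o backup-volfile-servers=master2:master3 master1:/myvol /mnt/myvol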
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hi, normally, when we shut down or reboot one of the (server) nodes, we call the "stop-all-gluster-processes.sh" script. But I think you did that, right? Best regards, Hubert On Mon, 5 Feb 2024 at 13:35, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: > Hello Everyone, > > We have a replicated Gluster volume with three nodes, and we face a >
2012 May 25
3
Is it possible to set up multi-level puppet nodes?
Hi, I am new to puppet, and I just wonder whether it is possible to create multiple levels of puppet masters. Can puppet work this way?
First-level (master): root-master
Second-level (masters): master1, master2
Third-level nodes (as agents): agent1, agent2, agent3, agent4
All master nodes in the second-level are agents of root-master, and each of third-level
2010 Feb 25
2
decentral vpn with 1 gateway host
Hello tinc users, I have the following configuration: 1 client/server called master, which is always reachable from the internet (with dyndns), and 5 clients that connect to the master and to the other clients (all behind a router (NAT)).
master-hosts-file: Address = ... Port = ... Subnet = ... Compression = 0 ---- key -----
client-hosts-files: Compression = 0 Subnet = ... ----- key -----
tinc.conf Name = ....
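As a minimal sketch (network and node names hypothetical): each NATed client only needs a ConnectTo for the always-reachable master, and tinc meshes the clients with each other afterwards.

    # /etc/tinc/mynet/tinc.conf on one of the clients
    Name = client1
    ConnectTo = master
    # /etc/tinc/mynet/hosts/master holds the master's Address, Port, Subnet and public key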
2009 Apr 01
1
Help with mixed-effects model with temporal pseudoreplication!
Sorry if this is the wrong ml for this question, I am new to R. I am trying to use R to analyze the data from my thesis experiment and I am having trouble accounting properly for the pseudoreplication that comes from having each participant repeat each treatment combination (combination of fixed factors) 5 times. The design of the experiment is as follows... Responses: CompletionTIme VisitedTargets
2010 Feb 18
3
NFS client firewall config?
Hi all, Which ports do I need to have open on an NFS client's firewall to allow it to connect to a remote NFS server? When I disable iptables (using ConfigServerFirewall), it connects fine, but as soon as I enable it, NFS gives me this error: root at saturn:[~]$ mount master1.mydomain.co.za:/saturn /bck mount: mount to NFS server 'master1.mydomain.co.za' failed: RPC Error: Unable to
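As a rough sketch (server name from the post): check which ports the server's RPC services actually use, then allow at least rpcbind and nfs from the client; with NFSv3, mountd/statd/lockd use dynamic ports unless pinned on the server.

    rpcinfo -p master1.mydomain.co.za
    # Allow the fixed ports; repeat for any pinned mountd/statd/lockd ports
    iptables -A OUTPUT -p tcp -d master1.mydomain.co.za --dport 111 -j ACCEPT
    iptables -A OUTPUT -p udp -d master1.mydomain.co.za --dport 111 -j ACCEPT
    iptables -A OUTPUT -p tcp -d master1.mydomain.co.za --dport 2049 -j ACCEPT
    iptables -A OUTPUT -p udp -d master1.mydomain.co.za --dport 2049 -j ACCEPT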
2017 Nov 13
0
Help with reconnecting a faulty brick
On 13/11/2017 at 10:04, Daniel Berteaud wrote: > > Could I just remove the content of the brick (including the .glusterfs > directory) and reconnect ? > In fact, what would be the difference between reconnecting the brick with a wiped FS, and using gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore gluster volume add-brick myvol replica 2
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote: > > On 13/11/2017 at 10:04, Daniel Berteaud wrote: >> >> Could I just remove the content of the brick (including the >> .glusterfs directory) and reconnect ? >> > > In fact, what would be the difference between reconnecting the brick > with a wiped FS, and using > > gluster volume remove-brick vmstore
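Whichever way the brick is brought back, a heal is what actually repopulates it; roughly (volume name from the thread):

    gluster volume heal vmstore full
    gluster volume heal vmstore info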
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, We are mounting this external Gluster volume (dc.local:/docker_config) for docker configuration on one of the Gluster servers. When I ran the stop-all-gluster-processes.sh script, I wanted to stop all gluster server-related processes on the server, but not to unmount the external gluster volume mounted on the server. However, running stop-all-gluster-processes.sh unmounted the
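Until the script can skip client mounts, a sketch of remounting just the external volume afterwards (the mount point is an assumption):

    mount -t glusterfs dc.local:/docker_config /opt/docker_config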
2015 Aug 06
3
question on auth cache parameters
Hi Timo, I checked out the commit causing this. It's this one: http://hg.dovecot.org/dovecot-2.2/diff/5e445c659f89/src/auth/auth-request.c#l1.32 If I move this block back as it was, everything is fine.
diff -r a46620d6e0ff -r 5e445c659f89 src/auth/auth-request.c
--- a/src/auth/auth-request.c Tue May 05 13:35:52 2015 +0300
+++ b/src/auth/auth-request.c Tue May 05 14:16:31 2015 +0300
@@ -618,30
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
No. If the script is used to update the GlusterFS packages in the node, then we need to stop the client processes as well (Fuse client is `glusterfs` process. `ps ax | grep glusterfs`). The default behaviour can't be changed, but the script can be enhanced by adding a new option `--skip-clients` so that it can skip stopping the client processes. -- Aravinda Kadalu Technologies
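Until such an option exists, a rough way to see which processes are clients versus bricks before running the script (the bracketed grep pattern just avoids matching grep itself):

    ps ax | grep '[g]lusterfs '    # fuse client mounts
    ps ax | grep '[g]lusterfsd'    # brick processes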
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server process while retaining the fuse mounts. ________________________________ From: Aravinda <aravinda at kadalu.tech> Sent: 16 February 2024 12:36 PM To: Anant Saraswat
2002 Nov 11
11
Speed problem
Mermgfurt ! I have a problem with syncing two machines which are connected over a Gigabit connection. I'm trying to use rsync with ssh because of the authorisation mechanisms (keys). It starts quite OK at 18 MB/s (this modest speed may have something to do with our internal net) and then drops to 400 KB/s (!!!). This happens over a long period because those files I want to copy are very
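A hedged example of the kind of invocation being described, with hypothetical paths and host; --stats helps separate per-file overhead from raw link speed, and on a fast internal network compression (rsync -z or ssh -C) often slows things down further:

    rsync -a --stats --progress -e ssh /data/ backup@master2:/data/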
2015 Jun 22
2
Duplicate mails with pop3 + dsync replication
It turns out that if I enable this option: pop3_deleted_flag = "$POP3Deleted" the issue no longer persists. I have to manually expunge the kept mails that have been deleted via pop3 though: doveadm expunge mailbox INBOX KEYWORD '$POP3Deleted' -A Wolfgang > On 21 Jun 2015, at 21:05, Wolfgang Hennerbichler <wogri at wogri.com> wrote: > > FWIW I just tried the sdbox