Displaying 20 results from an estimated 100 matches similar to: "Graceful shutdown doesn't stop all Gluster processes"
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem
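For example, a quick hedged check from each master (assuming root-based geo-replication, the default key path above, and the slave hostname drtier1data used elsewhere in this thread; the key is usually command-restricted to gsyncd on the slave, so a connection that does not fail with "Permission denied" is the point here):

ssh -i /var/lib/glusterd/geo-replication/secret.pem root@drtier1data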
Best Regards,
Strahil Nikolov
On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Hi Anant,
I would first check whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All,
I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
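Once the session is started, its state can be confirmed with the status command, reusing the same volume and slave names as above:

gluster volume geo-replication tier1data drtier1data::drtier1data status detail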
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant,
I would first check whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root.
Best Regards,
Strahil Nikolov
On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi All,
I have run the following commands on master3,
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
2014 Apr 10
1
replication + attachment sis + zlib bug ? (HEAD version from xi.rename-it.nl)
Hi,
I have a setup with mail_attachment single-instance store + replication +
zlib, and I hit this bug when I try to replicate one test mailbox:
On master1 in mail.log:
Apr 10 13:25:22 master1 dovecot:
dsync-local(zzz at blabla666.sk): Error:
read(/nfsmnt/mailnfs1/attachments1/6b/57/6b57ad34cf6c414662233d833a7801fde4e1cdcb-92b5052558774653a728000013e2b982[base64:18
b/l]) failed: Stream is larger than
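For context, SIS attachments plus zlib in Dovecot 2.2 are typically configured along these lines. This is a hedged sketch, not the poster's actual configuration; the attachment path is taken from the log line above and the remaining values are assumptions:

mail_plugins = $mail_plugins zlib
mail_attachment_dir = /nfsmnt/mailnfs1/attachments1
mail_attachment_fs = sis posix
plugin {
  zlib_save = gz
  zlib_save_level = 6
}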
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone,
We mount this external Gluster volume (dc.local:/docker_config) for docker configuration on one of the Gluster servers. When I ran the stop-all-gluster-processes.sh script, my intention was to stop all gluster server-related processes on the server, but not to unmount the external gluster volume mounted on it. However, running stop-all-gluster-processes.sh unmounted the
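A hedged way to see in advance what the script would touch on such a host (the fuse mount shows up as an ordinary glusterfs client process, and glusterfs fuse mounts appear as type fuse.glusterfs):

ps ax | grep -E 'glusterd|glusterfsd|glusterfs' | grep -v grep
grep fuse.glusterfs /proc/mounts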
2012 Dec 17
1
multiple puppet masters
Hi,
I would like to set up an additional puppet master but have the CA server
handled by only 1 puppet master. I have set this up as per the
documentation here:
http://docs.puppetlabs.com/guides/scaling_multiple_masters.html
I have configured my second puppet master as follows:
[main]
...
ca = false
ca_server = puppet-master1.test.net
I am using passenger so I am a bit confused how the
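On the agent side, the matching piece is pointing certificate traffic at the CA master; a hedged sketch, where the second master's host name (puppet-master2.test.net) is hypothetical and only puppet-master1.test.net is taken from the example above:

[agent]
server = puppet-master2.test.net
ca_server = puppet-master1.test.net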
2017 Aug 10
2
Keys used to sign releases
I see that some, but not all, releases provide a local link to the key used
to generate the signature files, which makes it difficult for a script to
use them to verify the signatures.
Gcc solves this problem by including the following on their mirrors page (
https://gcc.gnu.org/mirrors.html):
The archives there will be signed by one of the following GnuPG keys:
- 1024D/745C015A 1999-11-09
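For reference, the usual verification flow once a signing key is published looks roughly like this (the key ID is the one quoted from the GCC page above; the file names are placeholders):

gpg --recv-keys 745C015A
gpg --verify <release-tarball>.sig <release-tarball>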
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server process while retaining the fuse mounts.
________________________________
From: Aravinda <aravinda at kadalu.tech>
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
No. If the script is used to update the GlusterFS packages on the node, then we need to stop the client processes as well (the Fuse client is the `glusterfs` process; see `ps ax | grep glusterfs`).
The default behaviour can't be changed, but the script can be enhanced by adding a new option `--skip-clients` so that it can skip stopping the client processes.
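As a rough illustration only, not the actual stop-all-gluster-processes.sh (whose internals are more involved), such an option could gate the client kill like this:

SKIP_CLIENTS=no
[ "$1" = "--skip-clients" ] && SKIP_CLIENTS=yes

pkill -TERM -x glusterd      # management daemon
pkill -TERM -x glusterfsd    # brick processes

if [ "$SKIP_CLIENTS" = "no" ]; then
    pkill -TERM -x glusterfs    # fuse client processes
fi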
--
Aravinda
Kadalu Technologies
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant,
Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse on the host and then use a bind mount to provide access to the container?
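A hedged sketch of that layout, reusing the volume name from earlier in this thread (the mount point, container name, and image are made up for illustration):

mount -t glusterfs dc.local:/docker_config /mnt/docker_config
docker run -d --name app -v /mnt/docker_config:/config myimage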
Best Regards,
Strahil Nikolov
On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
2012 May 04
2
btrfs scrub BUG: unable to handle kernel NULL pointer dereference
I think I have some failing hard drives; they are disconnected for now.
stan {~} root# btrfs filesystem show
Label: none uuid: d71404d4-468e-47d5-8f06-3b65fa7776aa
Total devices 2 FS bytes used 6.27GB
devid 1 size 9.31GB used 8.16GB path /dev/sde6
*** Some devices missing
Label: none uuid: b142f575-df1c-4a57-8846-a43b979e2e09
Total devices 8 FS bytes used
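For reference, the scrub that triggers the oops is normally started and monitored like this (hedged; the mount point is a placeholder):

btrfs scrub start /mnt/btrfs
btrfs scrub status /mnt/btrfs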
2007 Jun 30
2
import data
Hello!
I wonder if you might help me with information about how to import data in R version 2.4.1 without the "Import data" menu.
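For what it's worth, the menu-free route is read.table()/read.csv(); a minimal hedged sketch in R, where the file name, header, and separator are assumptions about the data:

mydata <- read.csv("mydata.csv", header = TRUE)   # or read.table("mydata.txt", sep = "\t", header = TRUE)
str(mydata)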
Best regards.
Eric Duplex ZOUKEKANG
Animal Science Engineer (Ingénieur Zootechnicien)
Montpellier SupAgro
Master2 AAA-PARC
tel : +33(0)661432340
zoukekan@supagro.inra.fr
2024 Feb 09
1
Graceful shutdown doesn't stop all Gluster processes
I think the service that shuts down the bricks on EL systems is something like this - right now I don't have access to my systems to check, but you can extract the rpms and see it:
https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4
Best Regards,
Strahil Nikolov
On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts <ronny.adsetts at amazinginternet.com> wrote: ________
Community Meeting
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
Yes, we mount the fuse volume on the physical host and then use a bind mount to provide access to the container.
The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via bind mounts.
Thanks,
Anant
________________________________
2015 Aug 06
3
question on auth cache parameters
hi timo,
I checked out the commit causing this.
It's this one:
http://hg.dovecot.org/dovecot-2.2/diff/5e445c659f89/src/auth/auth-request.c#l1.32
If I move this block back to how it was, everything is fine:
diff -r a46620d6e0ff -r 5e445c659f89 src/auth/auth-request.c
--- a/src/auth/auth-request.c Tue May 05 13:35:52 2015 +0300
+++ b/src/auth/auth-request.c Tue May 05 14:16:31 2015 +0300
@@ -618,30
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well,
you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks?
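A hedged sketch of that ordering with a systemd drop-in; the container unit name (docker-app.service) and mount unit name are hypothetical and depend on how the container and the fuse mount are actually managed:

# /etc/systemd/system/docker-app.service.d/gluster-order.conf
[Unit]
Requires=mnt-docker_config.mount
After=glusterd.service mnt-docker_config.mount
# with this ordering, systemd stops the container before the mount and glusterd at shutdown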
Best Regards,
Strahil Nikolov
On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hello Everyone,
We have a replicated Gluster volume with three nodes, and we face a strange issue whenever we need to restart one of the nodes in this cluster.
As per my understanding, if we shut down one node, the Gluster mount should smoothly connect to another remaining Gluster server and shouldn't create any issues.
In our setup, when we stop Gluster on any of the nodes, we mostly get
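Two hedged knobs that are commonly looked at for this kind of symptom, not a confirmed fix for this setup (the volume, host names, mount point, and 10-second value are illustrative, borrowed in part from the other threads above):

gluster volume set tier1data network.ping-timeout 10    # how long clients wait on an unresponsive brick (default 42s)
mount -t glusterfs -o backup-volfile-servers=master2:master3 master1:/tier1data /mnt/tier1data    # volfile fetch can fail over at mount time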
2010 Feb 25
2
decentralized vpn with 1 gateway host
Hello tinc users,
I have the following configuration:
1 client/server called master; it is always reachable from the internet
(with dyndns)
5 clients that connect to the master and to the other clients (all behind
a NAT router)
master-hosts-file:
Address = ...
Port = ...
Subnet = ...
Compression = 0
---- key -----
client-hosts-files:
Compression = 0
Subnet = ...
----- key -----
tinc.conf
Name = ....
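For that layout, the piece that usually matters most is the ConnectTo line in each client's tinc.conf; a hedged sketch with placeholder names, where "master" refers to the host file shown above:

Name = client1
ConnectTo = master
Port = 655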
2015 Feb 26
2
C7, igb and DCB support for pause frame ?
Hi there,
I'm working on deploying our new cluster.
Masters have 5 x 1Gbps (i210 and i350, thus using igb.ko), configured
with mtu 9000, 802.3ad. Works fine *but* I can't get DCB working
(pause frame, aka flow control, which is supported by and enabled on
our switches).
[root at master2 ~]# dcbtool gc eno1 dcb
Command: Get Config
Feature: DCB State
Port: eno1
Status:
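Independent of dcbtool, plain 802.3x pause settings on igb can also be inspected and toggled with ethtool; a hedged example for the same interface:

ethtool -a eno1                 # show current autoneg/rx/tx pause state
ethtool -A eno1 rx on tx on     # attempt to enable rx/tx pause frames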