Displaying 20 results from an estimated 900 matches similar to: "Challenges with Replicated Gluster volume after stopping Gluster on any node."
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hi,
Normally, when we shut down or reboot one of the (server) nodes, we call
the "stop-all-gluster-processes.sh" script. But I think you did that, right?
Best regards,
Hubert
On Mon, 5 Feb 2024 at 13:35, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
> Hello Everyone,
>
> We have a replicated Gluster volume with three nodes, and we face a
>
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well,
you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks?
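For illustration, one way to express that ordering with a systemd drop-in (the service and mount unit names here are hypothetical placeholders):

  # /etc/systemd/system/docker-app.service.d/order.conf
  [Unit]
  # systemd stops units in reverse dependency order on shutdown, so the
  # container goes down before the fuse mount and glusterd do.
  After=glusterd.service mnt-docker_config.mount
  Requires=mnt-docker_config.mount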
Best Regards,
Strahil Nikolov
On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
Yes, we mount the fuse on the physical host and then use a bind mount to provide access to the container.
The same physical host also runs the Gluster server. Therefore, when we stop Gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via the bind mount.
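For context, the layout is roughly like this (mount points here are illustrative, not our real paths):

  mount -t glusterfs dc.local:/docker_config /mnt/docker_config  # fuse mount on the host
  mount --bind /mnt/docker_config /srv/container/config          # what the container sees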
Thanks,
Anant
________________________________
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant,
Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse and then use a bind mount to provide access to the container?
Best Regards,
Strahil Nikolov
On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the Gluster server processes while retaining the fuse mounts.
________________________________
From: Aravinda <aravinda at kadalu.tech>
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
2014 Jan 02
1
How to remove Dovecot (LMTP) information from Email header
Hello All,
I want to remove Dovecot (LMTP) information from Email Header, Please
help me. I am using Dovecot 2.0.9 with Exim.
Received: from XX.XXblue.co.uk
by XX.XXblue.co.uk (Dovecot) with LMTP id XIuTJkJFxVLKTwAAG2fxGQ
for <anant.saraswat at techblue.co.uk>; Thu, 02 Jan 2014 10:59:28 +0000
Received: from [210.7.64.2] (helo=[192.168.100.71])
by solo.techblue.co.uk with esmtp (Exim
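Note for anyone finding this later: newer Dovecot releases (well after 2.0.9) added a setting to suppress that header; check your version's documentation before relying on it:

  lmtp_add_received_header = no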
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem
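Something like this, with the slave host name taken from the session below (adjust to your setup):

  ssh -i /var/lib/glusterd/geo-replication/secret.pem root@drtier1data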
Best Regards,
Strahil Nikolov
On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Hi Anant,
I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant,
I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root.
Best Regards,
Strahil Nikolov
On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi All,
I have run the following commands on master3,
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
No. If the script is used to update the GlusterFS packages on the node, then we need to stop the client processes as well (the fuse client is the `glusterfs` process; see `ps ax | grep glusterfs`).
The default behaviour can't be changed, but the script can be enhanced by adding a new option `--skip-clients` so that it can skip stopping the client processes.
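For reference, the two kinds of processes can be told apart like this (glusterfsd is the brick-side daemon):

  pgrep -xa glusterfsd   # brick (server-side) daemons
  pgrep -xa glusterfs    # fuse clients and other client-side processes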
--
Aravinda
Kadalu Technologies
2024 Feb 26
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
In our setup, the Gluster brick comes from an iSCSI SAN storage and is then used as a brick on the Gluster server. To extend the brick, we stop the Gluster server, extend the logical volume (LV) on the SAN server, resize it on the host, mount the brick with the extended size, and finally start the Gluster server.
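As commands, a rough sketch of the host-side half (the LUN itself is grown on the SAN first; the brick path is from our volume, everything else is a placeholder):

  /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh  # stop gluster on the node
  iscsiadm -m node -R                    # rescan so the host sees the new size
  xfs_growfs /opt/tier1data2019/brick    # or resize2fs for ext4
  systemctl start glusterd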
Please let me know if this process can be optimized, I will be happy to
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All,
I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
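The matching status command to verify the session after these steps:

  gluster volume geo-replication tier1data drtier1data::drtier1data status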
2014 Mar 17
2
Want to create custom iso
Hello All,
I want to make a custom ISO of CentOS 6.4 with some features in it by default:
install VLC, FortiClient, TeamViewer, Google Chrome, and Google Drive, and set a wallpaper for my new OS.
I also have to install it on a computer without an internet connection, so I have to include the installers for the given software in the ISO itself.
How can I achieve my goal? Any help is appreciated.
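A common approach is a kickstart file baked into the ISO; a minimal sketch (package names are illustrative, and third-party packages such as Chrome would have to sit in a local repo on the ISO):

  %packages
  vlc
  google-chrome-stable
  %end

  %post
  # set a default wallpaper on GNOME 2 (CentOS 6) via gconf
  gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
    --type string --set /desktop/gnome/background/picture_filename \
    /usr/share/backgrounds/custom.jpg
  %end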
2014 Jan 27
1
I am unable to find my windows share in ~/.gvfs
Hello All,
I am facing a strange issue. I use Eclipse, and I was using Ubuntu
earlier; now I am trying CentOS. On Ubuntu I can find my
share in ~/.gvfs,
but now I am unable to locate the same on CentOS (release 6.5 Final). I
have also checked /var/run/, and I don't have /run on my system.
So can someone please help me get it on CentOS?
Actually, I can access shares by using
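On EL6 the ~/.gvfs fuse view needs the gvfs-fuse package and its daemon running; a hedged check (package and path as shipped on CentOS 6, to the best of my knowledge):

  yum install gvfs-fuse
  /usr/libexec/gvfs-fuse-daemon ~/.gvfs   # normally started by the desktop session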
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone,
I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner.
Is there any specific sequence or trick I need to follow? Currently, I am using the following command:
[root at master2 ~]# systemctl stop glusterd.service
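Even after that, a quick check still shows survivors (the [g] trick just keeps grep out of its own results):

  ps ax | grep '[g]luster'   # bricks (glusterfsd) and fuse/shd (glusterfs) keep running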
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone,
We are mounting this external Gluster volume (dc.local:/docker_config) for docker configuration on one of the Gluster servers. When I ran the stop-all-gluster-processes.sh script, I wanted to stop all gluster server-related processes on the server, but not to unmount the external gluster volume mounted on the server. However, running stop-all-gluster-processes.sh unmounted the
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
2024 Feb 24
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant,
why would you need to shut down a brick to expand it? This is an online operation.
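For comparison, the online path is typically a single resize on the mounted LV (names here are placeholders):

  lvextend -r -L +100G /dev/vg_san/lv_brick   # -r grows the filesystem in the same step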
Best Regards,
Strahil Nikolov
2014 Feb 05
1
How to archive mails on different server
Hi Guys,
I want to make some changes to my Exim mail server so that if any user
archives his/her mail, it is saved on another server, not on the Exim
server, and whenever a user wants to search for an old mail, he can go to
the archive folder and search it from there. So basically I want to
set up a different server for archiving and connect it to my Exim
server. So whenever a user
2023 Feb 24
1
Big problems after update to 9.6
Hi David,
It seems like a network issue to me, as it's unable to connect to the other node and is timing out.
A few things you can check:
* Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node.
* Are you binding Gluster to a specific IP that changed after your update?
* Check if you can access port 24007 from the other host.
If
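Quick commands matching that checklist (hostnames are placeholders):

  getent hosts othernode      # confirm /etc/hosts resolution
  ss -tlnp | grep 24007       # see which address glusterd is bound to
  nc -zv othernode 24007      # test reachability of the management port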
2008 Jan 23
1
[LLVMdev] Walking all the predecessors for a basic block
Hi,
Well, yes, I did try your suggestion, but I keep running into a
compilation problem.
The error is:
llvm[0]: Compiling Hello.cpp for Release build (PIC)
/home/saraswat/llvm/llvm-2.1/include/llvm/ADT/GraphTraits.h: In
instantiation of
`llvm::GraphTraits<llvm::ilist_iterator<llvm::BasicBlock> >':
Hello.cpp:59: instantiated from here
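For what it's worth, the usual way to walk predecessors in that era of LLVM is via pred_iterator, a sketch against the 2.x headers:

  #include "llvm/Support/CFG.h"

  // pred_begin/pred_end are built on the same GraphTraits machinery the
  // error above is complaining about.
  void visitPreds(llvm::BasicBlock *BB) {
    for (llvm::pred_iterator PI = llvm::pred_begin(BB),
                             E  = llvm::pred_end(BB); PI != E; ++PI) {
      llvm::BasicBlock *Pred = *PI;
      (void)Pred; // use the predecessor here
    }
  }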