similar to: How to shutdown a node properly ?

Displaying 20 results from an estimated 3000 matches similar to: "How to shutdown a node properly ?"

2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on reboot/shutdown? On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shut down a node, I lose access (from clients) to the volumes > for 42 seconds (network.ping-timeout). Is there a special way to
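A side note on the 42-second figure above: network.ping-timeout is an ordinary per-volume option, so it can be inspected and, with care, lowered. A minimal sketch, assuming a volume named my_volume; the default is deliberately high, and very low values risk spurious disconnects:

  # show the current value
  gluster volume get my_volume network.ping-timeout
  # lower it, e.g. to 10 seconds
  gluster volume set my_volume network.ping-timeout 10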
2017 Jun 29
0
How to shutdown a node properly ?
On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shut down a node, I lose access (from clients) to the > volumes for 42 seconds (network.ping-timeout). Is there a special way > to shut down a node to keep access to the volumes without > interruption? Currently, I use the 'shutdown' or 'reboot' command. > `killall glusterfs glusterfsd glusterd`
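The killall advice in the reply above can be wrapped in a small pre-shutdown step so it always runs before the node goes down. A minimal sketch, assuming you invoke it yourself rather than relying on a stock unit file:

  #!/bin/sh
  # kill client, brick and management processes so connected clients
  # see the connection close immediately instead of waiting for ping-timeout
  killall glusterfs glusterfsd glusterd
  sleep 2
  reboot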
2017 Jun 29
0
How to shutdown a node properly ?
On my nodes, when I use the systemd script to kill gluster (service glusterfs-server stop), only glusterd is killed, so I guess the shutdown doesn't kill everything! From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com] Sent: 29 June 2017 13:41 To: Ravishankar N <ravishankar at redhat.com> Cc: gluster-users at gluster.org; Renaud Fortier <Renaud.Fortier at
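One quick way to verify the claim above on your own nodes is to list which gluster processes survive the service stop. A sketch:

  service glusterfs-server stop
  # anything still shown here (glusterfsd bricks, self-heal daemons, fuse clients)
  # was not stopped by the unit
  ps aux | grep -E 'gluster(d|fs|fsd)' | grep -v grep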
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the systemd script to kill gluster (service > glusterfs-server stop), only glusterd is killed. Then I guess the > shutdown doesn't kill everything! > Killing glusterd does not kill other gluster processes. When you shut down a node, everything obviously gets killed but the client does not get notified
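Because killing glusterd alone is not enough, a clean pre-shutdown stop has to cover the brick and self-heal processes as well. A hedged sketch; the helper script path is an assumption, since only some glusterfs packages ship it:

  if [ -x /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh ]; then
      /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
  else
      killall glusterfs glusterfsd glusterd
  fi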
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown doesn't? I think this is a bug: if I'm shutting down a server, it's obvious that all clients should stop connecting to it.... On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: > The init.d/systemd script doesn't kill gluster automatically on > reboot/shutdown? > > Sounds less like an issue with how it's shut down and more like an issue with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shut down just fine as long as
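For reference, a fuse mount that tolerates a single node going away is usually pointed at more than one volfile server. A minimal sketch; the hostnames, volume name and mount point are placeholders:

  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/my_volume /mnt/gluster

Note that this only affects where the volfile is fetched at mount time; riding out a node shutdown during normal operation still depends on the volume being replicated.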
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
I checked, and I have it like this: # Name of the HA cluster created. # must be unique within the subnet HA_NAME="ganesha-nfs" # # The gluster server from which to mount the shared data volume. HA_VOL_SERVER="tlxdmz-nfs1" # # N.B. you may use short names or long names; you may not use IP addrs. # Once you select one, stay with it as it will be mildly unpleasant to # clean up
2017 Sep 02
0
ganesha error ?
On 09/02/2017 02:09 AM, Renaud Fortier wrote: > Hi, > > I got these errors 3 times since I'm testing gluster with nfs-ganesha. > The clients are PHP apps and when this happens, clients get strange PHP > session errors. Below, the first error only happened once but the other errors > happen every time a client tries to create a new session file. To make > PHP apps work again, I had
2017 Jun 22
1
Volume options appear twice
Hi, This is a list of volume options that appear twice when I run: gluster volume get my_volume all features.grace-timeout features.lock-heal geo-replication.ignore-pid-check geo-replication.indexing network.ping-timeout network.tcp-window-size performance.cache-size Is that normal? Thanks. Gluster version: 3.8.11 on Debian 8
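If it helps, the duplicated names can be extracted directly from that output rather than read by eye. A sketch, assuming the volume is called my_volume:

  gluster volume get my_volume all | awk '{print $1}' | sort | uniq -d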
2017 Sep 01
2
ganesha error ?
Hi, I got these errors 3 times since I'm testing gluster with nfs-ganesha. The clients are PHP apps and when this happens, clients get strange PHP session errors. Below, the first error only happened once but the other errors happen every time a client tries to create a new session file. To make PHP apps work again, I had to restart the client. Do you have an idea of what's happening here?
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi, In your ganesha-ha.conf do you have your virtual IP addresses set something like this?: VIP_tlxdmz-nfs1="192.168.22.33" VIP_tlxdmz-nfs2="192.168.22.34" Renaud From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo Sent: 20 December 2017 04:35 To: gluster-users at gluster.org Subject: [Gluster-users]
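Putting this thread's fragments together, a minimal ganesha-ha.conf sketch would look roughly like the following; the HA_CLUSTER_NODES line is an assumption about the node list, the other values are the ones quoted in these messages:

  # /etc/ganesha/ganesha-ha.conf (sketch)
  HA_NAME="ganesha-nfs"
  HA_VOL_SERVER="tlxdmz-nfs1"
  HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
  VIP_tlxdmz-nfs1="192.168.22.33"
  VIP_tlxdmz-nfs2="192.168.22.34"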
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi, I've just created the gluster setup with NFS-Ganesha again. Glusterfs version 3.8. When I run the command gluster nfs-ganesha enable, it returns success. However, looking at the pcs status, I see this: [root at tlxdmz-nfs1 ~]# pcs status Cluster name: ganesha-nfs Stack: corosync Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum Last updated: Wed Dec 20
2017 Jul 07
1
Ganesha "Failed to create client in recovery dir" in logs
Hi all, I have this entry in the ganesha.log file on the server when mounting the volume on a client: < GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir (/var/lib/nfs/ganesha/v4recov/node0/::ffff:192.168.2.152-(24:Linux NFSv4.2 client-host-name)), errno=2 > But everything seems to work as expected without any other errors (so far).
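errno=2 is ENOENT, so one hedged reading of that log line is simply that the per-node recovery directory has not been created yet. A sketch of checking and creating it; the path is taken from the log line above, and ownership should match whatever user ganesha.nfsd runs as:

  ls -ld /var/lib/nfs/ganesha/v4recov/node0
  mkdir -p /var/lib/nfs/ganesha/v4recov/node0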
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > * put node in maintenance mode (ensure no clients are active) > * yum versionlock delete glusterfs* > * service glusterd stop > * yum update > * systemctl daemon-reload > * service glusterd start > * yum versionlock add glusterfs* > *
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md Definitely my fault, but I think it would be better to specify somewhere that restarting the service is not enough, simply because in many other cases, with other services, it is sufficient. Now I'm restarting every brick process (and waiting for
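For anyone following along, restarting an individual brick process without touching clients is commonly done by killing that one brick and letting a forced volume start respawn it. A sketch, using the volume name from this thread and a placeholder PID:

  # the Pid column shows the brick process to restart
  gluster volume status vm-images-repo
  kill <brick-pid>
  # 'start force' respawns only bricks that are not running
  gluster volume start vm-images-repo force
  # wait for self-heal to catch up before moving to the next brick
  gluster volume heal vm-images-repo info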
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo, Which document did you follow for the upgrade? We can fix the documentation if there are any issues. On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > - put node in maintenance mode (ensure no clients are active)
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think it would be better to specify somewhere that > restarting the service is not enough simply
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Aug 17
1
shared-storage bricks
Hi, I enabled shared storage on my four-node cluster but when I look at the volume info, I only have 3 bricks. Is that supposed to be normal? Thank you
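For what it's worth, the shared-storage volume that cluster.enable-shared-storage creates is, as far as I know, built as a replica 3 volume using three of the peers, so seeing 3 bricks on a four-node cluster may well be expected. A sketch of how to check; gluster_shared_storage is the default volume name:

  gluster volume info gluster_shared_storage
  gluster volume status gluster_shared_storage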
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure: * put node in maintenance mode (ensure no clients are active) * yum versionlock delete glusterfs* * service glusterd stop * yum update * systemctl daemon-reload * service glusterd start * yum versionlock add glusterfs* * gluster volume heal vm-images-repo full * gluster volume heal vm-images-repo info on each server every time
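Written out as commands, the procedure above amounts to the following per-node sketch, run one node at a time with no active clients on that node; the volume name is the poster's:

  yum versionlock delete 'glusterfs*'
  service glusterd stop
  yum update
  systemctl daemon-reload
  service glusterd start
  yum versionlock add 'glusterfs*'
  gluster volume heal vm-images-repo full
  gluster volume heal vm-images-repo info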