Displaying 20 results from an estimated 2000 matches similar to: "Change IP address of few nodes in GFS 3.8"

2017 Nov 03
0
Change IP address of few nodes in GFS 3.8
Thanks Atin. Peer probes were done using FQDNs and I was able to make these changes. The only thing I had to do on the rest of the nodes was to flush nscd; after that everything was good and I did not have to restart gluster services on those nodes. - Hemant On 10/30/17 11:46 AM, Atin Mukherjee wrote: If the gluster nodes are peer probed through FQDNs then you're good. If they're done
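As a minimal sketch (assuming nscd is the name-service cache daemon in use on those nodes), the flush looks like:

    # invalidate nscd's hosts cache so peer FQDNs resolve to the new IPs
    nscd -i hosts
    # or restart the daemon outright
    systemctl restart nscd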
2017 Sep 12
2
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
I was looking to upgrade Gluster server from ver 3.5.x to 3.8.x. I have already tried it in offline upgrade mode and that works; I am interested in knowing whether this upgrade of the gluster server version can be done in online upgrade mode. Many thanks in advance. -- - Hemant Mamtora
2017 Sep 13
1
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
Thanks for your reply. So the way I understand it, the cluster can be upgraded but with downtime, which means that there are no clients writing to this gluster volume, as the volume is stopped. But post upgrade we will still have the data on the gluster volume that we had before the upgrade (but with downtime). - Hemant On 9/13/17 2:33 PM, Diego Remolina wrote: > Nope, not gonna work... I
2017 Sep 13
0
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
Nope, not gonna work... I could never go even from 3.6 to 3.7 without downtime because of the settings change, see: http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html Even when changing options in the older 3.6.x I had installed, my new 3.7.x server would not connect, so I had to pretty much stop gluster on all servers, update to 3.7.x offline, then start gluster and
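As a rough sketch of that offline procedure (package and service names assume a RHEL/CentOS-style install; adjust for your distribution):

    # on every server, during a single cluster-wide outage
    systemctl stop glusterd
    pkill glusterfsd               # stop the brick processes
    pkill glusterfs                # stop self-heal/NFS helper daemons
    yum -y update glusterfs-server # upgrade to the new version
    systemctl start glusterd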
2017 Oct 30
0
Change IP address of few nodes in GFS 3.8
If the gluster nodes are peer probed through FQDNs then you're good. If they're done through IPs then for every node you'd need to replace the old IP with the new IP in all the files in /var/lib/glusterd, rename the filenames that contain the associated old IP, and restart all gluster services. I used to have a script for this which I shared earlier in the users ML, need to dig through my mailbox
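A hypothetical one-node pass over /var/lib/glusterd might look like the following (OLD_IP and NEW_IP are placeholders; stop glusterd and back up the directory first):

    OLD_IP=10.0.0.1 NEW_IP=10.0.1.1
    systemctl stop glusterd
    # rewrite the old IP inside every file that mentions it
    grep -rl "$OLD_IP" /var/lib/glusterd | xargs sed -i "s/$OLD_IP/$NEW_IP/g"
    # rename files whose names embed the old IP (e.g. brick definitions)
    find /var/lib/glusterd -depth -name "*${OLD_IP}*" | while read -r f; do
        mv "$f" "${f//$OLD_IP/$NEW_IP}"
    done
    systemctl start glusterd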
2018 Apr 25
1
RMAN backups on Glusters
Sending again. Can somebody please take a look and let me know if this is doable? Folks, we have glusters running version 3.8.13 and we are using them for RMAN backups. We get errors/warnings: RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43 ORA-19510: failed to set size of 184820 blocks for file
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
How critical is the above? I get plenty of these on all three peers. Hi guys, I've recently upgraded from 3.8 to 3.10 and I'm seeing weird behavior. I see: $ gluster vol status $_vol detail; takes a long time and mostly times out. I do: $ gluster vol heal $_vol info and I see: Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA Status: Transport endpoint is not connected Number
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
This means the shd client is not able to establish the connection with the brick on port 49155. Now this could happen if glusterd has ended up providing a stale port back which is not what the brick is listening to. If you had killed any brick process using the SIGKILL signal instead of SIGTERM this is expected, as portmap_signout is not received by glusterd in the former case and the old portmap entry is
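One assumed way to check for that mismatch is to compare the port glusterd advertises with the one the brick process is actually bound to, and to prefer SIGTERM when stopping bricks by hand (VOLNAME and the pid are placeholders):

    # port glusterd advertises for each brick
    gluster volume status VOLNAME
    # port the brick process is actually listening on
    ss -tlnp | grep glusterfsd
    # SIGTERM lets the brick send its portmap signout; SIGKILL does not
    kill -TERM <brick-pid>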
2018 Jan 21
2
BUG: After stop and start wrong port is advertised
For what it's worth, I just updated some CentOS 7 servers from GlusterFS 3.12.1 to 3.12.4 and hit this bug. Did the patch make it into 3.12.4? I had to use Mike Hulsman's script to check the daemon port against the port in the volume's brick info, update the port, and restart glusterd on each node. Luckily I only have four servers! Hoping I don't have to do this every time I
2018 Jan 22
2
BUG: After stop and start wrong port is advertised
Ouch! Yes, I see two port-related fixes in the GlusterFS 3.12.3 release notes[0][1][2]. I've attached a tarball of all of yesterday's logs from /var/log/glusterd on one of the affected nodes (called "wingu3"). I hope that's what you need. [0] https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.3.md [1] https://bugzilla.redhat.com/show_bug.cgi?id=1507747
2017 Aug 01
2
"other names" - how to clean/get rid of ?
Hi, how do I get rid of entries in "Other names"? Thanks, L.
2017 Aug 24
6
Glusterd proccess hangs on reboot
Here you can find 10 stack trace samples from glusterd. I waited 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 The content of the first stack trace is here:

    Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):
    #0  0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0
    #1  0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0
    #2
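A loop along these lines (assuming pstack is installed and glusterd is the process of interest) would reproduce that sampling:

    # take 10 stack snapshots of glusterd, 10 seconds apart
    for i in $(seq 1 10); do
        pstack "$(pidof glusterd)" > "glusterd_pstack_$i.txt"
        sleep 10
    done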
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin, I've gotten around to this and was able to get the upgrade done using 3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well. On 3.11, though, I notice that gluster/nfs is really made optional and nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha on new clusters but would like to have glusterfs-gnfs on existing clusters so a seamless upgrade
2018 Jan 23
2
BUG: After stop and start wrong port is advertised
Hello, Will we also suffer from this regression in any of the (previously) fixed 3.10 releases? We kept 3.10 and hope to stay stable :/ Regards Jo -----Original message----- From: Atin Mukherjee <amukherj at redhat.com> Sent: Tue 23-01-2018 05:15 Subject: Re: [Gluster-users] BUG: After stop and start wrong port is advertised To: Alan Orth <alan.orth at gmail.com>; CC: Jo
2017 Aug 23
2
Glusterd proccess hangs on reboot
Not yet. Gaurav will be taking a look at it tomorrow. On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Atin, > > Do you have time to check the logs? > > On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com> > wrote: > > Same thing happens with 3.12.rc0. This time perf top shows hanging in > >
2017 Oct 30
2
BUG: After stop and start wrong port is advertised
On Sat, 28 Oct 2017 at 02:36, Jo Goossens <jo.goossens at hosted-power.com> wrote: > Hello Atin, > > > > > > I just read it and I'm very happy you found the issue. We really hope this > will be fixed in the next 3.10.7 version! > 3.10.7 - no, I guess, as the patch is still in review and 3.10.7 is getting tagged today. You'll get this fix in 3.10.8. > > >
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote: > I thought you had removed vna as defective and then ADDED in vnh as > the replacement? > > Why is vna still there? Because I *can't* remove it. It died and was unable to be brought up. The gluster peer detach command only works with live servers - a severe problem IMHO. -- Lindsay Mathieson
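For what it's worth, the gluster CLI has a force variant of peer detach for exactly this case; as an assumption to verify against your version:

    # skip the liveness check and drop the dead peer from the pool
    gluster peer detach vna force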
2017 Dec 02
1
BUG: After stop and start wrong port is advertised
On Sat, 2 Dec 2017 at 19:29, Jo Goossens <jo.goossens at hosted-power.com> wrote: > Hello Atin, > > > > > > Could you confirm this should have been fixed in 3.10.8? If so we'll test > it for sure! > The fix should be part of 3.10.8, which is awaiting its release announcement. > > Regards > > Jo > > > > > > > -----Original
2017 Jun 09
1
substitution of two faulty servers
Thanks for the reply, but the problem is the replica count: I have replication factor 3, so what would be the best way to substitute 2 bricks? As I understand it, first I need to add the server to gluster as a node, afterwards I need to add bricks to the volume (but if the number of bricks is not a multiple of 3 it is not possible), or, if there is another way, please drop me a hint or
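One possible approach (a sketch only; VOLNAME, the server names, and the brick paths are placeholders) is to swap each failed brick for a brick on a replacement server, which keeps the brick count a multiple of the replica count:

    gluster peer probe newserver1
    gluster peer probe newserver2
    gluster volume replace-brick VOLNAME \
        oldserver1:/bricks/b1 newserver1:/bricks/b1 commit force
    gluster volume replace-brick VOLNAME \
        oldserver2:/bricks/b2 newserver2:/bricks/b2 commit force
    # self-heal then repopulates the new bricks from the surviving replicas
    gluster volume heal VOLNAME full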
2018 Jan 22
0
BUG: After stop and start wrong port is advertised
The patch was definitely there in 3.12.3. Do you have the glusterd and brick logs handy from when this happened? On Sun, Jan 21, 2018 at 10:21 PM, Alan Orth <alan.orth at gmail.com> wrote: > For what it's worth, I just updated some CentOS 7 servers from GlusterFS > 3.12.1 to 3.12.4 and hit this bug. Did the patch make it into 3.12.4? I had > to use Mike Hulsman's