similar to: Upgrade OS on Server node

Displaying 20 results from an estimated 60000 matches similar to: "Upgrade OS on Server node"

2018 May 23
1
replicate a distributed volume
All, With Gluster 4.0.2, is it possible to take an existing distributed volume and turn it into a distributed-replicate by adding servers/bricks? It seems this should be possible, but I don't know that anything has been done to get it there. Brian Andrus
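For reference, a minimal sketch of that conversion, assuming a two-brick distributed volume named gv_data and new bricks on server3 and server4 (all names illustrative):

    # check the current layout (Type should show "Distribute")
    gluster volume info gv_data

    # add one new brick per existing brick and raise the replica count to 2;
    # gluster pairs old and new bricks into replica sets
    gluster volume add-brick gv_data replica 2 \
        server3:/bricks/brick1/gv_data server4:/bricks/brick1/gv_data

    # trigger self-heal so the new replicas get populated
    gluster volume heal gv_data full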
2018 May 23
0
gluster volume create failed: Host is not in 'Peer in Cluster' state
All, Running glusterfs-4.0.2-1 on CentOS 7.5.1804, I have 10 servers running in a pool. All show as connected when I do gluster peer status and gluster pool list. There is 1 volume running that is distributed on servers 1-5. I try using a brick on server7 and it always gives me: /volume create: GDATA: failed: Host server7 is not in 'Peer in Cluster' state/ Now that happens even ON server7
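A hedged checklist for this symptom, assuming the hostname used in the create command matches the one used when the peer was probed (server7 is from the post, the rest is illustrative):

    # confirm every node agrees on the peer list and state
    gluster peer status
    gluster pool list

    # peers must show "Peer in Cluster (Connected)"; if server7 shows
    # "Accepted peer request" or similar, re-probe it from a healthy node
    gluster peer probe server7

    # make sure glusterd is actually running on server7
    systemctl status glusterd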
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
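The replica 2 to replica 3 step itself is a single add-brick once the third node is upgraded and peered; a minimal sketch, with volume name and brick path as placeholders:

    # from an existing node, after the new node is probed and healthy
    gluster peer probe node3

    # raise the replica count and add the third brick in one command
    gluster volume add-brick myvolume replica 3 node3:/pool/brick/myvolume

    # watch the new brick being populated
    gluster volume heal myvolume info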
2018 Jan 26
0
Replacing a third data node with an arbiter one
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote: > On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote: > > > > On 01/24/2018 07:20 PM, Hoggins! wrote: > > > > Hello, > > > > The subject says it all. I have a replica 3 cluster : > > > > gluster> volume info thedude > >
2017 Jun 01
1
Restore a node in a replicating Gluster setup after data loss
Hi We have a Replica 2 + Arbiter Gluster setup with 3 nodes, Server1, Server2 and Server3, where Server3 is the Arbiter node. There are several Gluster volumes on top of that setup. They all look a bit like this: gluster volume info gv-tier1-vm-01 [...] Number of Bricks: 1 x (2 + 1) = 3 [...] Bricks: Brick1: Server1:/var/data/lv-vm-01 Brick2: Server2:/var/data/lv-vm-01 Brick3:
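One common recovery path, sketched under the assumption that Server2 lost its brick data but keeps the same hostname and brick path (volume and brick names taken from the post):

    # from a healthy node: re-create the brick directory on Server2 first,
    # then tell gluster to treat it as an empty brick to be rebuilt
    gluster volume reset-brick gv-tier1-vm-01 Server2:/var/data/lv-vm-01 start
    gluster volume reset-brick gv-tier1-vm-01 \
        Server2:/var/data/lv-vm-01 Server2:/var/data/lv-vm-01 commit force

    # heal the volume and watch progress
    gluster volume heal gv-tier1-vm-01 full
    gluster volume heal gv-tier1-vm-01 info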
2018 Jan 19
0
Segfaults after upgrade to GlusterFS 3.10.9
Hi Frank, It will be very easy to debug if you have the core file. It looks like the crash is coming from the gfapi stack. If there is a core file, can you please share the bt of the core file. Regards, Jiffin On Thursday 18 January 2018 11:18 PM, Frank Wall wrote: > Hi, > > after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time: > > [12407.918249] ganesha.nfsd[38104]:
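Getting that backtrace is straightforward once the core file and debug symbols are available; a sketch, assuming a CentOS-style system and that the core was written to the current directory (the core file name depends on kernel.core_pattern):

    # install debug symbols so the backtrace has function names
    debuginfo-install glusterfs nfs-ganesha

    # open the core against the binary that crashed and dump the stacks
    gdb /usr/bin/ganesha.nfsd core.38104
    (gdb) bt
    (gdb) thread apply all bt full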
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
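One step that is easy to miss after all nodes are on 3.12: bumping the cluster operating version so newer features become usable. A sketch (check the exact op-version number for your build against the release notes):

    # current cluster op-version
    gluster volume get all cluster.op-version

    # after every node runs 3.12.x, raise it (31200 corresponds to 3.12.0)
    gluster volume set all cluster.op-version 31200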
2018 Jan 29
0
Replacing a third data node with an arbiter one
On 01/29/2018 08:56 PM, Hoggins! wrote: > Thank you for that, however I have a problem. > > On 26/01/2018 at 02:35, Ravishankar N wrote: >> Yes, you would need to reduce it to replica 2 and then convert it to >> arbiter. >> 1. Ensure there are no pending heals, i.e. heal info shows zero entries. >> 2. gluster volume remove-brick thedude replica 2 >>
2023 Sep 29
0
gluster volume status shows -> Online "N" after node reboot.
Hi list, I am using a 3-node replica gluster volume in an oVirt environment, and after setting one node to maintenance mode and rebooting it, the "Online" flag in gluster volume status does not go back to "Y". [root at node1 glusterfs]# gluster volume status Status of volume: my_volume Gluster process TCP Port RDMA Port Online Pid
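When Online stays "N" after a reboot it usually means the brick process (glusterfsd) did not come back; a hedged set of checks to run on the rebooted node (volume name from the post):

    # is the management daemon up, and did it spawn the brick processes?
    systemctl status glusterd
    gluster volume status my_volume

    # check the brick log for the reason it refused to start
    less /var/log/glusterfs/bricks/*.log

    # restart just the missing brick processes without touching clients
    gluster volume start my_volume force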
2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/24/2018 07:20 PM, Hoggins! wrote: > > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e >
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through. ---------- Forwarded message ---------- From: Martin Toth <snowmailer at gmail.com> Date: Thu, Sep 21, 2017 at 9:17 AM Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] To: gluster-users at gluster.org Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com Hello all fellow GlusterFriends, I would like you to comment /
2018 Apr 15
1
unexpected warning message when attempting to build a 4-node gluster setup.
Hi, I am on CentOS 7.4 with gluster 4. I am trying to create a distributed and replicated volume on the 4 nodes and I am getting this unexpected qualification: [root at glustep1 brick1]# gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 8><---- Replica 2 volumes are prone to split-brain. Use
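The warning is gluster pointing out that a plain 2-way replica has no tie-breaker. Two hedged alternatives with the same brick paths as in the post:

    # distributed-replicate 2x2; the CLI will ask you to confirm the
    # split-brain risk before creating the volume
    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0

    # or use three of the nodes as replica 3 with an arbiter for quorum
    gluster volume create gv0 replica 3 arbiter 1 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0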
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
Hi, after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time: [12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 00007f872425fb00 sp 00007f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000] [12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 00007f716d8f5b00 sp 00007f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000] [14531.582667]
2018 Jan 26
0
Replacing a third data node with an arbiter one
On 01/24/2018 07:20 PM, Hoggins! wrote: > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 >
2023 Jan 30
1
change OS on one node
Hi to all, I would like to replace the operating system on one gluster node. Is it possible to keep the data on the brick? If yes, how can I connect the data partition back to the new brick? I will remove the brick from the volume and remove the peer from the pool first. Then I will set up the new OS, configure the new brick and bind the existing partition to the new brick. Or
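If the brick filesystem sits on its own partition it can survive the reinstall; one hedged alternative to removing the brick and peer is to preserve the node's identity across the reinstall (same hostname and brick path assumed, volume name is a placeholder):

    # before the reinstall: save the node's UUID and config
    cat /var/lib/glusterd/glusterd.info
    tar czf glusterd-backup.tar.gz /var/lib/glusterd

    # after installing the new OS and the same gluster version:
    # mount the untouched data partition at the old brick path,
    # restore /var/lib/glusterd (at minimum glusterd.info), then
    systemctl start glusterd
    gluster peer status
    gluster volume heal <volname> info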
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you for that, however I have a problem. On 26/01/2018 at 02:35, Ravishankar N wrote: > Yes, you would need to reduce it to replica 2 and then convert it to > arbiter. > 1. Ensure there are no pending heals, i.e. heal info shows zero entries. > 2. gluster volume remove-brick thedude replica 2 > ngluster-3.network.hoggins.fr:/export/brick/thedude force > 3. gluster volume
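The natural follow-up to step 2 (cut off in the excerpt above) is the add-brick that re-introduces the third brick as an arbiter and returns the volume to replica 3; a hedged sketch, with the arbiter brick path chosen for illustration:

    # after the remove-brick the volume is plain replica 2;
    # add an arbiter brick (metadata only) to restore quorum
    gluster volume add-brick thedude replica 3 arbiter 1 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude-arbiter

    # confirm the new layout and let the arbiter populate
    gluster volume info thedude
    gluster volume heal thedude info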
2017 Dec 11
0
How large the Arbiter node?
Hi, there is a good suggestion here: http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing Since the arbiter brick does not store file data, its disk usage will be considerably less than that of the other bricks of the replica. The sizing of
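Rough arithmetic in the spirit of the linked guide: what matters is the number of files, not the data size, since the arbiter holds only names and metadata. A sketch with illustrative figures and a conservative 4 KB-per-file estimate:

    # assume ~1 TB of data with an average file size of ~1 MB
    files=$(( 1024 * 1024 ))                   # ~1 million files
    echo "$(( files * 4 / 1024 / 1024 )) GB"   # 4 KB/file => ~4 GB arbiter brick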
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3-node gluster cluster (Proxmox based) - gluster 3.8.12 - Replica 3 - VM hosting only - sharded storage Or I should say we *had* a 3-node cluster; one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on 2 nodes with no data loss - gluster for the win! But it's looking like I might have to replace the
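If the dead node really cannot be recovered, the usual route on that version is to bring up a replacement server, peer it, and swap the failed brick; a hedged outline with placeholder host and volume names:

    # from a surviving node, once the replacement server is installed
    gluster peer probe newnode

    # swap the dead brick for the one on the replacement server
    gluster volume replace-brick myvol \
        deadnode:/bricks/brick1/myvol newnode:/bricks/brick1/myvol \
        commit force

    # let the sharded VM images heal, then check for pending entries
    gluster volume heal myvol full
    gluster volume heal myvol info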
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi, When I upgraded my cluster, df started returning some odd numbers for my legacy volumes. For volumes newly created after the upgrade, df works just fine. I have been researching since Monday and have not found any reference to this symptom. "vm-images" is the old legacy volume, "test" is the new one. [root at st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
2017 Jun 01
0
Disconnected gluster node thinks it is still connected...
Hi all, Trying to do some availability testing. We have three nodes: node1, node2, node3. Volumes are all replica 2, across all three nodes. As a test we disconnected node1 by removing the vlan tag for that host on the switch it is connected to. As a result node2 and node3 now show node1 in disconnected status, and show the volumes as degraded. This is expected. However logging in to node1
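To make an isolated node stop serving writes (and stop believing it is healthy), quorum has to be enforced on both the client and server side; a sketch of the usual volume options, with the volume name as a placeholder:

    # client-side quorum: writes need a majority of bricks in the replica
    gluster volume set myvol cluster.quorum-type auto

    # server-side quorum: glusterd stops bricks when it loses the peer majority
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%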