Lindsay Mathieson
2017-Jun-09 00:51 UTC
[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3 node gluster cluster (proxmox based) - gluster 3.8.12 - Replica 3 - VM Hosting Only - Sharded Storage

Or I should say we *had* a 3 node cluster, one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on 2 nodes with no data loss - gluster for the win!

But it's looking like I might have to replace the node with a new server, in which case I won't try anything fancy with reusing the existing data on the failed node's disks; I'd rather let it resync the 3.2TB over the weekend.

In which case, what is the best way to replace the old failed node? The new node would have a new hostname and IP. The failed node is vna; let's call the new node vnd.

I'm thinking the following:

    gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force

    gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4

Would that be all that is required?

Existing setup below:

    gluster v info

    Volume Name: datastore4
    Type: Replicate
    Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
    Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
    Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
    Options Reconfigured:
    cluster.locking-scheme: granular
    cluster.granular-entry-heal: yes
    features.shard-block-size: 64MB
    network.remote-dio: enable
    cluster.eager-lock: enable
    performance.io-cache: off
    performance.read-ahead: off
    performance.quick-read: off
    performance.stat-prefetch: on
    performance.strict-write-ordering: off
    nfs.enable-ino32: off
    nfs.addr-namelookup: off
    nfs.disable: on
    cluster.server-quorum-type: server
    cluster.quorum-type: auto
    features.shard: on
    cluster.data-self-heal: on
    performance.readdir-ahead: on
    performance.low-prio-threads: 32
    user.cifs: off
    performance.flush-behind: on

-- 
Lindsay
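The proposed remove-brick/add-brick sequence can be sketched as a script. This is a sketch only, not a tested procedure: it assumes the new node vnd already has gluster installed with the brick filesystem mounted at /tank/vmdata, and that it still needs to be probed into the trusted pool (a step the commands above omit).

```shell
# Hostnames and brick paths taken from the thread; adjust for your cluster.
VOL="datastore4"
OLD_BRICK="vna.proxmox.softlog:/tank/vmdata/datastore4"
NEW_BRICK="vnd.proxmox.softlog:/tank/vmdata/datastore4"

# Guard: only attempt this on a node that actually has the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    # Bring the replacement node into the trusted pool first.
    gluster peer probe vnd.proxmox.softlog

    # Drop the dead brick, shrinking the replica count to 2...
    gluster volume remove-brick "$VOL" replica 2 "$OLD_BRICK" force

    # ...then add the new brick, restoring replica 3. Self-heal
    # then resyncs the full data set onto the new brick.
    gluster volume add-brick "$VOL" replica 3 "$NEW_BRICK"
fi

echo "plan: $OLD_BRICK -> $NEW_BRICK"
```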
Lindsay Mathieson
2017-Jun-09 02:24 UTC
[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9 June 2017 at 10:51, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:

> Or I should say we *had* a 3 node cluster, one node died today.

Boot SSD failed, definitely a reinstall from scratch. And a big thanks (*not*) to the SMART reporting, which showed no issues at all.

-- 
Lindsay
lemonnierk at ulrar.net
2017-Jun-09 07:11 UTC
[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following:
>
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4

I think that should work perfectly fine, yes - either that or directly use replace-brick?
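The single-command alternative mentioned here would look roughly like this; a sketch, assuming the gluster 3.x syntax where `commit force` is the only supported mode and is what lets the swap proceed while the source brick's server is dead:

```shell
# One-step alternative to remove-brick + add-brick: swap the dead
# brick for the new one in a single operation. "commit force" is
# required because the source brick (on vna) is offline.
VOL="datastore4"
OLD_BRICK="vna.proxmox.softlog:/tank/vmdata/datastore4"
NEW_BRICK="vnd.proxmox.softlog:/tank/vmdata/datastore4"

if command -v gluster >/dev/null 2>&1; then
    gluster volume replace-brick "$VOL" "$OLD_BRICK" "$NEW_BRICK" commit force
fi
```

Self-heal then treats the new brick as empty and resyncs it from the surviving replicas, so the end state is the same as the two-command sequence, with the volume staying at replica 3 throughout.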
lemonnierk at ulrar.net
2017-Jun-09 07:12 UTC
[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12
> And a big thanks (*not*) to the smart reporting which showed no issues at
> all.

Heh, on that, did you think to take a look at the Media_Wearout indicator? I recently learned that existed, and it explained A LOT.
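For anyone wanting to check that, something along these lines works with smartmontools; the attribute name varies by vendor (Intel SSDs report it as Media_Wearout_Indicator, others use names like Wear_Leveling_Count), so grepping loosely is the safer sketch:

```shell
# Inspect SSD wear via SMART vendor attributes. Requires root and
# the smartmontools package. /dev/sda is a placeholder device;
# substitute the actual boot SSD.
DISK="/dev/sda"

if command -v smartctl >/dev/null 2>&1; then
    # -A prints the vendor attribute table; filter for wear-related rows.
    smartctl -A "$DISK" | grep -iE 'wear|media'
fi
```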
Pranith Kumar Karampuri
2017-Jun-09 11:56 UTC
[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote:

>> I'm thinking the following:
>>
>> gluster volume remove-brick datastore4 replica 2
>> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>>
>> gluster volume add-brick datastore4 replica 3
>> vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work perfectly fine yes, either that
> or directly use replace-brick ?

Yes, this should be replace-brick

-- 
Pranith
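Whichever route is taken, the 3.2TB resync onto the new brick can be watched with the heal commands; a sketch, assuming the datastore4 volume from the thread:

```shell
# After add-brick or replace-brick, self-heal copies data onto the
# new brick. These commands show what is still outstanding.
VOL="datastore4"

if command -v gluster >/dev/null 2>&1; then
    gluster volume heal "$VOL" info                   # entries still pending heal
    gluster volume heal "$VOL" statistics heal-count  # per-brick pending counts
fi
```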