search for: softlog

Displaying 20 results from an estimated 21 matches for "softlog".

2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
...let it resync the 3.2TB over the weekend. In which case what is the best way to replace the old failed node? the new node would have a new hostname and IP. Failed node is vna. Let's call the new node vnd I'm thinking the following: gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4 Would that be all that is required? Existing setup below: gluster v info Volume Name: datastore4 Type: Replicate Volume ID: 0ba131ef-311d-4bb1-be46-5...
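The remove-brick/add-brick sequence proposed in this thread could be sketched as follows (a sketch only, not a verified procedure; it assumes Gluster 3.8.x CLI syntax and reuses the thread's hostnames, with vnd as the hypothetical replacement node):

```shell
# Drop the dead node's brick, reducing the replica count from 3 to 2
# (force is needed because vna is unreachable):
gluster volume remove-brick datastore4 replica 2 \
    vna.proxmox.softlog:/tank/vmdata/datastore4 force

# Add the new node's brick, restoring replica 3:
gluster volume add-brick datastore4 replica 3 \
    vnd.proxmox.softlog:/tank/vmdata/datastore4

# Trigger a full heal so the new brick is populated from the surviving replicas:
gluster volume heal datastore4 full
```

As noted later in the thread, `gluster volume replace-brick` is the one-step alternative to this remove/add pair.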
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
...had completed. Finished about 9 hours before I rebooted. > > So you had two good nodes (vnb and vng) working and you rebooted one > of them? Three good nodes - vnb, vng, vnh and one dead - vna from node vng: root at vng:~# gluster peer status Number of Peers: 3 Hostname: vna.proxmox.softlog Uuid: de673495-8cb2-4328-ba00-0419357c03d7 State: Peer in Cluster (Disconnected) Hostname: vnb.proxmox.softlog Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 State: Peer in Cluster (Connected) Hostname: vnh.proxmox.softlog Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 State: Peer in Cluster (Connected)...
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
...to be > specific) then you can actually remove the uuid entry from > /var/lib/glusterd from other nodes Is that just the file entry in "/var/lib/glusterd/peers" ? e.g. I have: gluster peer status Number of Peers: 3 Hostname: vnh.proxmox.softlog Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 State: Peer in Cluster (Connected) Hostname: vna.proxmox.softlog Uuid: de673495-8cb2-4328-ba00-0419357c03d7 State: Peer in Cluster (Disconnected) Hostname: vnb.proxmox.softlog Uuid: 43a1bf8c-3e69-4581-8e16-f2e1...
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote: > > I'm thinking the following: > > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or directly use replace-brick ? > Yes, this shoul...
2017 Sep 21
2
Performance drop from 3.8 to 3.10
...asing the op version I made no changes to the volume settings. op.version is 31004 gluster v info Volume Name: datastore4 Type: Replicate Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4 Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4 Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4 Options Reconfigured: transport.address-family: inet cluster.locking-scheme: granular cluster.granular-entry-heal: yes features.shard-b...
2017 Jun 13
1
How to remove dead peer, sorry urgent again :(
..."gluster peer detach <hostname> force" right? > > > > Just to be sure I setup a test 3 node vm gluster cluster :) then shut down > one of the nodes and tried to remove it. > > > root at gh1:~# gluster peer status > Number of Peers: 2 > > Hostname: gh2.brian.softlog > Uuid: b59c32a5-eb10-4630-b147-890a98d0e51d > > State: Peer in Cluster (Connected) > > Hostname: gh3.brian.softlog > Uuid: 825afc5c-ead6-4c83-97a0-fbc9d8e19e62 > State: Peer in Cluster (Disconnected) > > > root at gh1:~# gluster peer detach gh3 force > peer detach:...
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following: > > gluster volume remove-brick datastore4 replica 2 > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > gluster volume add-brick datastore4 replica 3 > vnd.proxmox.softlog:/tank/vmdata/datastore4 I think that should work perfectly fine yes, either that or directly use replace-brick ? -------------- next part -------------- A non-tex...
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote: > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or directly use replace-brick ? &...
2017 Jun 13
0
How to remove dead peer, sorry urgent again :(
...u at redhat.com> wrote: > We can also do "gluster peer detach <hostname> force" right? Just to be sure I setup a test 3 node vm gluster cluster :) then shut down one of the nodes and tried to remove it. root at gh1:~# gluster peer status Number of Peers: 2 Hostname: gh2.brian.softlog Uuid: b59c32a5-eb10-4630-b147-890a98d0e51d State: Peer in Cluster (Connected) Hostname: gh3.brian.softlog Uuid: 825afc5c-ead6-4c83-97a0-fbc9d8e19e62 State: Peer in Cluster (Disconnected) root at gh1:~# gluster peer detach gh3 force peer detach: failed: gh3 is not part of cluster -- Lindsay ---...
2017 Jun 12
3
How to remove dead peer, sorry urgent again :(
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > > On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson < > lindsay.mathieson at gmail.com> wrote: > >> On 11/06/2017 10:46 AM, WK wrote: >> > I thought you had removed vna as defective and then ADDED in vnh as >> > the replacement? >> > >> > Why is
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote: > Since my node died on Friday I have a dead peer (vna) that needs to be > removed. > > > I had major issues this morning that I haven't resolved yet with all > VM's going offline when I rebooted a node which I *hope* was due to > quorum issues as I now have four peers in the cluster, one dead, three > live.
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote: >> > > Three good nodes - vnb, vng, vnh and one dead - vna > > from node vng: > > root at vng:~# gluster peer status > Number of Peers: 3 > > Hostname: vna.proxmox.softlog > Uuid: de673495-8cb2-4328-ba00-0419357c03d7 > State: Peer in Cluster (Disconnected) > > Hostname: vnb.proxmox.softlog > Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 > State: Peer in Cluster (Connected) > > Hostname: vnh.proxmox.softlog > Uuid: 9eb54c33-7f79-4a75-bc2b-67111...
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
...can actually remove the uuid entry from > /var/lib/glusterd from other nodes > > Is that just the file entry in "/var/lib/glusterd/peers" ? > > > e.g. I have: > > gluster peer status > Number of Peers: 3 > > Hostname: vnh.proxmox.softlog > Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 > State: Peer in Cluster (Connected) > > Hostname: vna.proxmox.softlog > Uuid: de673495-8cb2-4328-ba00-0419357c03d7 > State: Peer in Cluster (Disconnected) > > Hostname: vnb.proxmox.softlog > Uuid: 43a1bf8c-3e69-4581-8e16...
2017 Sep 22
0
Performance drop from 3.8 to 3.10
...; > op.version is 31004 > > gluster v info > > Volume Name: datastore4 > Type: Replicate > Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4 > Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4 > Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4 > Options Reconfigured: > transport.address-family: inet > cluster.locking-scheme: granular > cluster.granular-en...
2017 Jun 10
4
How to remove dead peer, sorry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolved yet with all VM's going offline when I rebooted a node, which I *hope* was due to quorum issues as I now have four peers in the cluster, one dead, three live. Confidence level is not high. -- Lindsay Mathieson
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: > On 11/06/2017 10:46 AM, WK wrote: > > I thought you had removed vna as defective and then ADDED in vnh as > > the replacement? > > > > Why is vna still there? > > Because I *can't* remove it. It died, was unable to be brought up. The > gluster peer detach
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote: > I thought you had removed vna as defective and then ADDED in vnh as > the replacement? > > Why is vna still there? Because I *can't* remove it. It died, was unable to be brought up. The gluster peer detach command only works with live servers, a severe problem IMHO. -- Lindsay Mathieson
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > > > On 11 Jun 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote: > > Yes. And please ensure you do this after bringing down all the glusterd > instances and then once the peer file is removed from all the nodes restart > glusterd on all the
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11 Jun 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote: Yes. And please ensure you do this after bringing down all the glusterd instances and then once the peer file is removed from all the nodes restart glusterd on all the nodes one after another. If you have to bring down all gluster instances before file removal, you also bring down the whole gluster
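The dead-peer removal procedure described in this thread (stop glusterd everywhere, delete the dead peer's definition file, then restart node by node) might look like this on each surviving node. A sketch only: it assumes a systemd-managed glusterd, and the UUID is vna's as shown in the `gluster peer status` output earlier in the thread.

```shell
# Run on EACH surviving node (vnb, vng, vnh), with glusterd stopped
# everywhere first -- removing the file while glusterd runs can be undone
# when peers re-sync their state.
systemctl stop glusterd

# Peer definitions live under /var/lib/glusterd/peers/, one file per peer,
# named by that peer's UUID. This is vna's UUID from the thread:
rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7

# Only after the file is gone from ALL nodes, restart glusterd one node
# at a time:
systemctl start glusterd
```

Note the trade-off Gandalf raises in the reply above: this requires taking every glusterd instance down, i.e. a brief management-plane outage for the whole cluster (brick processes and client I/O are separate daemons and keep running).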
2020 Mar 11
0
[PATCH -next 000/491] treewide: use fallthrough;
...E SENSOR DRIVER: Use fallthrough; BTTV VIDEO4LINUX DRIVER: Use fallthrough; CX88 VIDEO4LINUX DRIVER: Use fallthrough; MEDIA DRIVERS FOR DIGITAL DEVICES PCIE DEVICES: Use fallthrough; MOTION EYE VAIO PICTUREBOOK CAMERA DRIVER: Use fallthrough; SAA7134 VIDEO4LINUX DRIVER: Use fallthrough; SOFTLOGIC 6x10 MPEG CODEC: Use fallthrough; CODA V4L2 MEM2MEM DRIVER: Use fallthrough; SAMSUNG S5P/EXYNOS4 SOC SERIES CAMERA SUBSYSTEM DRIVERS: Use fallthrough; CAFE CMOS INTEGRATED CAMERA CONTROLLER DRIVER: Use fallthrough; OMAP IMAGING SUBSYSTEM (OMAP3 ISP and OMAP4 ISS): Use fallthrough;...