Displaying 14 results from an estimated 14 matches for "vnh".
2017 Jun 11 · 2 · How to remove dead peer, sorry urgent again :(
...vna with vnd but it is probably not fully healed yet because 
> you had 3.8T worth of chunks to copy.
No, the heal had completed. Finished about 9 hours before I rebooted.
>
> So you had two good nodes (vnb and vng) working and you rebooted one 
> of them? 
Three good nodes - vnb, vng, vnh and one dead - vna
from node vng:
root@vng:~# gluster peer status
Number of Peers: 3
Hostname: vna.proxmox.softlog
Uuid: de673495-8cb2-4328-ba00-0419357c03d7
State: Peer in Cluster (Disconnected)
Hostname: vnb.proxmox.softlog
Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
State: Peer in Cluster...
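For reference, heal completion as mentioned above can be confirmed with gluster's heal-info command; a minimal sketch (the volume name "datastore4" is taken from a later thread in this listing and is an assumption here):

    # Each brick should report "Number of entries: 0" once healing is done:
    gluster volume heal datastore4 info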
2017 Jun 11 · 0 · How to remove dead peer, sorry urgent again :(
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote:
>>
>
> Three good nodes - vnb, vng, vnh and one dead - vna
>
> from node vng:
>
> root@vng:~# gluster peer status
> Number of Peers: 3
>
> Hostname: vna.proxmox.softlog
> Uuid: de673495-8cb2-4328-ba00-0419357c03d7
> State: Peer in Cluster (Disconnected)
>
> Hostname: vnb.proxmox.softlog
> Uuid: 43a1...
2017 Jun 11 · 5 · How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as 
> the replacement?
>
> Why is vna still there? 
Because I *can't* remove it. It died and could not be brought back up. The 
gluster peer detach command only works with live servers - a severe 
problem IMHO.
-- 
Lindsay Mathieson
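For reference, gluster's peer detach also accepts a force flag, which may remove a peer that can no longer connect; a hedged sketch, not shown in the thread itself:

    # May succeed where a plain detach refuses because the peer is down
    # (behavior varies by gluster version):
    gluster peer detach vna.proxmox.softlog force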
2017 Jun 11 · 0 · How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson@gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died and could not be brought back up. The
> gluster peer detach command only works with live servers - a severe
> problem IMHO.
If the dead server doesn't host any volume...
2017 Jun 11 · 0 · How to remove dead peer, sorry urgent again :(
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote:
> Since my node died on Friday I have a dead peer (vna) that needs to be 
> removed.
>
>
> I had major issues this morning that I haven't resolved yet with all 
> VMs going offline when I rebooted a node which I *hope* was due to 
> quorum issues as I now have four peers in the cluster, one dead, three 
> live.
>
2017 Jun 11 · 2 · How to remove dead peer, sorry urgent again :(
...doesn't host any volumes (bricks of volumes to be 
> specific) then you can actually remove the uuid entry from 
> /var/lib/glusterd from other nodes 
Is that just the file entry in "/var/lib/glusterd/peers"?
e.g. I have:
    gluster peer status
    Number of Peers: 3
    Hostname: vnh.proxmox.softlog
    Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
    State: Peer in Cluster (Connected)
    Hostname: vna.proxmox.softlog
    Uuid: de673495-8cb2-4328-ba00-0419357c03d7
    State: Peer in Cluster (Disconnected)
    Hostname: vnb.proxmox.softlog
    Uuid: 43a1bf8c-3e6...
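For reference, the uuid entry being discussed is a plain file: glusterd keeps one file per peer under /var/lib/glusterd/peers, named by the peer's UUID. A minimal sketch for locating the dead peer's entry (UUID taken from the status output above):

    # On a surviving node; each file holds uuid=, state= and hostname lines:
    ls /var/lib/glusterd/peers/
    cat /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7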
2017 Jun 10 · 4 · How to remove dead peer, sorry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be 
removed.
I had major issues this morning that I haven't resolved yet with all VMs 
going offline when I rebooted a node which I *hope* was due to quorum 
issues as I now have four peers in the cluster, one dead, three live.
Confidence level is not high.
-- 
Lindsay Mathieson
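For reference, the quorum settings suspected here can be inspected per volume; a hedged sketch (the volume name "datastore4" is taken from the later performance thread and is an assumption here):

    # Client-side and server-side quorum settings for the volume:
    gluster volume get datastore4 cluster.quorum-type
    gluster volume get datastore4 cluster.server-quorum-type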
2017 Sep 21 · 2 · Performance drop from 3.8 to 3.10
...r v info
Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
transport.address-family: inet
cluster.locking-scheme: granular
cluster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off...
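For reference, each entry under "Options Reconfigured" above is applied with gluster's volume set command; a minimal sketch reconstructing two of them (the commands themselves are not shown in the thread):

    # Options are set one key/value pair at a time:
    gluster volume set datastore4 features.shard-block-size 64MB
    gluster volume set datastore4 network.remote-dio enable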
2014 Jan 02 · 0 · Dovecot doesn't seem to read userdb for the first delivery
...://wiki2.dovecot.org/MailLocation
2017 Jun 11 · 0 · How to remove dead peer, sorry urgent again :(
...f volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
>
> Is that just the file entry in "/var/lib/glusterd/peers"?
>
>
> e.g. I have:
>
> gluster peer status
> Number of Peers: 3
>
> Hostname: vnh.proxmox.softlog
> Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
> State: Peer in Cluster (Connected)
>
> Hostname: vna.proxmox.softlog
> Uuid: de673495-8cb2-4328-ba00-0419357c03d7
> State: Peer in Cluster (Disconnected)
>
> Hostname: vnb.proxmox.softlog
> Uuid: 43a1bf8...
2017 Sep 22 · 0 · Performance drop from 3.8 to 3.10
...cate
> Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
> Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
> Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4
> Options Reconfigured:
> transport.address-family: inet
> cluster.locking-scheme: granular
> cluster.granular-entry-heal: yes
> features.shard-block-size: 64MB
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.io...
2017 Jun 11 · 0 · How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <
gandalf.corvotempesta@gmail.com> wrote:
>
>
> On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj@redhat.com> wrote:
>
> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then, once the peer file is removed from all the nodes, restart
> glusterd on all the
2017 Jun 11 · 2 · How to remove dead peer, sorry urgent again :(
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj@redhat.com> wrote:
Yes. And please ensure you do this after bringing down all the glusterd
instances and then, once the peer file is removed from all the nodes, restart
glusterd on all the nodes one after another.
If you have to bring down all gluster instances before file removal, you
also bring down the whole gluster
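For reference, the procedure described above as a hedged sketch across the surviving nodes (the UUID is the dead peer's, from earlier in the thread; stopping glusterd stops only the management daemon, so running brick processes should keep serving the volumes, which speaks to the concern about bringing down "the whole gluster"):

    # 1. On every surviving node:
    systemctl stop glusterd
    # 2. On every surviving node, remove the dead peer's entry:
    rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
    # 3. Restart glusterd on the nodes one after another:
    systemctl start glusterd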
2017 Jun 12 · 3 · How to remove dead peer, sorry urgent again :(
...at 2:12 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>
> On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <
> lindsay.mathieson@gmail.com> wrote:
>
>> On 11/06/2017 10:46 AM, WK wrote:
>> > I thought you had removed vna as defective and then ADDED in vnh as
>> > the replacement?
>> >
>> > Why is vna still there?
>>
>> Because I *can't* remove it. It died and could not be brought back up. The
>> gluster peer detach command only works with live servers - a severe
>> problem IMHO.
>
>
> If th...