Displaying 16 results from an estimated 16 matches for "vnb".
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
...or issues this morning that I haven't resolved yet with all
> VM's going offline when I rebooted a node which I *hope* was due to
> quorum issues as I now have four peers in the cluster, one dead, three
> live.
>
>
Let's see:
According to your previous note, you had vna, vnb and vng all replica 3
in a working cluster.
vna died so you had two 'good' nodes left. All was good.
You replaced vna with vnd but it is probably not fully healed yet cuz
you had 3.8T worth of chunks to copy.
So you had two good nodes (vnb and vng) working and you rebooted one of
the...
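The quorum failure being described can be illustrated with a little arithmetic. This is a toy sketch only; the >50% server-quorum rule is an assumed default for this cluster, not something stated in the thread:

```shell
# Toy arithmetic only: with four peers (vna dead, one live node rebooting),
# an assumed >50% server-quorum rule is no longer satisfied.
TOTAL_PEERS=4
ALIVE=2   # vnb/vng/vnh minus the rebooting node; vna already dead
if [ $((ALIVE * 100)) -gt $((TOTAL_PEERS * 50)) ]; then
    STATUS="quorum held"
else
    STATUS="quorum lost; bricks go offline"
fi
echo "$STATUS"
```

With only 2 of 4 peers alive, 200 is not greater than 200, so quorum is lost, which would match the VMs going offline on reboot.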
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:01 AM, WK wrote:
> You replaced vna with vnd but it is probably not fully healed yet cuz
> you had 3.8T worth of chunks to copy.
No, the heal had completed. Finished about 9 hours before I rebooted.
>
> So you had two good nodes (vnb and vng) working and you rebooted one
> of them?
Three good nodes - vnb, vng, vnh and one dead - vna
from node vng:
root@vng:~# gluster peer status
Number of Peers: 3
Hostname: vna.proxmox.softlog
Uuid: de673495-8cb2-4328-ba00-0419357c03d7
State: Peer in Cluster (Disconnected)
Hostname...
2017 Jun 10
4
How to remove dead peer, sorry urgent again :(
Since my node died on friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet with all VM's
going offline when I rebooted a node, which I *hope* was due to quorum
issues, as I now have four peers in the cluster, one dead, three live.
Confidence level is not high.
--
Lindsay Mathieson
2004 Aug 06
3
more on building with lame
On Tuesday, 26 June 2001 at 11:15, Denys Sene dos Santos wrote:
>
> I'm running Ices 0.1.0, but the reencoding doesn't work
> with lame 3.88. I prefer 3.88, it's cleaner, but the reencoding
> works with 3.86.
How did you get it to build? ices 0.1.0 explicitly checks version
info, and besides that makes library calls that don't exist in 3.86.
you are
2004 Aug 06
2
more on building with lame
...k? :)
lame 3.88 works fine. I don't understand what you mean when you say
lame doesn't support reencoding?
> in which case, version 0.1.0 of ices doesn't support reencoding?
Yes it does.
> ...can someone confirm/deny that ANY version of lame/ices will reencode
> from a VNB file to a fixed-rate stream?
What's a VNB file?
jack.
--- >8 ----
List archives: http://www.xiph.org/archives/
icecast project homepage: http://www.icecast.org/
To unsubscribe from this list, send a message to 'icecast-request@xiph.org'
containing only the word 'unsubscribe...
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote:
>>
>
> Three good nodes - vnb, vng, vnh and one dead - vna
>
> from node vng:
>
> root@vng:~# gluster peer status
> Number of Peers: 3
>
> Hostname: vna.proxmox.softlog
> Uuid: de673495-8cb2-4328-ba00-0419357c03d7
> State: Peer in Cluster (Disconnected)
>
> Hostname: vnb.proxmox.softlog
>...
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
...Number of Peers: 3
Hostname: vnh.proxmox.softlog
Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
State: Peer in Cluster (Connected)
Hostname: vna.proxmox.softlog
Uuid: de673495-8cb2-4328-ba00-0419357c03d7
State: Peer in Cluster (Disconnected)
Hostname: vnb.proxmox.softlog
Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
State: Peer in Cluster (Connected)
Do I just:
rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
On all the live nodes and restart glusterd? Nothing else?
thanks.
--
Lindsay Mathieson
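What's being asked can be sketched as a small script. Hostnames and the UUID are taken from the thread; the script only prints the commands (a dry run), so nothing is executed against a real cluster. Note the ordering caveat given in a later reply: glusterd should be stopped on all nodes before the file is removed.

```shell
# Dry run: print, rather than execute, the proposed cleanup.
# The peer file's name is the dead peer's UUID from `gluster peer status`.
DEAD_UUID="de673495-8cb2-4328-ba00-0419357c03d7"
PEER_FILE="/var/lib/glusterd/peers/${DEAD_UUID}"
for node in vnb vng vnh; do   # the three live peers
    echo "ssh $node rm $PEER_FILE"
    echo "ssh $node systemctl restart glusterd"
done
```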
2017 Sep 21
2
Performance drop from 3.8 to 3.10
...t from increasing the op version I made no changes to the volume
settings.
op.version is 31004
gluster v info
Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
transport.address-family: inet
cluster.locking-scheme: granular
cluster.granular-entry-heal: yes
features.shard-block-size: 64MB
netw...
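For reference, the op-version mentioned above is usually inspected and raised with commands along these lines. This is a hedged sketch of 3.10-era syntax, stored and printed rather than executed:

```shell
# Printed, not executed: querying the effective cluster op-version and
# bumping it once every node has been upgraded (assumed 3.10-era syntax).
CMDS='gluster volume get all cluster.op-version
gluster volume set all cluster.op-version 31004'
echo "$CMDS"
```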
2004 Aug 06
0
more on building with lame
...than 3.88?
BUT
versions above 3.86 don't support reencoding?
in which case, version 0.1.0 of ices doesn't support reencoding?
that's useful to know (in an 'I just wasted half a day' kind of way ;-)
...can someone confirm/deny that ANY version of lame/ices will reencode
from a VNB file to a fixed-rate stream?
(and hopefully will cut/paste some of this into the FAQ?)
--- >8 ----
List archives: http://www.xiph.org/archives/
icecast project homepage: http://www.icecast.org/
To unsubscribe from this list, send a message to 'icecast-request@xiph.org'
containing only...
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
...er of Peers: 3
>
> Hostname: vnh.proxmox.softlog
> Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
> State: Peer in Cluster (Connected)
>
> Hostname: vna.proxmox.softlog
> Uuid: de673495-8cb2-4328-ba00-0419357c03d7
> State: Peer in Cluster (Disconnected)
>
> Hostname: vnb.proxmox.softlog
> Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
> State: Peer in Cluster (Connected)
>
> Do I just:
>
> rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
>
>
Yes. And please ensure you do this after bringing down all the glusterd
instances and then...
2017 Sep 22
0
Performance drop from 3.8 to 3.10
...ettings.
>
> op.version is 31004
>
> gluster v info
>
> Volume Name: datastore4
> Type: Replicate
> Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
> Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
> Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4
> Options Reconfigured:
> transport.address-family: inet
> cluster.locking-scheme: granular
> cluster.granular-entry-heal: yes
> f...
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
>
>
> Il 11 giu 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> ha scritto:
>
> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then once the peer file is removed from all the nodes restart
> glusterd on all the
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
Il 11 giu 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> ha scritto:
Yes. And please ensure you do this after bringing down all the glusterd
instances and then once the peer file is removed from all the nodes restart
glusterd on all the nodes one after another.
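The advised sequence can be sketched as a dry run. Hostnames and the UUID come from earlier in the thread; echo keeps this illustrative rather than destructive:

```shell
# Dry-run sketch of the advised order: stop glusterd everywhere, remove the
# stale peer file everywhere, then restart one node at a time.
NODES="vnb vng vnh"
PEER_FILE="/var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7"
for node in $NODES; do echo "ssh $node systemctl stop glusterd"; done
for node in $NODES; do echo "ssh $node rm -f $PEER_FILE"; done
for node in $NODES; do echo "ssh $node systemctl start glusterd"; done
```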
If you have to bring down all gluster instances before file removal, you
also bring down the whole gluster
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died, was unable to be brought up. The
> gluster peer detach command
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
...mox.softlog:/tank/vmdata/datastore4
Would that be all that is required?
Existing setup below:
gluster v info
Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
cluster.locking-scheme: granular
cluster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster....
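The replacement step being asked about is typically a single replace-brick call followed by a heal. This is a sketch only, not verified against 3.8.12; the new-brick host is illustrative and should be substituted with the actual replacement node. The commands are printed, not executed:

```shell
# Sketch (unverified on 3.8.12): swap the dead brick, then let the heal run.
# NEW_BRICK's host is illustrative; use the real replacement node.
VOL="datastore4"
OLD_BRICK="vna.proxmox.softlog:/tank/vmdata/datastore4"
NEW_BRICK="vnh.proxmox.softlog:/tank/vmdata/datastore4"
echo "gluster volume replace-brick $VOL $OLD_BRICK $NEW_BRICK commit force"
echo "gluster volume heal $VOL full"
```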
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as
> the replacement?
>
> Why is vna still there?
Because I *can't* remove it. It died and was unable to be brought back up.
The gluster peer detach command only works with live servers - a severe
problem IMHO.
--
Lindsay Mathieson
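For completeness, gluster peer detach also has a force variant that is sometimes suggested for dead peers. Hedged: whether it succeeds for a node that can never reconnect varies by situation and version, which is why the manual peer-file removal came up at all. Printed, not executed:

```shell
# Printed, not executed: the force variant of peer detach.
CMD="gluster peer detach vna.proxmox.softlog force"
echo "$CMD"   # may still fail if glusterd cannot reconcile the peer state
```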