2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage
Or I should say we *had* a 3 node cluster; one node died today. Possibly I
can recover it, in which case no issues, we just let it heal itself. For
now it's running happily on 2 nodes with no data loss - gluster for the win!
But it's looking like I might have to replace the
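A minimal sketch of watching the self-heal progress referred to above, assuming the volume name datastore4 that appears later in these threads:

    gluster volume heal datastore4 info                    # entries still pending heal
    gluster volume heal datastore4 statistics heal-count   # pending count per brick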
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:01 AM, WK wrote:
> You replaced vna with vnd but it is probably not fully healed yet because
> you had 3.8T worth of chunks to copy.
No, the heal had completed. Finished about 9 hours before I rebooted.
>
> So you had two good nodes (vnb and vng) working and you rebooted one
> of them?
Three good nodes - vnb, vng, vnh and one dead - vna
from node vng:
root at
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
Is that just the file entry in "/var/lib/glusterd/peers"?
e.g. I have:
gluster peer status
Number of Peers: 3
Hostname: vnh.proxmox.softlog
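For reference, the peer entries in question are one small file per UUID under /var/lib/glusterd/peers/ on each surviving node. A sketch of inspecting them, using vna's UUID as quoted later in this thread:

    ls /var/lib/glusterd/peers/
    cat /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
    # typical contents: uuid=..., state=..., hostname1=... lines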
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote:
> > I'm thinking the following:
> >
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly
substantial drop in read/write performance
env:
- 3 node, replica 3 cluster
- Private dedicated Network: 1Gx3, bond: balance-alb
- was able to down the volume for the upgrade and reboot each node
- Usage: VM Hosting (qemu)
- Sharded Volume
- sequential read performance in VMs has dropped from 700 Mbps to 300 Mbps
- Seq Write
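For a like-for-like comparison before and after the upgrade, a simple sequential-read check inside a guest might look like this (the test file path and size here are illustrative, not from the original post; run as root):

    sync; echo 3 > /proc/sys/vm/drop_caches    # drop the page cache so reads hit the volume
    dd if=/var/lib/test/bigfile of=/dev/null bs=1M count=4096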
2017 Jun 13
1
How to remove dead peer, sorry urgent again :(
On Tue, 13 Jun 2017 at 06:39, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
>
> On 13 June 2017 at 02:56, Pranith Kumar Karampuri <pkarampu at redhat.com>
> wrote:
>
>> We can also do "gluster peer detach <hostname> force" right?
>
>
>
> Just to be sure I set up a test 3 node VM gluster cluster :) then shut down
> one of the nodes
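The command under test, for reference (hostname is a placeholder; as noted further down the thread, detach only succeeds while glusterd on the peer is still reachable):

    gluster peer detach <hostname> force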
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following:
>
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4
I think that should work perfectly fine, yes; either that
or directly use replace-brick?
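As a sketch, the replace-brick alternative mentioned here would collapse the two steps into one, reusing the brick paths from the quoted commands (in 3.x, to my knowledge, only the commit force form is still supported):

    gluster volume replace-brick datastore4 \
        vna.proxmox.softlog:/tank/vmdata/datastore4 \
        vnd.proxmox.softlog:/tank/vmdata/datastore4 \
        commit force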
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
>
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work perfectly fine, yes; either that
> or
2017 Jun 13
0
How to remove dead peer, sorry urgent again :(
On 13 June 2017 at 02:56, Pranith Kumar Karampuri <pkarampu at redhat.com>
wrote:
> We can also do "gluster peer detach <hostname> force" right?
Just to be sure I set up a test 3 node VM gluster cluster :) then shut down
one of the nodes and tried to remove it.
root at gh1:~# gluster peer status
Number of Peers: 2
Hostname: gh2.brian.softlog
Uuid:
2017 Jun 12
3
How to remove dead peer, sorry urgent again :(
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
> On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <
> lindsay.mathieson at gmail.com> wrote:
>
>> On 11/06/2017 10:46 AM, WK wrote:
>> > I thought you had removed vna as defective and then ADDED in vnh as
>> > the replacement?
>> >
>> > Why is vna
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote:
> Since my node died on Friday I have a dead peer (vna) that needs to be
> removed.
>
> I had major issues this morning that I haven't resolved yet with all
> VMs going offline when I rebooted a node which I *hope* was due to
> quorum issues as I now have four peers in the cluster, one dead, three
> live.
>
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote:
>>
>
> Three good nodes - vnb, vng, vnh and one dead - vna
>
> from node vng:
>
> root at vng:~# gluster peer status
> Number of Peers: 3
>
> Hostname: vna.proxmox.softlog
> Uuid: de673495-8cb2-4328-ba00-0419357c03d7
> State: Peer in Cluster (Disconnected)
>
> Hostname: vnb.proxmox.softlog
> Uuid:
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:26, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
>
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
>
> Is that just the file entry in
2017 Sep 22
0
Performance drop from 3.8 to 3.10
Could you disable cluster.eager-lock and try again?
-Krutika
On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:
> Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial
> drop in read/write performance
>
> env:
>
> - 3 node, replica 3 cluster
>
> - Private dedicated Network: 1Gx3, bond: balance-alb
>
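Spelled out, the suggested test is a single volume option toggle (the volume name is a placeholder, since the thread does not name it):

    gluster volume set <volname> cluster.eager-lock off
    # re-run the benchmark, then revert:
    gluster volume set <volname> cluster.eager-lock on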
2017 Jun 10
4
How to remove dead peer, sorry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet with all VMs
going offline when I rebooted a node which I *hope* was due to quorum
issues as I now have four peers in the cluster, one dead, three live.
Confidence level is not high.
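A sketch of the quorum settings worth checking in a situation like this (volume name again a placeholder):

    gluster volume get <volname> cluster.quorum-type          # client-side quorum
    gluster volume get <volname> cluster.server-quorum-type   # server-side quorum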
--
Lindsay Mathieson
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died and could not be brought back up. The
> gluster peer detach command
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as
> the replacement?
>
> Why is vna still there?
Because I *can't* remove it. It died and could not be brought back up. The
gluster peer detach command only works with live servers - a severe
problem IMHO.
--
Lindsay Mathieson
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
>
>
> On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote:
>
> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then once the peer file is removed from all the nodes restart
> glusterd on all the
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote:
Yes. And please ensure you do this after bringing down all the glusterd
instances and then once the peer file is removed from all the nodes restart
glusterd on all the nodes one after another.
If you have to bring down all gluster instances before file removal, you
also bring down the whole gluster
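Atin's procedure, assembled as a hedged sketch (run on every surviving node; the service name varies by distro, e.g. glusterd or glusterfs-server; note that stopping glusterd does not stop the glusterfsd brick processes, so the bricks keep serving data while the management daemon is down):

    systemctl stop glusterd                          # 1. on all nodes
    rm /var/lib/glusterd/peers/<dead-peer-uuid>      # 2. on all nodes
    systemctl start glusterd                         # 3. one node after another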
2020 Mar 11
0
[PATCH -next 000/491] treewide: use fallthrough;
...E SENSOR DRIVER: Use fallthrough;
BTTV VIDEO4LINUX DRIVER: Use fallthrough;
CX88 VIDEO4LINUX DRIVER: Use fallthrough;
MEDIA DRIVERS FOR DIGITAL DEVICES PCIE DEVICES: Use fallthrough;
MOTION EYE VAIO PICTUREBOOK CAMERA DRIVER: Use fallthrough;
SAA7134 VIDEO4LINUX DRIVER: Use fallthrough;
SOFTLOGIC 6x10 MPEG CODEC: Use fallthrough;
CODA V4L2 MEM2MEM DRIVER: Use fallthrough;
SAMSUNG S5P/EXYNOS4 SOC SERIES CAMERA SUBSYSTEM DRIVERS: Use fallthrough;
CAFE CMOS INTEGRATED CAMERA CONTROLLER DRIVER: Use fallthrough;
OMAP IMAGING SUBSYSTEM (OMAP3 ISP and OMAP4 ISS): Use fallthrough;
VIC...