2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Procedure looks good.
Remember to back up Gluster config files before update:
/etc/glusterfs
/var/lib/glusterd
If you are *not* on the latest 3.7.x, you are unlikely to be able to go
back to it because the PPA only keeps the latest version of each major branch,
so keep that in mind. With Ubuntu, every time you update, make sure to
download and keep a manual copy of the .deb files. Otherwise you
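A minimal sketch of that backup step, assuming nothing beyond the paths named above (the archive name and download directory are only examples):

# back up the Gluster configuration before touching the packages
tar czf /root/gluster-config-backup.tar.gz /etc/glusterfs /var/lib/glusterd

# keep a local copy of the currently published .deb packages (Ubuntu/PPA);
# apt-get download fetches whatever version the repository offers right now
mkdir -p /root/gluster-debs
cd /root/gluster-debs && apt-get download glusterfs-server glusterfs-client glusterfs-common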
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps on a replica 2 volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through.
---------- Forwarded message ----------
From: Martin Toth <snowmailer at gmail.com>
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users at gluster.org
Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
Hello all fellow GlusterFriends,
I would like you to comment /
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi,
thanks for the suggestions. Yes, "gluster peer probe node3" will be the first command, so that Gluster discovers the 3rd node.
I am running on the latest 3.7.x - there is 3.7.6-1ubuntu1 installed, and the latest 3.7.x according to https://packages.ubuntu.com/xenial/glusterfs-server is 3.7.6-1ubuntu1, so this should be OK.
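A rough sketch of the replica 2 -> replica 3 step being discussed; the volume name, the hostname node3 and the brick path are placeholders, not taken from the thread:

# let the existing pool discover the new node
gluster peer probe node3
gluster peer status

# add the third brick and raise the replica count to 3
gluster volume add-brick myvolume replica 3 node3:/data/brick1/myvolume

# trigger self-heal onto the new brick and watch progress
gluster volume heal myvolume full
gluster volume heal myvolume info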
> If you are *not* on
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps on a replica 2 volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel.
Is there a difference between native client (fuse) and libgfapi in
regards to the crashing/read-only behaviour?
We use Rep2 + Arb and can shut down a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However we are still on native client as we haven't had time to work on
libgfapi yet. Maybe that is more tolerant.
We have linux VMs mostly
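For anyone comparing the two access paths, a hedged sketch (hostname, volume and image names are made up): the native client goes through a FUSE mount, while libgfapi lets qemu open the image directly over a gluster:// URI (if qemu is built with gluster support).

# FUSE: mount the volume and point the VM at a file on the mount
mount -t glusterfs node1:/vmvol /mnt/vmvol
qemu-img info /mnt/vmvol/guest1.qcow2

# libgfapi: access the same image directly, no mount involved
qemu-img info gluster://node1/vmvol/guest1.qcow2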
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi,
On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote:
> Pavel.
>
> Is there a difference between native client (fuse) and libgfapi in regards
> to the crashing/read-only behaviour?
I switched to FUSE now and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mount.glusterfs the same volume on a different server (not a VM),
running
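The manual mount mentioned here would look roughly like this (server, volume and mount point are placeholders):

# mount the same volume on a separate test machine
mount -t glusterfs node1:/vmvol /mnt/test
# the FUSE client log for this mount ends up under /var/log/glusterfs/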
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, and the
remaining up brick is dirty as far as the arbiter is concerned (i.e. the
only good copy is on the down brick), you will get ENOTCONN and your VMs
will halt on IO.
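The client-quorum option in question can be inspected and set per volume like this (volume name is a placeholder):

# show the current client-quorum setting
gluster volume get myvolume cluster.quorum-type

# with 'auto', writes need a majority of the replica bricks up
# (for replica 2 that majority must include the first brick)
gluster volume set myvolume cluster.quorum-type auto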
On 6 September 2017 at 16:06,
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly
substantial drop in read/write performance
env:
- 3 node, replica 3 cluster
- Private dedicated Network: 1Gx3, bond: balance-alb
- was able to down the volume for the upgrade and reboot each node
- Usage: VM Hosting (qemu)
- Sharded Volume
- sequential read performance in VMs has dropped from 700Mbps to 300Mbps
- Seq Write
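For what it is worth, a simple way to reproduce the sequential numbers inside a guest (file name and size are arbitrary):

# sequential write, then read the file back bypassing the page cache
dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct
dd if=testfile of=/dev/null bs=1M iflag=direct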
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update.
I've set up a replica 3 volume without sharding and tried to install a VM
on a qcow2 volume on that device; however the result is the same and the
VM image has been corrupted, exactly at the same point.
Here's the volume info of the create volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number
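A quick way to confirm corruption on the image itself (the path is a placeholder) is qemu-img's consistency check:

# check the qcow2 metadata for errors
qemu-img check /path/to/vm-image.qcow2
qemu-img info /path/to/vm-image.qcow2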
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only
with libgfapi.
-Krutika
On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Another update.
>
> I've setup a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however the result is the same and the vm
>
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After another test (I'm trying to convince myself about Gluster reliability
:-) I've found that with
performance.write-behind off
the vm works without problem. Now I'll try with write-behind on and
flush-behind on too.
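For reference, the options being toggled (using the gvtest volume name from the volume info elsewhere in this thread):

# the setting that made the VM work
gluster volume set gvtest performance.write-behind off

# the follow-up test: write-behind back on, plus flush-behind
gluster volume set gvtest performance.write-behind on
gluster volume set gvtest performance.flush-behind on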
On 18/01/2018 13:30, Krutika Dhananjay wrote:
> Thanks for that input. Adding Niels since the issue is reproducible
> only with libgfapi.
>
>
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have been facing another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4).
For example, removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards listing files in the trashcan :
tron at gl-node1:/myvol-1/test1$
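For context, the trashcan feature referred to is enabled per volume roughly like this (using the myvol-1 name from the paths above):

# enable the trash translator; deleted files then land under .trashcan
gluster volume set myvol-1 features.trash on
ls /myvol-1/.trashcan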
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
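Whether a volume has actually landed in that state can be checked with the heal info commands (volume name is a placeholder):

# list files pending heal and any genuinely in split-brain
gluster volume heal myvolume info
gluster volume heal myvolume info split-brain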
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder
than with just replica 2 + arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%
>
2017 Sep 06
2
GlusterFS as virtual machine storage
you need to set
cluster.server-quorum-ratio 51%
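A hedged sketch of that setting (volume name is a placeholder); the ratio is a cluster-wide option, hence the special 'all' volume, and server quorum also has to be enabled on the volume itself:

# cluster-wide server-quorum ratio
gluster volume set all cluster.server-quorum-ratio 51%

# enable server quorum enforcement on the volume
gluster volume set myvolume cluster.server-quorum-type server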
On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi all,
>
> I have promised to do some testing and I finally find some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created
> replicated volume with arbiter (2+1) and VM on KVM (via
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Hi,
after our IRC chat I've rebuilt a virtual machine with a FUSE-based
virtual disk. Everything worked flawlessly.
Now I'm sending you the output of the requested getfattr command on the
disk image:
# file: TestFUSE-vda.qcow2
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x40ffafbbe987445692bb31295fa40105
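The getfattr invocation that typically produces output like the above, run against the file on a brick (the brick path is a placeholder, the file name is from the mail):

# dump all extended attributes of the image on the brick, hex-encoded
getfattr -d -m . -e hex /bricks/brick1/gvtest/TestFUSE-vda.qcow2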
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start the glusterfsd on the node I was shutting down
yesterday and now killed another one during the FUSE test, so it had to
crash immediately (only one of three nodes was actually up). This
definitely happened for the first time (only one node had been killed
yesterday).
Using FUSE seems to be OK with replica 3. So this can be gfapi related
or maybe rather libvirt related.
I tried
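Before repeating the test it may help to confirm that every brick process really is up (volume name is a placeholder):

# show which bricks (glusterfsd processes) are online
gluster volume status myvolume
# or check for the brick daemons directly on each node
pgrep -a glusterfsd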
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy,
The heal command is basically used to heal any mismatching contents between
replica copies of the files.
For the command "gluster volume heal <volname>" to succeed, you should have
the self-heal-daemon running,
which is true only if your volume is of type replicate/disperse.
In your case you have a plain distribute volume where you do not store the
replica of any
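In other words, on a replicate or disperse volume the Self-heal Daemon shows up in the status output and the heal command succeeds; a rough sketch with a placeholder volume name:

# the Self-heal Daemon is listed here for replicated/dispersed volumes
gluster volume status myvolume

# trigger and inspect healing
gluster volume heal myvolume
gluster volume heal myvolume info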
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
This is what I have so far. I have peered both cluster nodes together as a replica, from nodes 1A and 1B. Now when I try to add it, I get the error that it is already part of a volume. When I run gluster volume info, I see that it has switched to distributed-replicate.
Thanks
Jose
[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process
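For reference, a hedged sketch of how a second pair of bricks is usually added to a two-node replica 2 volume (the scratch volume name is taken from the status output above; hostnames and brick paths are placeholders). Adding a second replica pair is exactly what turns a plain replica 2 volume into Distributed-Replicate, so the type change described above is expected rather than an error:

# add one new brick from each node as a second replica pair
gluster volume add-brick scratch replica 2 gluster01:/bricks/brick2/scratch gluster02:/bricks/brick2/scratch
gluster volume info scratch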