Displaying 20 results from an estimated 6000 matches similar to: "Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]"
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through.
---------- Forwarded message ----------
From: Martin Toth <snowmailer at gmail.com>
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users at gluster.org
Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
Hello all fellow GlusterFriends,
I would like you to comment /
2017 Sep 20
3
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume running gluster 3.7.x.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has (see the sketch after this excerpt).
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
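A minimal sketch of the replica 2 -> replica 3 change being proposed; the
volume name (vmvol) and brick path are placeholders, not taken from the thread:

  # discover the new node, then grow the replica count by adding its brick
  gluster peer probe node3
  gluster volume add-brick vmvol replica 3 node3:/bricks/vmvol
  # wait for self-heal to copy the data onto the new brick
  gluster volume heal vmvol info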
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Procedure looks good.
Remember to back up Gluster config files before update:
/etc/glusterfs
/var/lib/glusterd
If you are *not* on the latest 3.7.x, you are unlikely to be able to go
back to it, because the PPA only keeps the latest version of each major branch,
so keep that in mind. With Ubuntu, every time you update, make sure to
download and keep a manual copy of the .deb files. Otherwise you
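One way to take the backup this post recommends (archive name and location
are arbitrary):

  # snapshot the gluster config before touching any packages
  tar czf /root/gluster-config-$(date +%F).tar.gz /etc/glusterfs /var/lib/glusterd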
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume running gluster 3.7.x.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi,
thanks for the suggestions. Yes, "gluster peer probe node3" will be the first command, so that Gluster discovers the 3rd node.
I am running the latest 3.7.x - 3.7.6-1ubuntu1 is installed, and the latest 3.7.x according to https://packages.ubuntu.com/xenial/glusterfs-server is 3.7.6-1ubuntu1, so this should be OK.
> If you are *not* on
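A quick way to confirm the installed vs. available package version on
Ubuntu (standard apt tooling, not quoted from the thread):

  # installed and candidate versions from all configured repos
  apt-cache policy glusterfs-server
  # version of the running binaries
  glusterfs --version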
2017 Oct 01
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi Diego,
I've tried the upgrade followed by extending gluster with a 3rd node in a VirtualBox test environment, and everything went without problems.
Sharding will not help me at this time, so I will consider upgrading from 1G to 10G networking before running this procedure in production. That should lower the downtime - the healing time of the VM image files on Gluster.
I hope healing will take as little time as possible on 10G.
Additional info for
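To watch the healing that dominates this downtime window, something like the
following can be run on any server node (vmvol is a placeholder volume name):

  # list entries still pending heal; empty output means healing is done
  gluster volume heal vmvol info
  # per-brick heal statistics for a coarser progress view
  gluster volume heal vmvol statistics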
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
Hi,
after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:
[12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 00007f872425fb00 sp 00007f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000]
[12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 00007f716d8f5b00 sp 00007f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000]
[14531.582667]
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel.
Is there a difference between the native client (fuse) and libgfapi with
regard to the crashing/read-only behaviour?
We use Rep2 + Arb and can shut down a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However we are still on the native client, as we haven't had time to work on
libgfapi yet. Maybe that is more tolerant.
We have linux VMs mostly
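For reference, the two access paths being compared; names are placeholders
and the qemu build is assumed to include gluster support:

  # native client (FUSE): a normal mount, VM disks are plain files on it
  mount -t glusterfs node1:/vmvol /mnt/vmstore
  # libgfapi: qemu opens the image via gluster directly, no mount involved
  qemu-img info gluster://node1/vmvol/guest.qcow2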
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi,
On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote:
> Pavel.
>
> Is there a difference between native client (fuse) and libgfapi in regards
> to the crashing/read-only behaviour?
I switched to FUSE now and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mount.glusterfs the same volume on a different server (not a VM),
running
2017 Sep 10
1
GlusterFS as virtual machine storage
Hey guys,
I got another "reboot crash" with gfapi and this time libvirt-3.2.1
(from cbs.centos.org). Is there anyone who can audit the libgfapi
usage in libvirt? :-)
WK: I use bonded 2x10Gbps and I get crashes only in heavy I/O
situations (fio). Upgrading the system (apt-get dist-upgrade) was ok, so
this might even be related to the amount of IOPS.
-ps
On Sun, Sep 10, 2017 at 6:37 AM, WK
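A plausible fio invocation for the kind of heavy random I/O described (the
poster's actual job file isn't shown; target path is a placeholder):

  fio --name=stress --filename=/mnt/vmstore/fiotest --rw=randwrite \
      --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --size=1g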
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>)
We are also typically on a somewhat slower GlusterFS LAN network (bonded
2x1G, jumbo frames) so that may be a factor.
I'll try to setup a trusted pool to test libgfapi soon.
I'm curious as to how much faster it is, but the fuse mount is fast
enough, dirt simple to use, and just works on all VM ops such as
2014 Apr 06
2
libgfapi failover problem on replica bricks
Hello,
I'm having an issue with rebooting bricks holding images for live KVM
machines (using libgfapi).
I have a replicated+distributed setup of 4 bricks (2x2). The cluster
contains images for a couple of kvm virtual machines.
My problem is that when I reboot a brick containing an image of a
VM, the VM will start throwing disk errors and eventually die.
The gluster volume is made like
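The volume creation command is cut off above; for illustration only, a 2x2
distributed-replicate layout like the one described would be created along
these lines (node names and brick paths are hypothetical):

  # bricks are paired into replica sets of 2, then distributed across pairs
  gluster volume create vmvol replica 2 \
      node1:/bricks/b1 node2:/bricks/b1 \
      node3:/bricks/b2 node4:/bricks/b2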
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really - we're using libgfapi and it's been working perfectly
fine. And trust me, there have been A LOT of various crashes, reboots and
kills of nodes.
Maybe it's a version thing? A new bug in the newer gluster releases that
doesn't affect our 3.7.15.
On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote:
> Well, that makes me feel better.
>
> I've seen all these
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I shut down
yesterday, and now I killed another one during the FUSE test, so it had to
crash immediately (only one of the three nodes was actually up). This
definitely happened for the first time (only one node had been killed
yesterday).
Using FUSE seems to be OK with replica 3. So this can be gfapi related
or maybe rather libvirt related.
I tried
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi Martin,
> Do you mean the latest package from the Ubuntu repository or the latest package
> from the Gluster PPA (3.7.20-ubuntu1~xenial1)?
> Currently I am using the Ubuntu repository package, but I want to use the PPA
> for the upgrade because Ubuntu has old Gluster packages in its repo.
When you switch to the PPA, make sure to download and keep a copy of each
set of gluster .deb packages, otherwise if you ever
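A sketch of the PPA switch plus the recommended package hoarding; the exact
PPA name for the target branch is an assumption:

  # switch to the gluster community PPA (3.12 branch assumed)
  sudo add-apt-repository ppa:gluster/glusterfs-3.12
  sudo apt-get update
  # after upgrading, keep the exact .debs apt downloaded, since the PPA
  # only retains the newest build of each branch
  mkdir -p /root/gluster-debs
  cp /var/cache/apt/archives/glusterfs*.deb /root/gluster-debs/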
2018 Jan 19
0
Segfaults after upgrade to GlusterFS 3.10.9
Hi Frank,
It will be much easier to debug if you have the core file. It looks like the
crash is coming from the gfapi stack.
If there is a core file, can you please share a backtrace (bt) of it?
Regards,
Jiffin
On Thursday 18 January 2018 11:18 PM, Frank Wall wrote:
> Hi,
>
> after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:
>
> [12407.918249] ganesha.nfsd[38104]:
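To produce the backtrace asked for above, assuming a core file was written
(binary path and core file name are examples, not from the thread):

  # print a full backtrace from the core and exit
  gdb -batch -ex 'bt full' /usr/bin/ganesha.nfsd /var/crash/core.38104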
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi,
After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID:
2013 Oct 11
1
libvirt and libgfapi in RHEL 6.5 beta
Dear All,
Very pleased to see that the Red Hat 6.5 beta promises "Native Support
for GlusterFS in QEMU allows native access to GlusterFS volumes using
the libgfapi library"
Can I ask if virt-manager & libvirt can control libgfapi mounts? :)
or do I need to use ovirt? :(
many thanks
Jake
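Not an answer from the thread, but the moving parts look roughly like this
(host, volume and guest names are placeholders; qemu must be built with
gluster support):

  # qemu can create and open images over libgfapi directly, no mount needed
  qemu-img create -f qcow2 gluster://server1/vmvol/test.qcow2 10G
  # in libvirt, the same image is referenced from the domain XML via a
  # <disk type='network'> element with protocol='gluster'
  virsh edit myguest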
2018 Apr 23
0
Gluster + NFS-Ganesha Failover
Hello All,
I am trying to set up a three-way replicated Gluster storage volume which is
exported by NFS-Ganesha.
This 3-node Ganesha cluster is managed by pacemaker and corosync. I want
to use this cluster as a backend for several different web-based
applications as well as storage for mailboxes.
The cluster is working well but after triggering the failover by
stopping the ganesha service on one node,
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only
with libgfapi.
-Krutika
On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Another update.
>
> I've set up a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however the result is the same and the vm
>