similar to: troubleshooting kvm performance on gluster

Displaying 20 results from an estimated 30000 matches similar to: "troubleshooting kvm performance on gluster"

2017 Sep 10
1
GlusterFS as virtual machine storage
Hey guys, I got another "reboot crash" with gfapi, this time with libvirt-3.2.1 (from cbs.centos.org). Is there anyone who can audit the libgfapi usage in libvirt? :-) WK: I use bonded 2x10Gbps and I only get crashes in heavy I/O situations (fio). Upgrading the system (apt-get dist-upgrade) was OK, so this might even be related to the amount of IOPS. -ps On Sun, Sep 10, 2017 at 6:37 AM, WK
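For anyone trying to reproduce the heavy-I/O condition described above, a fio run along these lines (inside the guest, against its gluster-backed disk) generates sustained random writes; the filename, size and runtime here are illustrative placeholders:

    # Sustained 4k random writes; adjust size/runtime to taste.
    fio --name=stress --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --size=2G --runtime=120 --time_based \
        --filename=/root/fio-stress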
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
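For context, the replica 2 to replica 3 conversion being discussed is normally done with peer probe plus add-brick; a rough sketch, assuming hostnames node1-node3 and placeholder volume/brick names:

    # Bring the new node into the trusted pool (run from an existing node).
    gluster peer probe node3

    # Raise the replica count to 3 by adding the new brick.
    gluster volume add-brick myvolume replica 3 node3:/data/brick/myvolume

    # Populate the new brick and watch progress.
    gluster volume heal myvolume full
    gluster volume heal myvolume info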
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through. ---------- Forwarded message ---------- From: Martin Toth <snowmailer at gmail.com> Date: Thu, Sep 21, 2017 at 9:17 AM Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] To: gluster-users at gluster.org Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com Hello all fellow GlusterFriends, I would like you to comment /
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>). We are also typically on a somewhat slower GlusterFS LAN network (bonded 2x1G, jumbo frames), so that may be a factor. I'll try to set up a trusted pool to test libgfapi soon. I'm curious as to how much faster it is, but the fuse mount is fast enough, dirt simple to use, and just works on all VM ops such as
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi, On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: > Pavel. > > Is there a difference between the native client (fuse) and libgfapi in regard > to the crashing/read-only behaviour? I switched to FUSE now and the VM crashed (read-only remount) immediately after one node started rebooting. I tried to mount.glusterfs the same volume on a different server (not a VM), running
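As a reference point, a FUSE mount of the kind being tested, with fallback volfile servers so the client survives the loss of the first node, looks roughly like this (hostnames, volume and mountpoint are placeholders):

    # backup-volfile-servers lets the client fetch the volfile from
    # another node if node1 is down at mount time.
    mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/gv0 /mnt/gv0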
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you. With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
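For comparison, the option under test is toggled per volume; a sketch with a placeholder volume name:

    # Check the current value, then disable trickling writes.
    gluster volume get myvolume performance.write-behind-trickling-writes
    gluster volume set myvolume performance.write-behind-trickling-writes off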
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I was shutting down yesterday, and now I killed another one during the FUSE test, so it had to crash immediately (only one of the three nodes was actually up). This definitely happened for the first time (only one node had been killed yesterday). Using FUSE seems to be OK with replica 3. So this could be gfapi related, or perhaps rather libvirt related. I tried
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Procedure looks good. Remember to back up the Gluster config files before the update: /etc/glusterfs /var/lib/glusterd If you are *not* on the latest 3.7.x, you are unlikely to be able to go back to it because the PPA only keeps the latest version of each major branch, so keep that in mind. With Ubuntu, every time you update, make sure to download and keep a manual copy of the .deb files. Otherwise you
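A minimal backup along the lines suggested above might look like this (the archive name and scratch directory are illustrative):

    # Snapshot the Gluster configuration before upgrading.
    tar czf gluster-config-$(date +%F).tar.gz /etc/glusterfs /var/lib/glusterd

    # Keep copies of the currently installed .deb packages (Ubuntu).
    mkdir -p ~/gluster-debs
    cp /var/cache/apt/archives/glusterfs*.deb ~/gluster-debs/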
2017 Sep 20
3
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really; we are using libgfapi and it's been working perfectly fine. And trust me, there have been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: > Well, that makes me feel better. > > I've seen all these
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi, thanks for the suggestions. Yes, "gluster peer probe node3" will be the first command, in order to have Gluster discover the 3rd node. I am running on the latest 3.7.x - there is 3.7.6-1ubuntu1 installed, and the latest 3.7.x according to https://packages.ubuntu.com/xenial/glusterfs-server is 3.7.6-1ubuntu1, so this should be OK. > If you are *not* on
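Verifying the installed versus available package version before an upgrade like this is quick; a sketch for Ubuntu/Debian systems:

    # Show installed and candidate versions of the server package.
    apt-cache policy glusterfs-server
    dpkg -l | grep gluster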
2017 Oct 13
1
small files performance
Where did you read 2k IOPS? Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even close to 2000 is impossible. On 13 Oct 2017 at 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote: > Depends what you need. > 2K iops for small file writes is not a bad result. > In my case I had a system that was just poorly written and it was >
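To sanity-check the per-disk figure, the raw random-read IOPS of a single spindle can be measured directly; a sketch assuming /dev/sdX is the disk in question (read-only, so it does not touch the data):

    # ~75 IOPS is indeed typical for a 7.2k RPM SATA disk at 4k random reads.
    fio --name=disk-iops --filename=/dev/sdX --rw=randread --bs=4k \
        --iodepth=32 --direct=1 --runtime=30 --time_based --readonly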
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/18/2018 6:13 PM, Sam McLeod wrote: Even your NFS transfers are 12.5 or so MB per second or less. 1) Did you use fdisk and LVM under that XFS filesystem? 2) Did you benchmark the XFS with something like bonnie++? (There are probably newer benchmark suites now.) 3) Did you benchmark your network transfer speeds? Perhaps your NIC negotiated a lower speed. 4) I've done XFS tuning
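The link-speed question in particular is quick to check; a sketch assuming eth0 is the storage NIC and iperf3 is installed on both nodes (names are placeholders):

    # Verify negotiated speed and duplex on the storage NIC.
    ethtool eth0

    # Measure raw TCP throughput between two nodes.
    iperf3 -s                # on node1
    iperf3 -c node1 -t 30    # on node2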
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all, I have promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack) with its disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 group virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
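For reference, a (2+1) arbiter volume tuned for VM images is created roughly like this (hostnames and brick paths are placeholders; the volume name follows the post):

    # Replica 3 with the third brick as arbiter (holds metadata only).
    gluster volume create gv_openstack_1 replica 3 arbiter 1 \
        node1:/data/brick/gv1 node2:/data/brick/gv1 node3:/data/brick/gv1

    # Apply the virt profile (sharding, eager locking, quorum settings, ...).
    gluster volume set gv_openstack_1 group virt
    gluster volume start gv_openstack_1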
2018 Apr 23
0
Gluster + NFS-Ganesha Failover
Hello All, I am trying to set up a three-way replicated Gluster storage which is exported by NFS-Ganesha. This 3-node Ganesha cluster is managed by Pacemaker and Corosync. I want to use this cluster as a backend for several different web-based applications as well as storage for mailboxes. The cluster is working well, but after triggering the failover by stopping the ganesha service on one node,
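When chasing this kind of failover problem, it helps to watch the cluster and the export from both sides; a sketch assuming pcs manages the resources and 192.0.2.10 is the floating VIP (both are assumptions, not details from the post):

    # On a cluster node: confirm where the VIP and ganesha resources run.
    pcs status

    # From a client: check the export stays visible through the VIP
    # after stopping ganesha on one node.
    showmount -e 192.0.2.10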
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > > > Hi all, > > >
2017 Sep 06
2
GlusterFS as virtual machine storage
You need to set cluster.server-quorum-ratio to 51%. On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi all, > > I have promised to do some testing and I finally found some time and > infrastructure. > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created > a replicated volume with arbiter (2+1) and a VM on KVM (via
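The setting referred to above is a glusterd-level option, so it is applied to "all" rather than to a single volume; a sketch:

    gluster volume set all cluster.server-quorum-ratio 51%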
2013 Oct 17
0
Gluster Community Congratulates OpenStack Developers on Havana Release
The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community.
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure - what is the reason for IOPS to stop/fail? Rebooting a node is somewhat similar to updating gluster, replacing cabling, etc. IMO this should not always end up with the arbiter blaming the other node, and even though I did not investigate this issue deeply, I do not believe the blame is the reason for IOPS to drop. On Sep 7, 2017
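To see whether one brick is actually blaming the other after such a reboot, the pending-heal counters can be inspected directly; a sketch with a placeholder volume name:

    # Entries pending heal per brick; persistent entries on one side only
    # usually mean that brick is the one being blamed.
    gluster volume heal myvolume info
    gluster volume heal myvolume info split-brain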