
Displaying 20 results from an estimated 1000 matches similar to: "Optimizing Gluster (gfapi) for high IOPS"

2013 Dec 18
1
gfapi from non-root
How does one run a gfapi app without being root? I've set server.allow-insecure on on the server side (and bounced all gluster processes). Is there something else required? My test program just stats a file on the cluster volume. It works as root and fails as a normal user. Local log file shows a message about failing to bind a privileged port. -K
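The "privileged port" message usually means an unprivileged client cannot bind a port below 1024, so the bricks and glusterd have to be told to accept insecure (>1024) client ports. A minimal sketch of the settings commonly suggested for this, with a placeholder volume name gv0 (check your release's docs, behaviour differs between versions):

    # allow gfapi clients connecting from non-privileged (>1024) ports
    gluster volume set gv0 server.allow-insecure on
    # on each server, also permit insecure ports for the management daemon:
    # add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol
    # and restart glusterd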
2017 Sep 10
1
GlusterFS as virtual machine storage
Hey guys, I got another "reboot crash" with gfapi, and this time libvirt-3.2.1 (from cbs.centos.org). Is there anyone who can audit the libgfapi usage in libvirt? :-) WK: I use bonded 2x10Gbps and I do get crashes only in heavy I/O situations (fio). Upgrading the system (apt-get dist-upgrade) was OK, so this might even be related to the amount of IOPS. -ps On Sun, Sep 10, 2017 at 6:37 AM, WK
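The thread does not quote the exact fio job; a generic random-write job of the kind that produces this sort of sustained IOPS load inside a guest might look like the following (file name, size and queue depths are placeholders):

    fio --name=randwrite --filename=/var/tmp/fio.test --size=2G \
        --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting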
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the FUSE mount <grin>). We are also typically on a somewhat slower GlusterFS LAN network (bonded 2x1G, jumbo frames), so that may be a factor. I'll try to set up a trusted pool to test libgfapi soon. I'm curious as to how much faster it is, but the FUSE mount is fast enough, dirt simple to use, and just works on all VM ops such as
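For reference, the FUSE setup being described is just a native glusterfs mount; a sketch with placeholder host, volume and mountpoint names:

    # one-off mount
    mount -t glusterfs gluster1:/gv0 /mnt/vmstore
    # or the equivalent /etc/fstab entry
    # gluster1:/gv0  /mnt/vmstore  glusterfs  defaults,_netdev  0 0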
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really, we're using libgfapi and it's been working perfectly fine. And trust me, there have been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing? A new bug in the newer gluster releases that doesn't affect our 3.7.15? On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: > Well, that makes me feel better. > > I've seen all these
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote: Hey guys, Returning to this topic after disabling the quorum: cluster.quorum-type: none cluster.server-quorum-type: none I've run into a number of gluster errors (see below). I'm using gluster as the backend for my NFS storage. I have gluster running on two nodes, nfs01 and nfs02. It's mounted on /n on each host. The path /n is
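For context, the quoted settings are the ones that disable quorum enforcement on a volume; a sketch of how they are applied and how the usual defaults are restored (gv01 is the volume from this thread):

    # disable quorum (what was done above; risky on a two-node replica)
    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none
    # the commonly recommended settings once a third node or arbiter exists
    # gluster volume set gv01 cluster.quorum-type auto
    # gluster volume set gv01 cluster.server-quorum-type server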
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly sub-par compared to my experience. I did some testing with gluster 3.12 and Ovirt 3.9 on my running production cluster when I enabled libgfapi; even my pre-gfapi numbers are significantly better than what Tony is reporting: Before using gfapi: ]# dd if=/dev/urandom of=test.file bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824
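As an aside, /dev/urandom can itself be CPU-bound and hide the storage's real sequential-write speed; a variant often used in these comparisons pre-generates the data and then writes it with direct I/O (paths are placeholders):

    # generate the test data once, outside the timed run
    dd if=/dev/urandom of=/var/tmp/random.src bs=1M count=1024
    # timed write to the gluster-backed path, bypassing the page cache
    dd if=/var/tmp/random.src of=/gluster/mount/test.file bs=1M count=1024 oflag=direct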
2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello, When approaching hosting providers for services, the first question many of them asked us was about the amount of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged io operations per second" as seen in "iostat -x" and collectd (varies between the different components of the system, we have a few such
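For anyone repeating this kind of measurement, the figures being quoted correspond to the merged-request columns of iostat's extended output; a minimal sketch:

    # extended per-device statistics at 1-second intervals:
    # r/s and w/s are read/write IOPS, rrqm/s and wrqm/s are merged requests
    iostat -x 1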
2013 Mar 18
2
Disk iops performance scalability
Hi, Seeing a drop-off in iops when more vcpus are added: 3.8.2 kernel/xen-4.2.1/single domU/LVM backend/8GB RAM domU/2GB RAM dom0 dom0_max_vcpus=2 dom0_vcpus_pin domU 8 cores fio result 145k iops domU 10 cores fio result 99k iops domU 12 cores fio result 89k iops domU 14 cores fio result 81k iops ioping . -c 3 4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms 4096 bytes
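One variable worth checking in a test like this is whether the guest's vCPUs end up sharing the physical CPUs that dom0 is pinned to; a hypothetical check and pin with xl (domain name and CPU numbers are placeholders):

    # show current vCPU-to-pCPU placement for all domains
    xl vcpu-list
    # pin, e.g., vCPU 0 of the guest away from dom0's CPUs 0-1
    xl vcpu-pin domU 0 2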
2014 Nov 18
1
Storage IOPs Calculation for Qmail Server
Dear DovecotORG, In my organization we are about to implement a Qmail server. * The number of current users will be 800; in future it may increase up to 1200. * The number of concurrent users will be 300. I am the engineer deploying Qmail on a Linux server. I need to tell the storage team the IOPS requirement. I requested 8TB usable space for the mail storage (can
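A back-of-envelope calculation is usually how this gets answered; the per-user rates below are purely illustrative assumptions, not measurements, so substitute figures from your own message logs:

    concurrent=300            # concurrent users, from the post
    msgs_per_user_hour=20     # assumed deliveries + reads per user per hour
    iops_per_msg=10           # assumed writes/fsyncs/index updates per message
    echo "$(( concurrent * msgs_per_user_hour * iops_per_msg / 3600 )) average IOPS (size for peaks several times higher)"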
2004 Feb 26
0
Iops Vorbis player
Hi, does anyone have, or know much about, the Iops portable players? Are they better/worse than the players from iriver? I am looking for a USB-thumb-drive style of music player (that plays vorbis of course). From what I have been able to gather from the Iops website, it suits this purpose, but the site is all in Korean. I was hoping to find someone who had actually purchased one of these
2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks. I've seen Richard's raidoptimiser program, but I've only seen results for random read iops performance, and I'm particularly interested in write
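A common way to watch a ZFS pool while a load generator (fio or similar) runs against it is zpool's own statistics; a minimal sketch with a placeholder pool name:

    # per-vdev bandwidth and IOPS, refreshed every second
    zpool iostat -v tank 1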
2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi, What are the disk IOPS required for Asterisk call recording? I am trying to find out the number of disks required in a RAID array to record 500 calls. Is there a formula to calculate the IOPS required by Asterisk call recording? This will help me find the IOPS for different scales. If I assume that Asterisk will write data to disk every second for each call, I will need a disk array to support a minimum
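Under the one-write-per-second-per-call assumption stated above, the arithmetic is straightforward; the safety factor below is an assumption added for bursts and RAID write penalty:

    calls=500
    writes_per_call_per_sec=1   # assumption from the post
    headroom=2                  # assumed factor for bursts / RAID write penalty
    echo "$(( calls * writes_per_call_per_sec * headroom )) write IOPS to plan for"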
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching at the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
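For anyone following along, the two tunings described above are usually applied like this on Solaris-era ZFS (the dataset name is a placeholder, and the /etc/system tunable needs a reboot to take effect):

    # cache only metadata, not file data, in the ARC for this dataset
    zfs set primarycache=metadata tank/oradata
    # disable file-level prefetch (Solaris/OpenSolaris tunable)
    echo "set zfs:zfs_prefetch_disable = 1" >> /etc/system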
2017 Sep 09
0
GlusterFS as virtual machine storage
On 09-09-2017 09:09, Pavel Szalbot wrote: > Sorry, I did not start the glusterfsd on the node I was shutting > down yesterday and now killed another one during the FUSE test, so it had to > crash immediately (only one of three nodes was actually up). This > definitely happened for the first time (only one node had been killed > yesterday). > > Using FUSE seems to be OK with
2017 Sep 09
2
GlusterFS as virtual machine storage
Yes, this is my observation so far. On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it> wrote: > On 09-09-2017 09:09, Pavel Szalbot wrote: > >> Sorry, I did not start the glusterfsd on the node I was shutting >> down yesterday and now killed another one during the FUSE test, so it had to >> crash immediately (only one of three nodes was actually
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read iops hitting the disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD. According to our tester, Oracle writes are extremely slow (high latency). Below is a snippet of iostat: r/s w/s
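For reference, a cache (L2ARC) and log (slog) device of the kind described above are normally attached like this (pool and device names are placeholders):

    zpool add tank cache c2t0d0
    zpool add tank log c2t1d0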
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID:
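When debugging this kind of regression, it can help to rule out the qemu/libgfapi path independently of libvirt; a hypothetical check (host, volume and image names are placeholders, and qemu-img must be built with gluster support):

    qemu-img info gluster://gluster1/vmstore/vm1.img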
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya, What I can say so far: it is working on a standalone system but not on the clustered system. From reading the ganesha wiki I have the impression that it is possible to change the log level without restarting ganesha. I was playing with dbus-send but so far was unsuccessful. If you can help me with that, this would be great. Here are some details about the tested machines. The nfs client
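nfs-ganesha registers the system bus name org.ganesha.nfsd, and the exported objects (including the log-level interface) differ between versions, so introspecting them first is a reasonable way to find the right call; a hypothetical starting point, using the object path from the commonly documented admin interface:

    dbus-send --system --print-reply --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/admin org.freedesktop.DBus.Introspectable.Introspect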
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VMs) 2 NAS boxes running Ubuntu (13.04 and 13.10) Since Gluster NFS does server-side replication, I'll put
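For reference, Gluster's built-in NFS server speaks NFSv3, so a client mount of the kind being described typically looks like this (host, volume and mountpoint are placeholders):

    mount -t nfs -o vers=3,proto=tcp nas01:/gv0 /mnt/vmimages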
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure - what is the reason for the iops to stop/fail? Rebooting a node is somewhat similar to updating gluster, replacing cabling etc. IMO this should not always end up with the arbiter blaming the other node, and even though I did not investigate this issue deeply, I do not believe the blame is the reason for the iops to drop. On Sep 7, 2017
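After a reboot or kill of a node, the usual first check is what the surviving bricks and the arbiter have flagged for healing; a sketch with a placeholder volume name:

    gluster volume heal gv0 info
    gluster volume heal gv0 info split-brain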