search for: gfapi

Displaying 20 results from an estimated 75 matches for "gfapi".

2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote: > On 4/9/2018 2:45 AM, Alex K wrote: > Hey Alex, > > With two nodes, the setup works but both sides go down when one node is > missing. Still I set the below two params to none and that solved my issue: > > cluster.quorum-type: none > cluster.server-quorum-type: none > > yes this disables
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote: Hey Alex, With two nodes, the setup works but both sides go down when one node is missing. Still I set the below two params to none and that solved my issue: cluster.quorum-type: none cluster.server-quorum-type: none Thank you for that. Cheers, Tom > Hi, > > You need 3 nodes at least to have quorum enabled. In 2 node setup you > need to
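For reference, the two options quoted above are ordinary volume options set through the gluster CLI; a minimal sketch, using the volume name gv01 from this thread (note that disabling quorum on a two-node replica restores availability at the cost of split-brain protection):

    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none

With only two servers, the commonly recommended alternative is a third arbiter node (replica 3 arbiter 1), which keeps quorum enabled without storing a full third copy of the data.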
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...ew of other errors in apps using the gluster volume. These errors include: 02/05/2018 23:10:52 : epoch 5aea7bd5 : nfs02.nix.my.dom : ganesha.nfsd-5891[svc_12] nfs_rpc_process_request :DISP :INFO :Could not authenticate request... rejecting with AUTH_STAT=RPCSEC_GSS_CREDPROBLEM ==> ganesha-gfapi.log <== [2018-05-03 04:32:18.009245] I [MSGID: 114021] [client.c:2369:notify] 0-gv01-client-0: current graph is no longer active, destroying rpc_client [2018-05-03 04:32:18.009338] I [MSGID: 114021] [client.c:2369:notify] 0-gv01-client-1: current graph is no longer active, destroying rpc_clien...
2013 Dec 18
1
gfapi from non-root
How does one run a gfapi app without being root? I've set server.allow-insecure on on the server side (and bounced all gluster processes). Is there something else required? My test program just stats a file on the cluster volume. It works as root and fails as a normal user. Local log file shows a message about fai...
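The usual recipe for unprivileged gfapi clients needs a change at both layers: the volume option mentioned above plus a glusterd-level setting, followed by a restart. A hedged sketch (the volume name "testvol" is assumed):

    gluster volume set testvol server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on each server:
    #   option rpc-auth-allow-insecure on
    # then restart glusterd

A stat test like the one described can be written against libgfapi directly; a minimal C sketch (volume, host, and file path are placeholders):

    /* build: gcc stat-test.c $(pkg-config --cflags --libs glusterfs-api) */
    #include <glusterfs/api/glfs.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        glfs_t *fs = glfs_new("testvol");          /* volume name (assumed) */
        if (!fs) return 1;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) != 0) { perror("glfs_init"); return 1; }
        struct stat st;
        if (glfs_stat(fs, "/somefile", &st) != 0)  /* path is a placeholder */
            perror("glfs_stat");
        else
            printf("size: %lld\n", (long long)st.st_size);
        glfs_fini(fs);
        return 0;
    }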
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VM's) 2 NAS boxes running Ubuntu (13.04 and 13.10) Since Gluster NFS does server-side replication, I'll put
2017 Sep 09
2
GlusterFS as virtual machine storage
...illed another one during FUSE test, so it had to >> crash immediately (only one of three nodes were actually up). This >> definitely happened for the first time (only one node had been killed >> yesterday). >> >> Using FUSE seems to be OK with replica 3. So this can be gfapi related >> or maybe rather libvirt related. >> >> I tried ioengine=gfapi with fio and job survived reboot. >> >> >> -ps >> > > So, to recap: > - with gfapi, your VMs crash/mount read-only with a single node failure; > - with gfapi also, fio se...
2014 Mar 20
1
Optimizing Gluster (gfapi) for high IOPS
Hey folks, We've been running VM's on qemu using a replicated gluster volume connecting using gfapi and things have been going well for the most part. Something we've noticed though is that we have problems with many concurrent disk operations and disk latency. The latency gets bad enough that the process eats the cpu and the entire machine stalls. The place where we've seen it the worst...
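For VM-hosting workloads like this one, a common first step is GlusterFS's bundled "virt" option group, which applies a set of options tuned for image files in one command; a sketch (the volume name is a placeholder):

    gluster volume set datastore group virt

Whether it helps with the latency stalls described here depends on version and workload, so it's a starting point rather than a fix.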
2017 Sep 09
0
GlusterFS as virtual machine storage
...yesterday and now killed another one during FUSE test, so it had to > crash immediately (only one of three nodes were actually up). This > definitely happened for the first time (only one node had been killed > yesterday). > > Using FUSE seems to be OK with replica 3. So this can be gfapi related > or maybe rather libvirt related. > > I tried ioengine=gfapi with fio and job survived reboot. > > > -ps So, to recap: - with gfapi, your VMs crash/mount read-only with a single node failure; - with gfapi also, fio seems to have no problems; - with native FUSE clie...
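The fio test referenced above uses fio's gfapi engine, which talks to the volume through libgfapi and so exercises the same I/O path as qemu, bypassing FUSE. A minimal job sketch, assuming fio was built with gfapi support and a volume gv01 served from host node1 (all names are placeholders):

    [gfapi-test]
    ioengine=gfapi
    volume=gv01
    brick=node1
    rw=randwrite
    bs=4k
    size=1g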
2017 Sep 09
2
GlusterFS as virtual machine storage
...the node I was shutting yesterday and now killed another one during FUSE test, so it had to crash immediately (only one of three nodes were actually up). This definitely happened for the first time (only one node had been killed yesterday). Using FUSE seems to be OK with replica 3. So this can be gfapi related or maybe rather libvirt related. I tried ioengine=gfapi with fio and job survived reboot. -ps On Sat, Sep 9, 2017 at 8:05 AM, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi, > > On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: >> Pavel....
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really, using libgfapi and it's been working perfectly fine. And trust me, there had been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: > Well, tha...
2017 Sep 09
0
GlusterFS as virtual machine storage
Well, that makes me feel better. I've seen all these stories here and on Ovirt recently about VMs going read-only, even on fairly simple layouts. Each time, I've responded that we just don't see those issues. I guess the fact that we were lazy about switching to gfapi turns out to be a potential explanation <grin> -wk On 9/9/2017 6:49 AM, Pavel Szalbot wrote: > Yes, this is my observation so far. > > On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it > <mailto:g.danti at assyoma.it>> wrote: > > &g...
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>) We are also typically on a somewhat slower GlusterFS LAN network (bonded 2x1G, jumbo frames) so that may be a factor. I'll try to set up a trusted pool to test libgfapi soon. I'm curious as to how much faster it is, but the fuse mount is fast enough, dirt simple to use, and just works on all VM ops such as migration, snaps etc, so there hasn't been a compelling need to squeeze out a few more I/Os. On 9/9/2017 3:08 PM, lemonnierk at ulrar.net wrot...
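The FUSE-versus-libgfapi choice discussed in this thread shows up directly in the libvirt domain XML: a FUSE-backed disk is an ordinary file on the glusterfs mount, while libgfapi uses a network disk. A hedged sketch (host, volume, and image names are placeholders):

    <!-- FUSE: the image is a plain file on a mounted volume -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/gv0/vm1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- libgfapi: qemu opens the image through gluster directly -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gv0/vm1.qcow2'>
        <host name='node1' port='24007'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>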
2017 Oct 02
0
nfs-ganesha locking problems
...y in use > Linux-x86_64 Error: 37: No locks available > Additional information: 10 > ORA-27037: unable to obtain file status > Linux-x86_64 Error: 2: No such file or directory > Additional information: 3 > Do you see any errors/warnings in any of the logs - ganesha.log, ganesha-gfapi.log and brick logs? Also if the issue is reproducible, please collect tcpdump for that duration on the node where nfs-ganesha server is running. Thanks, Soumya
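For the capture requested here, something along these lines on the nfs-ganesha node is the usual approach (client address and output path are placeholders; the broad host filter is deliberate, since NFSv3 locking traffic uses NLM side ports rather than 2049 alone):

    tcpdump -i any -s 0 -w /tmp/ganesha-locks.pcap host <oracle-client-ip>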
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID: 114020] [client.c:2356:notify] 0-testvol-client-0: parent t...
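A quick way to check the libgfapi path independently of libvirt is qemu-img with a gluster URI; a sketch reusing the volume name from the log above (server and image names are placeholders):

    qemu-img info gluster://gluster-server/testvol/vm-disk.qcow2

If this fails with the same graph messages, the regression sits in the gluster server/libgfapi combination rather than in libvirt.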
2017 Oct 02
1
nfs-ganesha locking problems
...Oracle Servers Clients = 10.30.29.125,10.30.28.25,10.30.28.64,10.30.29.123,10.30.28.21,10.30.28.81,10.30.29.124,10.30.28.82,10.30.29.111; Access_Type = RW; } } [root at chvirnfsprd12 etc]# [root at chvirnfsprd12 log]# grep '^\[2017-10-02 [12]' ganesha-gfapi.log [2017-10-02 18:49:12.855174] I [MSGID: 104043] [glfs-mgmt.c:565:glfs_mgmt_getspec_cbk] 0-gfapi: No change in volfile, continuing [2017-10-02 18:49:12.862051] I [MSGID: 104043] [glfs-mgmt.c:565:glfs_mgmt_getspec_cbk] 0-gfapi: No change in volfile, continuing [2017-10-02 18:50:05.789064] E [socke...
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
...rfs.so.0.0.1[7f716d8b9000+f1000] [14531.582667] ganesha.nfsd[17025]: segfault at 0 ip 00007f7cb8fa8b00 sp 00007f7c5878d5d0 error 4 in libglusterfs.so.0.0.1[7f7cb8f6c000+f1000] ganesha-gfapi.log shows the following errors: [2018-01-18 17:24:00.146094] W [inode.c:1341:inode_parent] (-->/lib64/libgfapi.so.0(glfs_resolve_at+0x278) [0x7f7cb927f0b8] -->/lib64/libglusterfs.so.0(glusterfs_normalize_dentry+0x8e) [0x7f7cb8fa8aee] -->/lib64/libglusterfs.so.0(inode_parent+0xda) [0x7f7cb8fa670a] ) 0-gfapi: inode not found [2018-01-18 17:24:00.146210] E [inode.c:2567:inode_parent_null_check] (-->/l...
2018 Jan 19
0
Segfaults after upgrade to GlusterFS 3.10.9
Hi Frank, It will be much easier to debug if you have the core file with you. It looks like the crash is coming from the gfapi stack. If there is a core file, can you please share a backtrace (bt) of it. Regards, Jiffin On Thursday 18 January 2018 11:18 PM, Frank Wall wrote: > Hi, > > after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time: > > [12407.918249] ganesha.nfsd[38104]: segfault...
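For reference, producing the requested backtrace from a core file is the standard gdb exercise (binary and core paths are placeholders; matching debuginfo packages should be installed first):

    gdb /usr/bin/ganesha.nfsd /path/to/core
    (gdb) bt
    (gdb) thread apply all bt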
2017 Sep 10
1
GlusterFS as virtual machine storage
Hey guys, I got another "reboot crash" with gfapi and this time libvirt-3.2.1 (from cbs.centos.org). Is there anyone who can audit the libgfapi usage in libvirt? :-) WK: I use bonded 2x10Gbps and I do get crashes only in heavy I/O situations (fio). Upgrading the system (apt-get dist-upgrade) was OK, so this might even be related to the amount of IOPS. -...
2013 Sep 10
4
compiling samba vfs module
hi All, The system is Ubuntu 12.04. I downloaded and extracted the source packages of samba and glusterfs and built glusterfs, so I have the necessary structure: glusterfs version is 3.4 and it's from ppa. # ls /data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h /data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h Unfortunately I'm
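A quick way to confirm that a staged tree like the one above is discoverable by a build system is pkg-config, since build systems typically locate libgfapi through its glusterfs-api pkg-config module; a hedged sketch, assuming the staged tree actually contains the .pc file (the path below mirrors the one in the post):

    export PKG_CONFIG_PATH=/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/lib/pkgconfig
    pkg-config --cflags --libs glusterfs-api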
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...ucture setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (Opennebula) - we currently run: 1x2, Type: Replicate volume Current Versions: glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [package] 2.5+dfsg-5ubuntu10.2glusterfs3.7.14xenial1 What we need: (New versions) - upgrade GlusterFS to 3.12 LTM version (Ubuntu 16.04 LTS packages are EOL - see https:...
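For the third-replica step mentioned in the subject, the standard command converts a 1x2 replicate volume to 1x3 and then triggers a full heal; a sketch (volume name and brick path are placeholders):

    gluster volume add-brick <volname> replica 3 node3:/pool/gluster/brick
    gluster volume heal <volname> full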