search for: gfapi

Displaying 20 results from an estimated 79 matches for "gfapi".

2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote: > On 4/9/2018 2:45 AM, Alex K wrote: > Hey Alex, > > With two nodes, the setup works but both sides go down when one node is > missing. Still I set the below two params to none and that solved my issue: > > cluster.quorum-type: none > cluster.server-quorum-type: none > > yes this disables
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote: Hey Alex, With two nodes, the setup works but both sides go down when one node is missing. Still I set the below two params to none and that solved my issue: cluster.quorum-type: none cluster.server-quorum-type: none Thank you for that. Cheers, Tom > Hi, > > You need 3 nodes at least to have quorum enabled. In 2 node setup you > need to
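For reference, the two volume options mentioned above are applied with the gluster CLI. A minimal sketch, assuming the volume is named gv01 as in this thread:

    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none

Note that turning quorum off on a two-node replica also removes its split-brain protection, which is why the reply below recommends a third node (or an arbiter brick).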
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...ew of other errors in apps using the gluster volume. These errors include: 02/05/2018 23:10:52 : epoch 5aea7bd5 : nfs02.nix.my.dom : ganesha.nfsd-5891[svc_12] nfs_rpc_process_request :DISP :INFO :Could not authenticate request... rejecting with AUTH_STAT=RPCSEC_GSS_CREDPROBLEM ==> ganesha-gfapi.log <== [2018-05-03 04:32:18.009245] I [MSGID: 114021] [client.c:2369:notify] 0-gv01-client-0: current graph is no longer active, destroying rpc_client [2018-05-03 04:32:18.009338] I [MSGID: 114021] [client.c:2369:notify] 0-gv01-client-1: current graph is no longer active, destroying rpc_clien...
2013 Dec 18
1
gfapi from non-root
How does one run a gfapi app without being root? I've set server.allow-insecure on on the server side (and bounced all gluster processes). Is there something else required? My test program just stats a file on the cluster volume. It works as root and fails as a normal user. Local log file shows a message about fai...
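As a hedged sketch of the usual two-part change for unprivileged gfapi clients (the volume name myvol is an assumption): in addition to the volume option the poster already set, glusterd itself must also be told to accept connections from non-privileged ports:

    gluster volume set myvol server.allow-insecure on

    # in /etc/glusterfs/glusterd.vol on every server, then restart glusterd:
    option rpc-auth-allow-insecure on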
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VMs) 2 NAS boxes running Ubuntu (13.04 and 13.10) Since Gluster NFS does server side replication, I'll put
2017 Sep 09
2
GlusterFS as virtual machine storage
...illed another one during FUSE test, so it had to >> crash immediately (only one of three nodes were actually up). This >> definitely happened for the first time (only one node had been killed >> yesterday). >> >> Using FUSE seems to be OK with replica 3. So this can be gfapi related >> or maybe rather libvirt related. >> >> I tried ioengine=gfapi with fio and job survived reboot. >> >> >> -ps >> > > So, to recap: > - with gfapi, your VMs crash/mount read-only with a single node failure; > - with gfapi also, fio se...
2014 Mar 20
1
Optimizing Gluster (gfapi) for high IOPS
Hey folks, We've been running VMs on qemu using a replicated gluster volume, connecting via gfapi, and things have been going well for the most part. Something we've noticed, though, is that we have problems with many concurrent disk operations and disk latency. The latency gets bad enough that the process eats the CPU and the entire machine stalls. The place where we've seen it the worst...
2017 Sep 09
0
GlusterFS as virtual machine storage
...yesterday and now killed another one during FUSE test, so it had to > crash immediately (only one of three nodes were actually up). This > definitely happened for the first time (only one node had been killed > yesterday). > > Using FUSE seems to be OK with replica 3. So this can be gfapi related > or maybe rather libvirt related. > > I tried ioengine=gfapi with fio and job survived reboot. > > > -ps So, to recap: - with gfapi, your VMs crash/mount read-only with a single node failure; - with gfapi also, fio seems to have no problems; - with native FUSE clie...
2017 Sep 09
2
GlusterFS as virtual machine storage
...the node I was shutting yesterday and now killed another one during FUSE test, so it had to crash immediately (only one of three nodes were actually up). This definitely happened for the first time (only one node had been killed yesterday). Using FUSE seems to be OK with replica 3. So this can be gfapi related or maybe rather libvirt related. I tried ioengine=gfapi with fio and job survived reboot. -ps On Sat, Sep 9, 2017 at 8:05 AM, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi, > > On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: >> Pavel....
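A minimal fio job of the kind referred to here (ioengine=gfapi) might look like the following sketch; the volume, host and file names are assumptions, not taken from the thread:

    [global]
    ioengine=gfapi
    volume=gv0
    brick=gluster-node1
    filename=fio-gfapi-test
    rw=randwrite
    bs=4k
    size=1g

    [gfapi-job]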
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...ing one with another XFS on top, and only with RAW, and only inside the VM. So it isn't like data is being corrupted. However, it's hard to replace a filesystem with another like you would do if you re-install one of what may be several operating systems on that disk image. I am interested in your GFAPI information. I rebuilt RHEL9.4 qemu and changed the spec file to produce the needed gluster block package, and referred to the image file via the gluster protocol. My system got horrible scsi errors and sometimes didn't even boot from a live environment. I repeated the same failure with sles15. I d...
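For context, referring to the image via the gluster protocol means pointing qemu/qemu-img at a gluster:// URL instead of a FUSE-mounted path (this requires a qemu built with the gluster block driver, hence the rebuilt package above). A sketch with assumed host, volume and image names:

    qemu-img create -f raw gluster://gluster-node1/gv0/vm01.raw 100G
    qemu-img info gluster://gluster-node1/gv0/vm01.raw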
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really, we're using libgfapi and it's been working perfectly fine. And trust me, there have been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: > Well, tha...
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hey Erik, I am running a similar setup with no issues, with Ubuntu host systems on HPE DL380 Gen 10. I used to run libvirt/qemu via nfs-ganesha on top of gluster flawlessly. Recently I upgraded to the native GFAPI implementation, which is poorly documented, with snippets scattered all over the internet. Although I cannot provide a direct solution for your issue, I suggest trying either nfs-ganesha as a replacement for the fuse mount, or GFAPI. Happy to share libvirt/GFAPI config hints to make it happen. Bes...
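As an illustration of the kind of libvirt/GFAPI config hint being offered, a network disk using the gluster protocol in the domain XML typically looks like the sketch below; the volume, image and host names are made up:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='gv0/vm01.raw'>
        <host name='gluster-node1' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>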
2017 Sep 09
0
GlusterFS as virtual machine storage
Well, that makes me feel better. I've seen all these stories here and on Ovirt recently about VMs going read-only, even on fairly simple layouts. Each time, I've responded that we just don't see those issues. I guess the fact that we were lazy about switching to gfapi turns out to be a potential explanation <grin> -wk On 9/9/2017 6:49 AM, Pavel Szalbot wrote: > Yes, this is my observation so far. > > On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it> wrote: > > &g...
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>) We are also typically on a somewhat slower GlusterFS LAN network (bonded 2x1G, jumbo frames) so that may be a factor. I'll try to set up a trusted pool to test libgfapi soon. I'm curious as to how much faster it is, but the fuse mount is fast enough, dirt simple to use, and just works on all VM ops such as migration, snaps, etc., so there hasn't been a compelling need to squeeze out a few more I/Os. On 9/9/2017 3:08 PM, lemonnierk at ulrar.net wrot...
2017 Oct 02
0
nfs-ganesha locking problems
...y in use > Linux-x86_64 Error: 37: No locks available > Additional information: 10 > ORA-27037: unable to obtain file status > Linux-x86_64 Error: 2: No such file or directory > Additional information: 3 > Do you see any errors/warnings in any of the logs - ganesha.log, ganesha-gfapi.log and brick logs? Also if the issue is reproducible, please collect tcpdump for that duration on the node where nfs-ganesha server is running. Thanks, Soumya
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID: 114020] [client.c:2356:notify] 0-testvol-client-0: parent t...
2017 Oct 02
1
nfs-ganesha locking problems
...Oracle Servers Clients = 10.30.29.125,10.30.28.25,10.30.28.64,10.30.29.123,10.30.28.21,10.30.28.81,10.30.29.124,10.30.28.82,10.30.29.111; Access_Type = RW; } } [root at chvirnfsprd12 etc]# [root at chvirnfsprd12 log]# grep '^\[2017-10-02 [12]' ganesha-gfapi.log [2017-10-02 18:49:12.855174] I [MSGID: 104043] [glfs-mgmt.c:565:glfs_mgmt_getspec_cbk] 0-gfapi: No change in volfile, continuing [2017-10-02 18:49:12.862051] I [MSGID: 104043] [glfs-mgmt.c:565:glfs_mgmt_getspec_cbk] 0-gfapi: No change in volfile, continuing [2017-10-02 18:50:05.789064] E [socke...
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
...rfs.so.0.0.1[7f716d8b9000+f1000] [14531.582667] ganesha.nfsd[17025]: segfault at 0 ip 00007f7cb8fa8b00 sp 00007f7c5878d5d0 error 4 in libglusterfs.so.0.0.1[7f7cb8f6c000+f1000] ganesha-gfapi.log shows the following errors: [2018-01-18 17:24:00.146094] W [inode.c:1341:inode_parent] (-->/lib64/libgfapi.so.0(glfs_resolve_at+0x278) [0x7f7cb927f0b8] -->/lib64/libglusterfs.so.0(glusterfs_normalize_dentry+0x8e) [0x7f7cb8fa8aee] -->/lib64/libglusterfs.so.0(inode_parent+0xda) [0x7f7cb8fa670a] ) 0-gfapi: inode not found [2018-01-18 17:24:00.146210] E [inode.c:2567:inode_parent_null_check] (-->/l...
2018 Jan 19
0
Segfaults after upgrade to GlusterFS 3.10.9
Hi Frank, It will be very easy to debug if you have the core file. It looks like the crash is coming from the gfapi stack. If there is a core file, can you please share the backtrace (bt) from it. Regards, Jiffin On Thursday 18 January 2018 11:18 PM, Frank Wall wrote: > Hi, > > after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time: > > [12407.918249] ganesha.nfsd[38104]: segfault...
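Obtaining the requested backtrace from a core file is standard gdb usage; a sketch, assuming the core was produced by /usr/bin/ganesha.nfsd and that matching debuginfo packages are installed (the core path is a placeholder):

    gdb /usr/bin/ganesha.nfsd /path/to/core
    (gdb) bt
    (gdb) thread apply all bt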
2024 Aug 12
0
Creating a large pre-allocated qemu-img raw image takes too long and fails on fuse
...mple raw image but it can be up to 40T in size in some cases. For this experiment we'll call it 24T. When creating the image on fuse with qemu-img, using falloc preallocation, the qemu-img create fails and a fuse error results. This happens after around 3 hours. I created a simple C program using gfapi that does the fallocate of 10T and it took 1.25 hours. I didn't run tests at larger than that, as 1.25 hours is too long anyway. Using qemu-img in preallocation=falloc gfapi mode takes a long time too, similar to qemu-img in gfapi mode. However, I found if I create a 2.4T image file and then do 9 mo...
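The simple C program using gfapi is not shown in the thread; below is a rough, untested sketch of what such a preallocation program looks like with libgfapi. The volume and host names are assumptions; build with something like gcc prealloc.c -o prealloc -lgfapi.

    /* prealloc.c - preallocate a large file on a Gluster volume via libgfapi */
    #include <glusterfs/api/glfs.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        off_t size = (off_t)10 * 1024 * 1024 * 1024 * 1024;   /* 10T, as in the test above */

        glfs_t *fs = glfs_new("gv0");                          /* assumed volume name */
        if (!fs) return 1;
        glfs_set_volfile_server(fs, "tcp", "gluster-node1", 24007);  /* assumed host */
        glfs_set_logging(fs, "/tmp/prealloc-gfapi.log", 7);
        if (glfs_init(fs) != 0) { perror("glfs_init"); return 1; }

        glfs_fd_t *fd = glfs_creat(fs, "prealloc.raw", O_RDWR, 0644);
        if (!fd) { perror("glfs_creat"); glfs_fini(fs); return 1; }

        /* keep_size = 0: the file is extended to the full requested length */
        if (glfs_fallocate(fd, 0, 0, size) != 0)
            perror("glfs_fallocate");

        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }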