similar to: FUSE init failed

Displaying 20 results from an estimated 4000 matches similar to: "FUSE init failed"

2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and run ls I get: [root@ovirt share]# ls ls: reading directory .: Too many levels of symbolic links [root@ovirt share]# ls -fl ls: reading directory .: Too many levels of symbolic links total 3636 drwxr-xr-x 3 root root 16384 Jun 21 19:34 . dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 .. drwxr-xr-x 3 root root 16384 Jun 21 19:34 . dr-xr-xr-x. 21 root root 4096
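For context, Gluster's built-in NFS server in the 3.3 era speaks only NFSv3, so a mount normally has to force the version and transport. A minimal sketch with placeholder server and volume names (not the poster's actual ones):
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server:/volname /mnt/volname
    ls -l /mnt/volname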
2017 Dec 05
0
SAMBA VFS module for GlusterFS crashes
Keep in mind a local disk runs at 3, 6, or 12 Gbps, while a network connection is typically 1 Gbps. Four local disks in RAID 10 will outperform 10G Ethernet (especially with SAS drives). On December 5, 2017 6:11:38 AM EST, Riccardo Murri <riccardo.murri at uzh.ch> wrote: >Hello, > >I'm trying to set up a SAMBA server serving a GlusterFS volume. >Everything works fine if I locally
2017 Dec 06
0
SAMBA VFS module for GlusterFS crashes
On Tue, 2017-12-05 at 11:11 +0000, Riccardo Murri wrote: > Hello, > > I'm trying to set up a SAMBA server serving a GlusterFS volume. > Everything works fine if I locally mount the GlusterFS volume (`mount > -t glusterfs ...`) and then serve the mounted FS through SAMBA, but > the performance is slower by a 2x/3x compared to a SAMBA server with a > local ext4 filesystem.
2017 Dec 05
4
SAMBA VFS module for GlusterFS crashes
Hello, I'm trying to set up a SAMBA server serving a GlusterFS volume. Everything works fine if I locally mount the GlusterFS volume (`mount -t glusterfs ...`) and then serve the mounted FS through SAMBA, but the performance is 2x-3x slower than a SAMBA server with a local ext4 filesystem. I gather that the SAMBA vfs_glusterfs module can give better performance. However, as soon as I
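For reference, serving a volume through the Samba vfs_glusterfs module is usually configured along these lines in smb.conf; the share and volume names below are placeholders, not taken from this thread:
    [gvshare]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvolume
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-myvolume.log
        kernel share modes = no
With the VFS module Samba talks to the volume over libgfapi directly, bypassing the FUSE mount that the slower setup goes through.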
2018 Apr 10
0
Gluster cluster on two networks
Hi all! I have set up a replicated/distributed gluster cluster 2 x (2 + 1). CentOS 7 and gluster version 3.12.6 on the servers. All machines have two network interfaces and are connected to two different networks, 10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6) and 192.168.67.0/24 (with ldap, gluster version 3.13.1). The gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
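For reference, a 2 x (2 + 1) layout like this is normally created as a distributed replica-3 volume with one arbiter per replica set; the hostnames and brick paths below are placeholders, not the poster's:
    gluster volume create gdsvol replica 3 arbiter 1 \
        node1:/data/brick1 node2:/data/brick1 node3:/data/arbiter1 \
        node4:/data/brick2 node5:/data/brick2 node6:/data/arbiter2
    gluster volume start gdsvol
Which network the cluster ends up using is determined by the hostnames (or addresses) given to gluster peer probe and to the client mount, which is what the rest of this thread turns on.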
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails of the logs. The logs are attached to this mail and here is the other information: # gluster volume info home Volume Name: home Type: Replicate Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a Status: Started Snapshot Count: 1 Number of Bricks: 1 x 3 = 3 Transport-type: tcp
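The commands typically used to see what is still pending heal on a replicate volume such as home (a sketch; exact output depends on the release):
    gluster volume heal home info
    gluster volume heal home info split-brain
    gluster volume heal home        # trigger an index heal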
2018 Apr 10
0
Gluster cluster on two networks
Marcus, Can you share the server-side gluster peer probe and client-side mount command lines? On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all! > > I have setup a replicated/distributed gluster cluster 2 x (2 + 1). > > Centos 7 and gluster version 3.12.6 on server. > > All machines have two network interfaces and connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes. On the first server (urd-gds-001): gluster peer probe urd-gds-000 gluster peer probe urd-gds-002 gluster peer probe urd-gds-003 gluster peer probe urd-gds-004 gluster pool list (from urd-gds-001): UUID Hostname State bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected 2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
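The client-side mount command line asked for above usually looks like this for a FUSE mount; the volume name is a placeholder since it is not shown in the excerpt:
    mount -t glusterfs urd-gds-001:/volname /mnt/volname
    # or with fallback volfile servers:
    mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 urd-gds-001:/volname /mnt/volname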
2018 Feb 13
0
Failed to get quota limits
Hi, A part of the log won't be enough to debug the issue; I need the whole set of log messages to date. You can send them as attachments. Yes, the quota.conf is a binary file. And I need the volume status output too. On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote: > Hi Hari, > Sorry for not providing you more details from the start. Here below you will > find all
2011 Jul 11
0
Instability when using RDMA transport
I've run into a problem with Gluster stability with the RDMA transport. Below is a description of the environment, a simple script that can replicate the problem, and log files from my test system. I can work around the problem by using the TCP transport over IPoIB but would like some input on what may be making the RDMA transport fail in this case. ===== Symptoms ===== - Error from test
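The TCP-over-IPoIB workaround versus native RDMA generally comes down to the volume's transport and the client mount option; a sketch with placeholder names, not the poster's actual script:
    gluster volume create testvol transport tcp,rdma node1:/export/brick1
    gluster volume start testvol
    # client mount pinned to TCP over IPoIB (the workaround)
    mount -t glusterfs -o transport=tcp node1-ib:/testvol /mnt/testvol
    # client mount over RDMA (the failing case)
    mount -t glusterfs -o transport=rdma node1-ib:/testvol /mnt/testvol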
2018 Jan 23
0
Understanding client logs
Marcus, Please paste the name-version-release of the primary glusterfs package on your system. If possible, also describe the typical workload that happens at the mount via the user application. On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > I have problem pin pointing an error, that users of > my system experience processes that
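On an RPM-based system the name-version-release being asked for can be pulled with something like the following (assuming RPM packaging, which the thread does not state explicitly):
    rpm -q glusterfs glusterfs-fuse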
2018 Apr 09
2
Gluster cluster on two networks
Hi all! I have set up a replicated/distributed gluster cluster 2 x (2 + 1). CentOS 7 and gluster version 3.12.6 on the servers. All machines have two network interfaces and are connected to two different networks, 10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6) and 192.168.67.0/24 (with ldap, gluster version 3.13.1). The gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2018 Feb 13
0
Failed to get quota limits
Yes, I need the log files from that period. The log files rotated after hitting the issue aren't necessary, but the ones from before you hit it are needed (not just the one from when you hit it, but the earlier ones as well). Yes, you have to do a stat from the client through the FUSE mount. On Tue, Feb 13, 2018 at 3:56 PM, mabi <mabi at protonmail.ch> wrote: > Thank you for your answer. This
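The stat through the FUSE mount referred to here is simply run on the client against the directory carrying the quota; myvolume and /directory are the names used elsewhere in this thread, the server name is a placeholder:
    mount -t glusterfs server:/myvolume /mnt/myvolume
    stat /mnt/myvolume/directory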
2018 Feb 13
0
Failed to get quota limits
I tried to set the limits as you suggested by running the following command. $ sudo gluster volume quota myvolume limit-usage /directory 200GB volume quota : success but then when I list the quotas there is still nothing, so nothing really happened. I also tried to run stat on all directories which have a quota but nothing happened either. I will send you all the other logfiles tomorrow as
2018 Jan 23
2
Understanding client logs
Hi all, I have a problem pinpointing an error: users of my system experience processes that crash. The thing that has changed since the crashes started is that I added a gluster cluster. Of course the users started to attack my gluster cluster. I started looking at logs, starting from the client side. I just need help understanding how to read them the right way. I can see that every ten
2018 Feb 13
2
Failed to get quota limits
Hi Hari, Sorry for not providing you more details from the start. Here below you will find all the relevant log entries and info. Regarding the quota.conf file I have found one for my volume but it is a binary file. Is it supposed to be binary or text? Regards, M. *** gluster volume info myvolume *** Volume Name: myvolume Type: Replicate Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5 Status:
2013 May 13
0
Fwd: Seeing non-priv port + auth issue in the gluster brick log
Forwarding to Gluster-users, in the hope that many more people can see this and hopefully someone can provide some clues. Thanks, Deepak -------- Original Message -------- Subject: [Gluster-devel] Seeing non-priv port + auth issue in the gluster brick log Date: Sat, 11 May 2013 12:43:20 +0530 From: Deepak C Shetty <deepakcs at linux.vnet.ibm.com> Organization: IBM India Pvt. Ltd. To: Gluster
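When clients connect from non-privileged ports, the settings usually involved are the per-volume and glusterd insecure-port options; whether they apply to this particular report depends on the full log, so treat this as a sketch with a placeholder volume name:
    gluster volume set myvol server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on each server:
    #   option rpc-auth-allow-insecure on
    # then restart glusterd on that server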
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files but for last week? I think logrotate rotates them on a weekly basis. The only two quota commands we use are the following: gluster volume quota myvolume limit-usage /directory 10GB gluster volume quota myvolume list basically to set a new quota or to list the current
2018 Feb 13
2
Failed to get quota limits
Were you able to set new limits after seeing this error? On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowtham at redhat.com> wrote: > Yes, I need the log files in that duration, the log rotated file after > hitting the > issue aren't necessary, but the ones before hitting the issues are needed > (not just when you hit it, the ones even before you hit it). > > Yes,
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi, I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster. Now I am getting permission denied errors and I see the following in the client logs: [2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument) [2011-02-24 09:59:11.851656] I
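For context, the usual sequence after growing a distributed volume is add-brick followed by a rebalance so that the layout and data are spread onto the new bricks; loader is the volume name visible in the log lines, the hostnames and brick paths are placeholders, and the exact rebalance sub-commands vary slightly between releases:
    gluster volume add-brick loader node3:/export/brick node4:/export/brick
    gluster volume rebalance loader start
    gluster volume rebalance loader status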