similar to: Fedora 17 GlusterFS 3.3.0 problems

Displaying 20 results from an estimated 400 matches similar to: "Fedora 17 GlusterFS 3.3.0 problems"

2017 Dec 05
4
SAMBA VFS module for GlusterFS crashes
Hello, I'm trying to set up a Samba server serving a GlusterFS volume. Everything works fine if I locally mount the GlusterFS volume (`mount -t glusterfs ...`) and then serve the mounted FS through Samba, but performance is 2-3x slower than a Samba server on a local ext4 filesystem. I gather that Samba's vfs_glusterfs module can give better performance. However, as soon as I
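For reference, a minimal smb.conf sketch of a vfs_glusterfs share; the share name, the volume name `gvol`, and the log path are illustrative assumptions, not details from the post:

```
# /etc/samba/smb.conf -- hypothetical share using vfs_glusterfs
[gluster-share]
    # talk to glusterd directly instead of going through a FUSE mount
    vfs objects = glusterfs
    glusterfs:volume = gvol
    glusterfs:logfile = /var/log/samba/glusterfs-gvol.log
    # path is interpreted relative to the root of the Gluster volume
    path = /
    read only = no
    # commonly recommended so kernel share modes don't conflict with the VFS
    kernel share modes = no
```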
2018 Apr 09
2
Gluster cluster on two networks
Hi all! I have set up a replicated/distributed gluster cluster 2 x (2 + 1), with CentOS 7 and gluster version 3.12.6 on the servers. All machines have two network interfaces and are connected to two different networks: 10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6) and 192.168.67.0/24 (with LDAP, gluster version 3.13.1). The gluster cluster was created on the 10.10.0.0/16 net, gluster peer
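For context, glusterd pins peer traffic to whatever addresses the probed hostnames resolve to, so a sketch might look like this (the IP addresses are illustrative; the urd-gds-* names come from later in this thread):

```
# /etc/hosts on each node: peer names resolve on the storage net
10.10.1.1   urd-gds-001
10.10.1.2   urd-gds-002

# peers probed via those names keep cluster traffic on 10.10.0.0/16
gluster peer probe urd-gds-002
gluster peer status
```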
2018 Apr 10
0
Gluster cluster on two networks
Marcus, Can you share the server-side gluster peer probe and client-side mount command lines? On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all! > > I have set up a replicated/distributed gluster cluster 2 x (2 + 1). > > CentOS 7 and gluster version 3.12.6 on server. > > All machines have two network interfaces and are connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes. On the first server (urd-gds-001):

gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004

gluster pool list (from urd-gds-001):

UUID                                  Hostname     State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002  Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003  Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.

gluster volume status

Status of volume: glustervol
Gluster process                        Port   Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick    24009  Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
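A plausible next step for a volume in this state is the self-heal commands; a sketch using the volume name from the post (output will vary):

```
# entries still pending heal, listed per brick
gluster volume heal glustervol info

# kick off a heal of pending entries; 'full' forces a complete sweep
gluster volume heal glustervol
gluster volume heal glustervol full
```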
2018 Jan 23
2
Understanding client logs
Hi all, I have a problem pinpointing an error: users of my system experience processes that crash. The thing that has changed since the crashes started is that I added a gluster cluster, and of course the users started to attack my gluster cluster. I started looking at logs, starting from the client side. I just need help understanding how to read them the right way. I can see that every ten
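For reference, a sketch of where the FUSE client logs live by default and how to filter them; the mount path /mnt/gluster is an assumed example:

```
# the FUSE client logs to a file named after the mount point,
# with '/' replaced by '-'; e.g. a mount at /mnt/gluster logs to:
tail -f /var/log/glusterfs/mnt-gluster.log

# show only warning and error entries ("] W [" / "] E [")
grep -E '\] (W|E) \[' /var/log/glusterfs/mnt-gluster.log
```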
2018 Jan 23
0
Understanding client logs
Marcus, Please paste the name-version-release of the primary glusterfs package on your system. If possible, also describe the typical workload that happens at the mount via the user application. On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > I have a problem pinpointing an error: users of > my system experience processes that
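On an RPM-based system such as the CentOS mentioned in this thread, a sketch of that query (package names may differ per distribution):

```
# each prints the package's name-version-release,
# e.g. glusterfs-3.12.4-1.el7.x86_64 (version shown is illustrative)
rpm -q glusterfs glusterfs-fuse
```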
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please?

1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd and glfsheal logs

Regards, Karthik. On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health-report tool and see if it diagnoses any issues in your setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague; we will be checking this and respond next
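Assuming the tool referred to is the gluster-health-report project, a sketch of installing and running it on each node (the installation method is an assumption; adjust for your environment):

```
# install the health-report tool on each node and run it locally
pip install gluster-health-report
gluster-health-report
```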
2013 Mar 25
1
A problem when mounting glusterfs via NFS
Hi, I run glusterfs with four nodes in a 2x2 Distributed-Replicate setup. I mounted it via FUSE and ran some tests; it was OK. However, when I mounted it via NFS, a problem appeared: when I copied 200G of files to the glusterfs volume, the glusterfs process on the server node (the one mounted by the client) was killed by the OOM killer, and all terminals of the client hung. Testing many times, I got the
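For reference, Gluster's built-in NFS server serves NFSv3 over TCP only, so the client mount typically pins those options; the server and volume names below are placeholders:

```
# mount a Gluster volume via the built-in gNFS server (NFSv3, TCP only)
mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/nfs
```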
2017 Oct 17
2
Distribute rebalance issues
Hi, I have a rebalance that has failed on one peer twice now. Rebalance logs are below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick but immediately stops the rebalance on that peer instead of waiting for reconnection, which happens a second or so later. Is this normal behaviour? So far it has been the same server and the same (remote)
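A sketch of the commands usually involved in inspecting and restarting a failed rebalance; the volume name `myvol` is a placeholder:

```
# per-node progress and failure state of the rebalance
gluster volume rebalance myvol status

# a failed rebalance is not resumed on reconnect; restart it explicitly
gluster volume rebalance myvol start
```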
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users, sorry for the long post; I have run out of ideas here. Kindly let me know if I am looking at the right places for logs, and any suggested actions. Thanks. A sudden power loss caused a hard reboot, and now the volume does not start. GlusterFS 3.3.1 on CentOS 6.1, transport: TCP, sharing the volume over NFS for VM storage (VHD files). Type: distributed, only 1 node (brick), XFS (LVM
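A minimal recovery sketch under these assumptions (the volume name `myvol` is a placeholder; the log paths are the 3.3-era defaults):

```
# glusterd and brick logs usually state why the volume won't start
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
less /var/log/glusterfs/bricks/*.log

# if the brick filesystem (XFS on LVM here) mounts cleanly,
# a forced start can bring the volume back up
gluster volume start myvol force
```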
2018 Feb 13
2
Failed to get quota limits
Hi Hari, Sorry for not providing more details from the start. Below you will find all the relevant log entries and info. Regarding the quota.conf file, I have found one for my volume, but it is a binary file. Is it supposed to be binary or text? Regards, M.

*** gluster volume info myvolume ***

Volume Name: myvolume
Type: Replicate
Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
Status:
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2018 Feb 13
0
Failed to get quota limits
Hi, Part of the log won't be enough to debug the issue; I need the whole set of log messages to date. You can send them as attachments. Yes, quota.conf is a binary file. And I need the volume status output too. On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote: > Hi Hari, > Sorry for not providing more details from the start. Below you will > find all
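Since quota.conf stores raw GFIDs rather than text, a hex dump is the practical way to peek at it; a sketch using the usual glusterd path and the volume name from this thread:

```
# quota.conf holds the GFIDs of directories carrying limits,
# so inspect it as hex rather than as text
hexdump -C /var/lib/glusterd/vols/myvolume/quota.conf | head
```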
2018 Feb 12
0
Failed to get quota limits
Hi, Can you provide more information: the volume configuration, the quota.conf file, and the log files? On Sat, Feb 10, 2018 at 1:05 AM, mabi <mabi at protonmail.ch> wrote: > Hello, > > I am running GlusterFS 3.10.7 and just noticed, by doing a "gluster volume quota <volname> list", that my quotas on that volume are broken. The command returns no output and no errors
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files for last week? I think logrotate rotates them on a weekly basis. The only two quota commands we use are the following:

gluster volume quota myvolume limit-usage /directory 10GB
gluster volume quota myvolume list

basically to set a new quota or to list the current
2018 Feb 13
2
Failed to get quota limits
Were you able to set new limits after seeing this error? On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowtham at redhat.com> wrote: > Yes, I need the log files from that duration; the log-rotated files after > hitting the > issue aren't necessary, but the ones from before hitting the issue are needed > (not just the ones from when you hit it, the ones from even before you hit it). > > Yes,
2012 Jan 04
0
FUSE init failed
Hi, I'm having an issue using the GlusterFS native client. After mounting, the filesystem appears mounted, but any operation results in a "Transport endpoint is not connected" message. gluster peer status and volume info don't complain. I've copied the mount log below, which mentions an error at fuse_init. The kernel is based on 2.6.15 and the FUSE API version is 7.3. I'm using
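A few quick checks that help narrow down fuse_init failures on an old kernel; a sketch, assuming default log locations:

```
# confirm the fuse device and filesystem support exist on this kernel
ls -l /dev/fuse
grep fuse /proc/filesystems

# the mount log records what happened at fuse_init
grep -i fuse /var/log/glusterfs/*.log
```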
2018 Feb 09
3
Failed to get quota limits
Hello, I am running GlusterFS 3.10.7 and just noticed, by doing a "gluster volume quota <volname> list", that my quotas on that volume are broken. The command returns no output and no errors, but by looking in /var/log/glusterfs/cli.log I found the following errors: [2018-02-09 19:31:24.242324] E [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get quota limits for
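For anyone reproducing this, a sketch of the two commands involved; the volume name is a placeholder and the CLI log path is the usual default:

```
# the list command returns silently while the failure lands in the CLI log
gluster volume quota myvolume list
tail -n 50 /var/log/glusterfs/cli.log
```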