Displaying 20 results from an estimated 6000 matches similar to: "SAMBA VFS module for GlusterFS crashes"
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
CentOS 7 and gluster version 3.12.6 on the servers.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
>
> CentOS 7 and gluster version 3.12.6 on the servers.
>
> All machines have two network interfaces and are connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
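A quick way to verify that every probed peer stayed connected is to filter the `gluster pool list` output for peers whose State column is anything other than Connected. A minimal sketch; the sample output below is illustrative (the "Disconnected" state for urd-gds-004 is made up, since the listing above is truncated), and in practice you would capture the live command instead:

```shell
# Sample `gluster pool list` output; capture it live with:
#   pool_output=$(gluster pool list)
pool_output='UUID                                  Hostname     State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002  Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003  Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004  Disconnected'

# Print any peer whose State column is not "Connected" (skip the header line)
bad_peers=$(printf '%s\n' "$pool_output" | awk 'NR > 1 && $3 != "Connected" { print $2 }')
echo "Peers not connected: ${bad_peers:-none}"
```

On a dual-network setup like the one described here, a check like this run over each interface's hostnames can show whether peers resolve and connect consistently on both networks.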
2018 Jan 23
2
Understanding client logs
Hi all,
I have a problem pinpointing an error: users of
my system experience processes that crash.
The thing that has changed since the crashes started
is that I added a gluster cluster.
Of course the users immediately started hammering my gluster cluster.
I started looking at logs, starting from the client side.
I just need help understanding how to read them the right way.
I can see that every ten
2018 Jan 23
0
Understanding client logs
Marcus,
Please paste the name-version-release of the primary glusterfs package on
your system.
If possible, also describe the typical workload that happens at the mount
via the user application.
On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all,
> I have a problem pinpointing an error: users of
> my system experience processes that
2018 Feb 13
2
Failed to get quota limits
Hi Hari,
Sorry for not providing more details from the start. Below you will find all the relevant log entries and info. Regarding the quota.conf file, I found one for my volume but it is a binary file. Is it supposed to be binary or text?
Regards,
M.
*** gluster volume info myvolume ***
Volume Name: myvolume
Type: Replicate
Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
Status:
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
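The getfattr output requested in item 2 above returns the AFR changelog attributes as hex strings; a non-zero `trusted.afr.*` value on a brick is what marks the file as having a pending heal. A small sketch that extracts those values from a capture; the brick path, the volume name "myvol", and the hex values below are made up for illustration:

```shell
# Sample output of: getfattr -d -e hex -m . /bricks/brick1/path/to/file
# (volume name "myvol" and the hex values are invented for this sketch)
sample='# file: bricks/brick1/path/to/file
trusted.afr.myvol-client-1=0x000000020000000000000000
trusted.gfid=0x8f4e3c2a1b0d4e6f9a7b5c3d2e1f0a9b'

# Pull out the AFR changelog entries; an all-zero value means no pending heal
afr=$(printf '%s\n' "$sample" | grep '^trusted\.afr\.' | cut -d= -f2)
echo "AFR changelog: $afr"
```

Comparing this value across all the bricks is what shows which copy the self-heal daemon considers out of date.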
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files for last week? I think logrotate rotates them on a weekly basis.
The only two quota commands we use are the following:
gluster volume quota myvolume limit-usage /directory 10GB
gluster volume quota myvolume list
basically to set a new quota or to list the current
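A healthy `gluster volume quota <volname> list` prints one row per limited path, so the "broken quota" symptom in this thread is a listing with no rows at all. A small sketch of that check; the column layout below is an illustrative approximation of the list output, not a verbatim capture:

```shell
# The two quota operations from the thread (these need a live cluster):
#   gluster volume quota myvolume limit-usage /directory 10GB
#   gluster volume quota myvolume list
# Illustrative approximation of a healthy `list` output:
quota_list='                  Path                   Hard-limit  Soft-limit      Used  Available
/directory                                  10.0GB      80%(8.0GB)  1.2GB     8.8GB'

# Count configured limits; zero rows after the header is the "no output" symptom
rows=$(printf '%s\n' "$quota_list" | tail -n +2 | grep -c '^/')
echo "Configured quota limits: $rows"
```

Piping the live command through the same count makes it easy to alert when the listing suddenly comes back empty.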
2018 Feb 13
0
Failed to get quota limits
Hi,
A part of the log won't be enough to debug the issue;
I need the complete log messages to date.
You can send them as attachments.
Yes, the quota.conf is a binary file.
And I need the volume status output too.
On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote:
> Hi Hari,
> Sorry for not providing you more details from the start. Here below you will
> find all
2018 Feb 13
2
Failed to get quota limits
Were you able to set new limits after seeing this error?
On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowtham at redhat.com> wrote:
> Yes, I need the log files from that period. The rotated log files from after
> hitting the
> issue aren't necessary, but the ones from before hitting the issue are needed
> (not just from when you hit it, but from before as well).
>
> Yes,
2018 Feb 12
0
Failed to get quota limits
Hi,
Can you provide more information like, the volume configuration, quota.conf
file and the log files.
On Sat, Feb 10, 2018 at 1:05 AM, mabi <mabi at protonmail.ch> wrote:
> Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
> quota <volname> list" that my quotas on that volume are broken. The command
> returns no output and no errors
2017 Dec 05
0
SAMBA VFS module for GlusterFS crashes
Keep in mind a local disk runs at 3, 6, or 12 Gbps, while a network connection is typically 1 Gbps. Four local disks in RAID 10 will outperform 10G Ethernet (especially using SAS drives).
On December 5, 2017 6:11:38 AM EST, Riccardo Murri <riccardo.murri at uzh.ch> wrote:
>Hello,
>
>I'm trying to set up a SAMBA server serving a GlusterFS volume.
>Everything works fine if I locally
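The bandwidth gap behind the 2x/3x slowdown can be made concrete: link speeds are quoted in gigabits per second, so dividing by 8 gives the theoretical byte throughput. A rough comparison of the rates mentioned above, ignoring protocol and framing overhead:

```shell
# Theoretical line rates, Gbit/s -> MB/s (divide by 8; overhead ignored)
for entry in "1GbE 1" "10GbE 10" "SAS-3 12"; do
    set -- $entry
    mbs=$(( $2 * 1000 / 8 ))
    echo "$1: ${mbs} MB/s"
done
```

So even a single SAS-3 drive's interface (1500 MB/s) tops a 10GbE link (1250 MB/s), and a 1GbE link (125 MB/s) is an order of magnitude below both, which is why a local ext4 server looks so much faster than one serving a network filesystem.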
2017 Dec 06
0
SAMBA VFS module for GlusterFS crashes
On Tue, 2017-12-05 at 11:11 +0000, Riccardo Murri wrote:
> Hello,
>
> I'm trying to set up a SAMBA server serving a GlusterFS volume.
> Everything works fine if I locally mount the GlusterFS volume (`mount
> -t glusterfs ...`) and then serve the mounted FS through SAMBA, but
> the performance is slower by a 2x/3x compared to a SAMBA server with a
> local ext4 filesystem.
2018 Feb 13
0
Failed to get quota limits
Yes, I need the log files from that period. The rotated log files from after
hitting the
issue aren't necessary, but the ones from before hitting the issue are needed
(not just from when you hit it, but from before as well).
Yes, you have to do a stat from the client through fuse mount.
On Tue, Feb 13, 2018 at 3:56 PM, mabi <mabi at protonmail.ch> wrote:
> Thank you for your answer. This
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
2018 Feb 09
3
Failed to get quota limits
Hello,
I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume quota <volname> list" that my quotas on that volume are broken. The command returns no output and no errors but by looking in /var/log/glusterfs.cli I found the following errors:
[2018-02-09 19:31:24.242324] E [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get quota limits for
2018 Feb 13
0
Failed to get quota limits
I tried to set the limits as you suggested by running the following command.
$ sudo gluster volume quota myvolume limit-usage /directory 200GB
volume quota : success
but then when I list the quotas there is still nothing, so nothing really happened.
I also tried to run stat on all directories which have a quota but nothing happened either.
I will send you tomorrow all the other logfiles as
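Running `stat` through the FUSE mount, as suggested earlier in the thread, forces a lookup on each quota-limited directory so the quota translator can refresh its accounting. A minimal sketch of that loop; the mount point here is a temporary directory standing in for the real glusterfs mount (e.g. /mnt/myvolume), so the snippet runs without a cluster:

```shell
# Stand-in for the real FUSE mount point, so the sketch is runnable
mount_point=$(mktemp -d)
mkdir -p "$mount_point/directory"

# stat each quota-limited path; through a real glusterfs mount this is the
# lookup that lets the quota translator refresh its accounting
for dir in /directory; do
    ftype=$(stat --format='%F' "$mount_point$dir")
    echo "$mount_point$dir: $ftype"
done
rm -rf "$mount_point"
```

On a real setup you would list every path that has a limit-usage set and point `mount_point` at the actual FUSE mount.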
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do a NFS mount and do a ls I get:
[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 ..
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk>
wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens