Displaying 20 results from an estimated 1000 matches similar to: "Understanding client logs"
2018 Jan 23
0
Understanding client logs
Marcus,
Please paste the name-version-release of the primary glusterfs package on
your system.
If possible, also describe the typical workload that happens at the mount
via the user application.
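For reference, a minimal sketch of how that name-version-release is usually gathered, assuming an RPM-based install such as CentOS (package names may vary by distribution):
rpm -q glusterfs glusterfs-fuse
glusterfs --version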
On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all,
> I have a problem pinpointing an error: users of
> my system experience processes that
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
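For reference, a 2 x (2 + 1) replicated/distributed layout like the one described is typically created with the replica 3 arbiter 1 form of volume create; the volume name and brick paths below are only placeholders:
gluster volume create urd-gds-volume replica 3 arbiter 1 \
    urd-gds-001:/bricks/b1 urd-gds-002:/bricks/b1 urd-gds-000:/bricks/arb1 \
    urd-gds-003:/bricks/b2 urd-gds-004:/bricks/b2 urd-gds-000:/bricks/arb2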
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
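For reference, a sketch of what those command lines typically look like; the volume name, hostnames and mount point here are placeholders:
gluster peer probe urd-gds-002
mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 \
    urd-gds-001:/urd-gds-volume /mnt/urd-gds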
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
All machines have two network interfaces and are connected to
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
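For reference, a sketch of what the getfattr output on a replica brick usually looks like; the brick path is a placeholder and the values are illustrative. Non-zero trusted.afr.* counters typically indicate pending heals against the corresponding brick:
getfattr -d -e hex -m . /bricks/brick1/path/to/file
# file: bricks/brick1/path/to/file
trusted.afr.home-client-0=0x000000000000000000000000
trusted.afr.home-client-1=0x000000020000000000000000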
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
2018 Feb 13
2
Failed to get quota limits
Hi Hari,
Sorry for not providing more details from the start. Below you will find all the relevant log entries and info. Regarding the quota.conf file, I have found one for my volume but it is a binary file. Is it supposed to be binary or text?
Regards,
M.
*** gluster volume info myvolume ***
Volume Name: myvolume
Type: Replicate
Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
Status:
2018 Feb 13
0
Failed to get quota limits
Hi,
Part of the log won't be enough to debug the issue;
I need all of the log messages to date.
You can send them as attachments.
Yes, the quota.conf is a binary file.
I also need the volume status output.
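For reference, a sketch of how the volume status and the binary quota.conf could be collected; the path below assumes the default glusterd working directory:
gluster volume status myvolume
xxd /var/lib/glusterd/vols/myvolume/quota.conf | head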
On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote:
> Hi Hari,
> Sorry for not providing more details from the start. Below you will
> find all
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files for last week? I think logrotate rotates them on a weekly basis.
The only two quota commands we use are the following:
gluster volume quota myvolume limit-usage /directory 10GB
gluster volume quota myvolume list
basically to set a new quota or to list the current
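For reference, the list command can also be pointed at a single path, and the volume info output shows whether the quota feature is still enabled; a quick sketch:
gluster volume quota myvolume list /directory
gluster volume info myvolume | grep -i quota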
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
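For reference, a sketch of the heal-status commands that are usually checked alongside the logs for a volume like this one:
gluster volume heal home info
gluster volume heal home info split-brain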
2018 Feb 13
2
Failed to get quota limits
Were you able to set new limits after seeing this error?
On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowtham at redhat.com> wrote:
> Yes, I need the log files from that duration; the rotated log files from after
> hitting the issue aren't necessary, but the ones from before hitting the issue
> are needed (not just from when you hit it, but from before you hit it as well).
>
>
> Yes,
2018 Feb 12
0
Failed to get quota limits
Hi,
Can you provide more information, like the volume configuration, the quota.conf
file and the log files?
On Sat, Feb 10, 2018 at 1:05 AM, mabi <mabi at protonmail.ch> wrote:
> Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
quota <volname> list" that my quotas on that volume are broken. The command
returns no output and no errors
2018 Feb 13
0
Failed to get quota limits
Yes, I need the log files from that duration; the rotated log files from after
hitting the issue aren't necessary, but the ones from before hitting the issue
are needed (not just from when you hit it, but from before you hit it as well).
Yes, you have to do a stat from the client through fuse mount.
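For reference, a sketch of a recursive stat over the fuse mount, which is often suggested to refresh the quota accounting; the mount point is a placeholder:
find /mnt/myvolume -exec stat {} \; > /dev/null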
On Tue, Feb 13, 2018 at 3:56 PM, mabi <mabi at protonmail.ch> wrote:
> Thank you for your answer. This
2018 Feb 06
0
geo-replication
Hi again,
I made some more tests and the behavior I see is that if any of
the slaves are down, the geo-replication stops working.
Is this the way distributed volumes work: if one server goes down,
the entire system stops working?
Do the servers that are online not continue to work?
Sorry for asking stupid questions.
Best regards
Marcus
On Tue, Feb 06, 2018 at 12:09:40PM +0100, Marcus Pedersén
2018 Mar 02
1
geo-replication
Hi again,
I have been testing and reading up on other solutions
and just wanted to check if my ideas are OK.
I have been looking at dispersed volumes and wonder if there are any
problems running a replicated-distributed cluster on the master side and
a dispersed-distributed cluster on the slave side of a geo-replication.
Second thought: running dispersed on both sides, is that a problem
(Master:
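For reference, a sketch of how a distributed-dispersed slave volume might be created (hostnames, volume name and brick paths are placeholders); each group of 6 bricks forms one disperse set with redundancy 2:
gluster volume create geo-slave-vol disperse 6 redundancy 2 \
    geo-00{1..6}:/bricks/brick1/data geo-00{1..6}:/bricks/brick2/data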
2018 Feb 06
4
geo-replication
Hi all,
I am planning my new gluster system and have tested things out in
a bunch of virtual machines.
I need a bit of help to understand how geo-replication behaves.
I have a master gluster cluster with replica 2
(in production I will use an arbiter and replicated/distributed)
and the geo cluster is distributed with 2 machines.
(in production I will have the geo cluster distributed)
Everything is up
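For reference, a sketch of the usual root-based geo-replication session setup between a master volume and a slave volume (names and hosts are placeholders):
gluster system:: execute gsec_create
gluster volume geo-replication mastervol geo-001::slavevol create push-pem
gluster volume geo-replication mastervol geo-001::slavevol start
gluster volume geo-replication mastervol geo-001::slavevol status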
2018 Feb 13
0
Failed to get quota limits
I tried to set the limits as you suggested by running the following command:
$ sudo gluster volume quota myvolume limit-usage /directory 200GB
volume quota : success
but then when I list the quotas there is still nothing, so nothing really happened.
I also tried to run stat on all directories which have a quota, but nothing happened either.
I will send you all the other logfiles tomorrow, as
2017 Dec 05
4
SAMBA VFS module for GlusterFS crashes
Hello,
I'm trying to set up a SAMBA server serving a GlusterFS volume.
Everything works fine if I locally mount the GlusterFS volume (`mount
-t glusterfs ...`) and then serve the mounted FS through SAMBA, but
performance is 2x-3x slower than a SAMBA server with a
local ext4 filesystem.
I gather that the SAMBA vfs_glusterfs module can give better
performance. However, as soon as I
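For reference, a sketch of an smb.conf share exporting a volume through vfs_glusterfs; the share and volume names are placeholders, and kernel share modes usually has to be disabled for this module:
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = myvolume
    glusterfs:logfile = /var/log/samba/glusterfs-myvolume.log
    glusterfs:loglevel = 7
    kernel share modes = no
    read only = no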
2018 Feb 09
3
Failed to get quota limits
Hello,
I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume quota <volname> list" that my quotas on that volume are broken. The command returns no output and no errors but by looking in /var/log/glusterfs.cli I found the following errors:
[2018-02-09 19:31:24.242324] E [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get quota limits for