Displaying 20 results from an estimated 900 matches similar to: "[Gluster-devel] cluster/dht: restrict migration of opened files"
2017 Sep 25
0
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
FYI - I've been testing the Gluster 3.12.1 packages with the help of the SIG maintainer and I can confirm that the logs are no longer being filled with NFS or null client errors after the upgrade.
--
Sam McLeod
@s_mcleod
https://smcleod.net
> On 18 Sep 2017, at 10:14 pm, Sam McLeod <mailinglists at smcleod.net> wrote:
>
> Thanks Milind,
>
> Yes I'm hanging out for
2017 Sep 18
2
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Thanks Milind,
Yes, I'm hanging out for CentOS's Storage / Gluster SIG to release the packages for 3.12.1; I can see the packages were built a week ago, but they're still not in the repo :(
--
Sam
> On 18 Sep 2017, at 9:57 pm, Milind Changire <mchangir at redhat.com> wrote:
>
> Sam,
> You might want to give glusterfs-3.12.1 a try instead.
>
>
>
>> On Fri, Sep
2018 Jan 16
2
cluster/dht: restrict migration of opened files
All,
Patch [1] prevents migration of opened files during rebalance operations. If
patch [1] affects you, please voice your concerns. [1] is a stop-gap
fix for the problem discussed in issues [2] and [3].
[1] https://review.gluster.org/#/c/19202/
[2] https://github.com/gluster/glusterfs/issues/308
[3] https://github.com/gluster/glusterfs/issues/347
regards,
Raghavendra
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test to see system behavior *if quota is turned off*.
Although the file size that failed migration was 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors?
I can see your response to Alex with the disk space consumption and
2017 Sep 03
0
Glusterd process hangs on reboot
----- Original Message -----
> From: "Milind Changire" <mchangir at redhat.com>
> To: "Serkan Çoban" <cobanserkan at gmail.com>
> Cc: "Gluster Users" <gluster-users at gluster.org>
> Sent: Saturday, September 2, 2017 11:44:40 PM
> Subject: Re: [Gluster-users] Glusterd process hangs on reboot
>
> No worries Serkan,
> You can
2017 Sep 03
0
Glusterd process hangs on reboot
i usually change event threads to 4. But those logs are from a default
installation.
On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <bturner at redhat.com> wrote:
> ----- Original Message -----
>> From: "Ben Turner" <bturner at redhat.com>
>> To: "Serkan Çoban" <cobanserkan at gmail.com>
>> Cc: "Gluster Users" <gluster-users at
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option               Value
------               -----
cluster.watermark-hi 90
# gluster volume get <vol> cluster.watermark-low
Option
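The semantics behind these two watermarks can be sketched as a simple decision. This is a simplified model only: real gluster tiering throttles promotion probabilistically between the two marks, and `tier_state` is a hypothetical helper name, not a gluster command.

```shell
#!/bin/sh
# Simplified sketch of tier watermark semantics.
# used / hi / low are integer percentages of hot-tier capacity.
tier_state() {
  used=$1 hi=$2 low=$3
  if [ "$used" -ge "$hi" ]; then
    echo "demote-only"      # hot tier nearly full: only demote files
  elif [ "$used" -le "$low" ]; then
    echo "promote-freely"   # plenty of room: promote hot files
  else
    echo "probabilistic"    # between watermarks: throttled promotion
  fi
}

tier_state 95 90 75   # hot tier 95% used, with hi=90 and low=75
```

With the defaults shown above (hi=90, low=75), a hot tier at 95% usage would stop promoting entirely until demotion frees space.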
2017 Sep 03
2
Glusterd process hangs on reboot
----- Original Message -----
> From: "Ben Turner" <bturner at redhat.com>
> To: "Serkan Çoban" <cobanserkan at gmail.com>
> Cc: "Gluster Users" <gluster-users at gluster.org>
> Sent: Sunday, September 3, 2017 2:30:31 PM
> Subject: Re: [Gluster-users] Glusterd process hangs on reboot
>
> ----- Original Message -----
> >
2018 Jan 18
1
[Gluster-devel] cluster/dht: restrict migration of opened files
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operations.
> If patch [1] affects you, please voice your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2] and [3].
>
What is the impact on VM and gluster-block use cases after this patch? Will
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
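For the "no space left on device" errors above, the used-space percentage of the filesystem backing each brick can be read with standard tools. A minimal sketch, assuming the brick path is known per deployment; `brick_usage` is a hypothetical helper name.

```shell
#!/bin/sh
# Print the used-space percentage of the filesystem backing a brick path.
brick_usage() {
  # -P forces POSIX single-line output, so awk can rely on column 5 (Use%)
  df -P "$1" | awk 'NR==2 { sub("%", "", $5); print $5 }'
}

brick_usage /    # prints an integer percentage for the root filesystem
```

Running this against each brick directory quickly shows whether any brick is close to full before digging further into tiering behavior.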
2013 Nov 22
0
[Announce] Samba 4.1.2 Available for Download
==================================================================
"My fake plants died because I did
not pretend to water them."
Mitch Hedberg
==================================================================
Release Announcements
---------------------
This is the latest stable release of Samba 4.1.
Changes since 4.1.1:
--------------------
o Jeremy Allison
2017 Sep 03
2
Glusterd process hangs on reboot
No worries Serkan,
You can continue to use your 40 node clusters.
The backtrace has resolved the function names and it *should* be sufficient
to debug the issue.
Thanks for letting us know.
We'll post on this thread again to notify you about the findings.
On Sat, Sep 2, 2017 at 2:42 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi Milind,
>
> Anything new about the
2017 Sep 02
0
Glusterd process hangs on reboot
Hi Milind,
Anything new about the issue? Were you able to find the problem? Is
there anything else you need?
I will continue with two clusters of 40 servers each, so I will not be
able to provide any further info for the 80-server setup.
On Fri, Sep 1, 2017 at 10:30 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
> You can find pstack samples here:
>
2017 Sep 04
0
Glusterd process hangs on reboot
>1. On the 80-node cluster, did you reboot only one node or multiple ones?
Tried both; the result is the same, but the logs/stacks are from stopping and
starting glusterd on only one server while the others are running.
>2. Are you sure that pstack output was always constantly pointing at strcmp being stuck?
It stays in a 100% CPU-consuming state for 70-80 minutes; the stacks I sent
are from the first 5-10 minutes.
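The 100% CPU phase described here can be confirmed over time by sampling the process with `ps`. A sketch only; `cpu_of` is a hypothetical helper name, and the polling loop is shown commented out since it assumes a running glusterd.

```shell
#!/bin/sh
# Report the CPU usage of a pid, as printed by ps (%cpu column, no header).
cpu_of() {
  ps -o %cpu= -p "$1" | tr -d ' '
}

# Example: poll glusterd every 10 seconds until interrupted
# while sleep 10; do printf '%s %s\n' "$(date)" "$(cpu_of "$(pidof glusterd)")"; done
```

Logging these samples alongside timestamps makes it easy to correlate the high-CPU window with the pstack captures discussed in this thread.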
2017 Sep 04
0
Glusterd process hangs on reboot
I have been using a 60-server, 1560-brick 3.7.11 cluster without
problems for 1 year. I did not see this problem with it.
Note that this problem does not happen when I install packages, start
glusterd, peer probe, and create the volumes; it only appears after a
glusterd restart.
Also note that this still happens without any volumes, so I don't
think it is related to brick count...
On Mon, Sep 4, 2017
2017 Sep 04
2
Glusterd process hangs on reboot
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchangir at redhat.com> wrote:
> Serkan,
> I have gone through the other mails in the thread as well, but am responding
> to this one specifically.
>
> Is this a source install or an RPM install ?
> If this is an RPM install, could you please install the
> glusterfs-debuginfo RPM and retry to capture the gdb backtrace.
>
2017 Sep 04
2
Glusterd process hangs on reboot
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >1. On the 80-node cluster, did you reboot only one node or multiple ones?
> Tried both; the result is the same, but the logs/stacks are from stopping and
> starting glusterd on only one server while the others are running.
>
> >2. Are you sure that pstack output was always constantly pointing at
>
2017 Sep 01
2
Glusterd process hangs on reboot
Hi,
You can find pstack samples here:
https://www.dropbox.com/s/6gw8b6tng8puiox/pstack_with_debuginfo.zip?dl=0
Here is the first one:
Thread 8 (Thread 0x7f92879ae700 (LWP 78909)):
#0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000310fe37d57 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0
#2 0x0000003d99c07aa1 in start_thread () from /lib64/libpthread.so.0
#3
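Capturing several stack samples spaced in time, as was done for the dump above, can be scripted. A sketch under stated assumptions: `sample_stacks` is a hypothetical helper name, and it assumes the `pstack` utility (and ideally the glusterfs-debuginfo package, for resolved symbols) is installed.

```shell
#!/bin/sh
# Dump the stack of a pid N times, one file per sample, pausing in between.
sample_stacks() {
  pid=$1 n=$2 prefix=${3:-pstack}
  i=1
  while [ "$i" -le "$n" ]; do
    pstack "$pid" > "$prefix.$i.txt" || return 1
    sleep 1
    i=$((i + 1))
  done
}

# Example: sample_stacks "$(pidof glusterd)" 5 /tmp/glusterd-stack
```

Comparing successive samples shows whether threads are genuinely stuck in one function or merely passing through it.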
2017 Sep 18
0
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Sam,
You might want to give glusterfs-3.12.1 a try instead.
On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:
> Howdy,
>
> I'm setting up several gluster 3.12 clusters running on CentOS 7 and have
> been having issues with glusterd.log and glustershd.log both being filled with
> errors relating to null client errors and client-callback
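To gauge how noisy the logs are before and after the 3.12.1 upgrade, the null-client errors can simply be counted. A sketch; `count_null_client` is a hypothetical helper name, and the actual log path varies by daemon (glusterd.log, glustershd.log, or a per-mount log).

```shell
#!/bin/sh
# Count log lines containing the null-client error signature.
count_null_client() {
  grep -c "0-client_t: null client" "$1"
}

# Example: count_null_client /var/log/glusterfs/glustershd.log
```

Running the count on a rotated log from before the upgrade and on the current log gives a quick before/after comparison.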