2017 Oct 16
0
Gluster CLI Feedback
Gentle reminder.
Thanks to those who have already responded.
Nithya
On 11 October 2017 at 14:38, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
> As part of our initiative to improve Gluster usability, we would like
> feedback on the current Gluster CLI. Gluster 4.0 upstream development is
> currently in progress and it is an ideal time to consider CLI changes.
2017 Oct 11
3
Gluster CLI Feedback
Hi,
As part of our initiative to improve Gluster usability, we would like
feedback on the current Gluster CLI. Gluster 4.0 upstream development is
currently in progress and it is an ideal time to consider CLI changes.
Answers to the following would be appreciated:
1. How often do you use the Gluster CLI? Is it a preferred method to
manage Gluster?
2. What operations do you commonly
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya,
I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran <nbalacha at redhat.com>
Date: Wednesday, January 31, 2018 at 11:26 AM
To: Eva Freer <freereb at ornl.gov>
Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer,
Our analysis is that this issue is caused by
https://review.gluster.org/17618. Specifically, in
'gd_set_shared_brick_count()' from
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c
.
But even if we fix it today, I don't think we have a release planned
immediately for shipping this. Are you planning to fix the code and
re-compile?
Regards,
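A minimal sketch of the arithmetic behind the symptom (numbers are illustrative, not from the thread): each brick divides its statfs result by shared-brick-count, the number of bricks glusterd believes share that brick's backing filesystem, so that df sums to the true capacity. When bricks on separate partitions are wrongly counted as sharing one filesystem, the reported size shrinks by that factor.

```shell
# Illustrative only: how a wrong shared-brick-count shrinks what df reports.
brick_size_kb=$((10 * 1024 * 1024 * 1024))   # a 10 TiB brick, in KiB
shared_brick_count=2   # wrong: this brick sits on its own xfs partition
reported_kb=$((brick_size_kb / shared_brick_count))
echo "df sees ${reported_kb} KiB instead of ${brick_size_kb} KiB"
```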
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar,
Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting the shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release.
Thanks,
Eva (865) 574-6894
From: Amar Tumballi <atumball at redhat.com>
Date: Wednesday, January 31, 2018 at 12:15 PM
To: Eva Freer
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva,
I'm sorry, but I need to get in touch with another developer to check on the
changes here, and he will be available only tomorrow. Is there someone
else I could work with while you are away?
Regards,
Nithya
On 31 January 2018 at 22:00, Freer, Eva B. <freereb at ornl.gov> wrote:
> Nithya,
>
>
>
> I will be out of the office for ~10 days starting tomorrow. Is
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan,
I had a suggestion and a question in my previous response. Let us know whether the suggestion helps, and please tell us about your data-set (how many directories/files there are and how they are organised) so we can understand the problem better.
<snip>
> In the
> meantime can you remount glusterfs with options
> --entry-timeout=0 and
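For reference, the remount suggested above can be expressed as a mount command. The server, volume, and mount-point names here are assumptions; entry-timeout=0 disables the FUSE entry cache. The command is printed rather than executed, since remounting requires a live volume:

```shell
# Assumed names: server1:/myvol and /mnt/myvol -- substitute your own.
MOUNT_OPTS="entry-timeout=0"   # disable the FUSE entry cache, per the suggestion above
echo "mount -t glusterfs -o ${MOUNT_OPTS} server1:/myvol /mnt/myvol"
```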
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi,
I think we have a workaround until we have a fix in the code. The
following worked on my system.
Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
might need to create the filter directory in this path.)
Make sure the file has execute permissions. On my system:
[root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root at rhgsserver1 3.12.5]# l
total 4.0K
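A sketch of the filter mechanism this workaround relies on, assuming glusterd executes every script in <libdir>/glusterfs/<version>/filter/ with the path of each regenerated volfile as its argument. The script name, install prefix, and sed pattern below are assumptions for illustration, not the attached file itself:

```shell
# Hedged sketch: install a filter script that forces shared-brick-count
# back to 1 in every volfile glusterd regenerates. Adjust GLUSTER_LIB to
# your installed version (note the 3.12.4 vs 3.12.5 mismatch above).
GLUSTER_LIB="${GLUSTER_LIB:-/usr/lib/glusterfs/3.12.4}"
mkdir -p "$GLUSTER_LIB/filter"   # the filter directory may not exist yet
cat > "$GLUSTER_LIB/filter/reset-shared-brick-count" <<'EOF'
#!/bin/sh
# glusterd invokes this with the volfile path as $1.
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' "$1"
EOF
chmod +x "$GLUSTER_LIB/filter/reset-shared-brick-count"   # must be executable
```

The filter only runs when glusterd next regenerates the volfiles, so the corrected values appear after a volume set or similar operation rather than immediately.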
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not re-start the volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran <nbalacha at redhat.com>
Date: Wednesday, January 31, 2018 at 11:14 AM
To: Eva Freer <freereb at ornl.gov>
Cc: "Greene, Tami McFarlin" <greenet at
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
> Hi Dan,
>
> It sounds like you might be running into [1]. The patch has been posted
> upstream and the fix should be in the next release.
> In the meantime, I'm afraid there is no way to get around this without
> restarting the process.
>
> Regards,
> Nithya
>
>
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:50, Freer, Eva B. <freereb at ornl.gov> wrote:
> The values for shared-brick-count are still the same. I did not re-start
> the volume after setting the cluster.min-free-inodes to 6%. Do I need to
> restart it?
>
>
>
That is not necessary. Let me get back to you on this tomorrow.
Regards,
Nithya
> Thanks,
>
> Eva (865) 574-6894
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
Nithya,
Responding to an earlier question: Before the upgrade, we were at 3.10.3 on these servers, but some of the clients were 3.7.6. From below, does this mean that 'shared-brick-count' needs to be set to 1 for all bricks?
All of the bricks are on separate xfs partitions composed of hardware RAID 6 volumes. LVM is not used. The current setting for cluster.min-free-inodes was 5%. I changed it to
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:34, Freer, Eva B. <freereb at ornl.gov> wrote:
> Nithya,
>
>
>
> Responding to an earlier question: Before the upgrade, we were at 3.10.3 on
> these servers, but some of the clients were 3.7.6. From below, does this
> mean that 'shared-brick-count' needs to be set to 1 for all bricks?
>
>
>
> All of the bricks are on separate xfs
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote:
>
>
> On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
>> Hi Dan,
>>
>> It sounds like you might be running into [1]. The patch has been
>> posted upstream and the fix should be in the next release.
>> In the meantime, I'm afraid there is no way to get around this without
>> restarting the process.
>>
2018 Feb 02
3
Run away memory with gluster mount
Hi Dan,
It sounds like you might be running into [1]. The patch has been posted
upstream and the fix should be in the next release.
In the meantime, I'm afraid there is no way to get around this without
restarting the process.
Regards,
Nithya
[1]https://bugzilla.redhat.com/show_bug.cgi?id=1541264
On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com> wrote:
>
>
2017 Jul 13
2
Rebalance task fails
Hi Nithya,
I see index in context:
[2017-07-07 10:07:18.230202] E [MSGID: 106062]
[glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict]
0-glusterd: failed to get index
I wonder if there is anything I can do to fix it.
I was trying to strace the gluster process but still have no clue what
exactly the gluster index is.
Best regards,
Szymon Miotk
On Thu, Jul 13, 2017 at 10:12 AM, Nithya
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we fixed 2 issues where this kind of error message was coming and we were also seeing issues on mount.
In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues.
As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one. I am sure it
would have the fix.
----
Ashish
----- Original
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about multiple bugs regarding data corruptions with
>> sharding enabled ?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote:
>
> I remember we fixed 2 issues where this kind of error message was
> coming and we were also seeing issues on mount.
> In one of the cases the problem was in dht. Unfortunately, I don't
> remember the BZs for those issues.
>
I think the DHT BZ you are referring to is 1438423
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
Hi Eva,
One more question. What version of gluster were you running before the
upgrade?
Thanks,
Nithya
On 31 January 2018 at 09:52, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Eva,
>
> Can you send us the following:
>
> gluster volume info
> gluster volume status
>
> The log files and tcpdump for df on a fresh mount point for that volume.
>
>