similar to: A Problem of readdir-optimize

Displaying 20 results from an estimated 1000 matches similar to: "A Problem of readdir-optimize"

2017 Dec 28
0
A Problem of readdir-optimize
Hi Paul, A few questions: What type of volume is this and what client protocol are you using? What version of Gluster are you using? Regards, Nithya On 28 December 2017 at 20:09, Paul <flypen at gmail.com> wrote: > Hi, All, > > If I set cluster.readdir-optimize to on, the performance of "ls" is > better, but I find one problem. > > # ls > # ls > files.1
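For reference, the details Nithya asks about can be gathered with the standard gluster CLI; a minimal sketch, assuming the volume is simply named vol (a placeholder) and the commands are run as root on a server node:
# gluster --version
# gluster volume info vol
# gluster volume set vol cluster.readdir-optimize on
(set it back to "off" to compare ls behaviour with and without the option)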
2017 Dec 29
1
A Problem of readdir-optimize
Hi Nithya, GlusterFS version is 3.11.0, and we use the native GlusterFS client. Please see the information below.
$ gluster v info vol
Volume Name: vol
Type: Distributed-Replicate
Volume ID: d59bd014-3b8b-411a-8587-ee36d254f755
Status: Started
Snapshot Count: 0
Number of Bricks: 90 x 2 = 180
Transport-type: tcp,rdma
Bricks: ...
Options Reconfigured:
performance.force-readdirp: false
2018 Apr 30
2
Gluster rebalance taking many years
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the files directly; through df -i I got that the approximate number of files is 63694442.
[root at CentOS-73-64-minimal ~]# df -i
Filesystem   Inodes      IUsed      IFree      IUse%  Mounted on
/dev/md2     131981312   30901030   101080282  24%    /
devtmpfs     8192893     435        8192458    1%     /dev
tmpfs
2018 Apr 30
0
Gluster rebalance taking many years
Hi, This value is an ongoing rough estimate based on the amount of data rebalance has migrated since it started. The values will change as the rebalance progresses. A few questions: 1. How many files/dirs do you have on this volume? 2. What is the average size of the files? 3. What is the total size of the data on the volume? Can you send us the rebalance log? Thanks, Nithya On 30
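A minimal sketch of how the estimate and the requested log are usually inspected, assuming the volume is named vol (a placeholder) and the default log directory:
# gluster volume rebalance vol status
# less /var/log/glusterfs/vol-rebalance.log
In recent versions the status output includes the per-node file counts and the estimated time remaining discussed in this thread.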
2018 Jan 15
1
"linkfile not having link" occurrs sometimes after renaming
There are two users, u1 and u2, in the cluster. Some files are created by u1, and they are read-only for u2. Of course u2 can read these files. Later these files are renamed by u1. Then I switch to user u2 and find that u2 can't list or access the renamed files. I see these errors in the log: [2018-01-15 17:35:05.133711] I [MSGID: 109045] [dht-common.c:2393:dht_lookup_cbk] 25-data-dht:
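For context, the link files that this message complains about can usually be examined directly on a brick; a minimal sketch with placeholder paths (DHT link files normally appear as zero-length files with mode ---------T and a trusted.glusterfs.dht.linkto xattr):
# ls -l /brick/dir/renamed-file
# getfattr -d -m . -e hex /brick/dir/renamed-file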
2017 Aug 23
2
Turn off readdirp
Hi, how can I turn off readdirp? I saw some solutions but they don't work. I tried on the server side:
gluster volume vol performance.force-readdirp off
gluster volume vol dht.force-readdirp off
gluster volume vol perforamnce.read-ahead off
and on the client side (FUSE mount):
mount -t glusterfs server:/vol /mnt -o user.readdirp=no
but I still see only readdirp requests on the server. What should I do?
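For comparison, volume options are normally changed with "gluster volume set" (the commands quoted above are missing the "set" keyword), and the FUSE client accepts a use-readdirp mount option; a minimal sketch, assuming a volume named vol and that these option names are supported by the installed version:
# gluster volume set vol performance.force-readdirp off
# gluster volume set vol performance.read-ahead off
# mount -t glusterfs server:/vol /mnt -o use-readdirp=no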
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_6. I updated my servers and clients to 3.12.4 and enabled these two options after reading about them in the 3.10.0 and 3.11.0 release notes. In the days after enabling these two options all of my
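For reference, the two options in question are per-volume settings; a minimal sketch of how they are typically enabled or rolled back, assuming a volume named vol (parallel-readdir generally requires readdir-ahead to be enabled):
# gluster volume set vol performance.readdir-ahead on
# gluster volume set vol performance.parallel-readdir on
# gluster volume reset vol performance.parallel-readdir
# gluster volume reset vol performance.readdir-ahead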
2017 Aug 23
0
Turn off readdirp
On Wed, Aug 23, 2017 at 08:06:18PM +0430, Tahereh Fattahi wrote: > Hi > How can I turn off readdirp? > I saw some solution but they don't work. > > I tried server side > gluster volume vol performance.force-readdirp off > gluster volume vol dht.force-readdirp off > gluster volume vol perforamnce.read-ahead off > and in client side fuse mount > mount -t glusterfs
2018 Jan 24
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding Poornima to take a look at it and comment. On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth <alan.orth at gmail.com> wrote: > Hello, > > I saw that parallel-readdir was an experimental feature in GlusterFS > version 3.10.0, became stable in version 3.11.0, and is now recommended for > small file workloads in the Red Hat Gluster Storage Server > documentation[2].
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether it is parallel-readdir or readdir-ahead that gives the disconnects, so we know which one to disable? parallel-readdir was doing magic in this presentation from last year: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote: > By the way, on a slightly related note, I'm pretty
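A minimal sketch of the isolation test being requested, assuming a volume named vol: disable one option at a time and watch whether the client disconnects stop.
# gluster volume set vol performance.parallel-readdir off
(run the workload and check the client logs, then also)
# gluster volume set vol performance.readdir-ahead off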
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad, I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the group metadata-cache[0], which I enabled the same day as the parallel-readdir and readdir-ahead options: $ gluster volume set homes
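The truncated command at the end of this excerpt is presumably the group-profile form of volume set; a minimal sketch, using the volume name homes from the excerpt:
$ gluster volume set homes group metadata-cache
The group keyword applies a predefined set of caching-related options shipped with Gluster rather than a single option.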
2018 May 30
2
RDMA inline threshold?
Forgot to mention, sometimes I have to force start other volumes as well; it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary
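A minimal sketch of the force start and the status check being described, with rhev_vms_primary taken from the excerpt:
# gluster volume start rhev_vms_primary force
# gluster volume status rhev_vms_primary
The Online column in the status output shows which brick processes are actually up.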
2018 Jan 07
0
performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7
With performance.readdir-ahead set to on on the volume, folders on mounts become invisible to the ls command, but files show fine. Folders show fine with ls on the bricks. What am I missing? Maybe some settings are incompatible; I guess over-tuning happened.
vm1:/t1 /home/t1 glusterfs
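A minimal sketch of backing the suspect option out again, assuming the volume is the t1 referenced in the quoted mount line:
# gluster volume set t1 performance.readdir-ahead off
(or "gluster volume reset t1 performance.readdir-ahead" to return to the default)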
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya, I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:26 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar, Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed with setting the shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release. Thanks, Eva (865) 574-6894 From: Amar Tumballi <atumball at redhat.com> Date: Wednesday, January 31, 2018 at 12:15 PM To: Eva Freer
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi, I think we have a workaround until we have a fix in the code. The following worked on my system. Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You might need to create the filter directory in this path.) Make sure the file has execute permissions. On my system:
[root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root at rhgsserver1 3.12.5]# l
total 4.0K
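A minimal sketch of the steps being described; the filter script itself is the attachment and is not reproduced here, the file name is a placeholder, and the version directory must match the installed Gluster version:
# mkdir -p /usr/lib/glusterfs/3.12.4/filter
# cp <attached-filter-script> /usr/lib/glusterfs/3.12.4/filter/
# chmod +x /usr/lib/glusterfs/3.12.4/filter/<attached-filter-script>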
2018 May 30
0
RDMA inline threshold?
Dear Dan, thanks for the quick reply! I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all brick processes are running. My volume is a distribute-only volume (not dispersed, no sharding). I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer, Our analysis is that this issue is caused by https://review.gluster.org/17618. Specifically, in 'gd_set_shared_brick_count()' from https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c . But even if we fix it today, I don't think we have a release planned immediately for shipping this. Are you planning to fix the code and re-compile? Regards,
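For anyone following along, the shared-brick-count value discussed in this thread can usually be inspected in the brick volfiles generated by glusterd; a minimal sketch, assuming the default working directory and a placeholder volume name:
# grep -n shared-brick-count /var/lib/glusterd/vols/<volname>/*.vol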
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not re-start the volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:14 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at