search for: readdirp

Displaying 20 results from an estimated 65 matches for "readdirp".

2017 Aug 23
2
Turn off readdirp
Hi
How can I turn off readdirp? I saw some solutions but they don't work.

I tried server side:
gluster volume vol performance.force-readdirp off
gluster volume vol dht.force-readdirp off
gluster volume vol performance.read-ahead off

and on the client side (fuse mount):
mount -t glusterfs server:/vol /mnt -o user.readdirp=no
but I...
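The volume-level commands quoted above are missing the `set` keyword, and the FUSE mount option is spelled `use-readdirp` (not `user.readdirp`), which may be why they had no effect. A minimal sketch of the usual invocations, assuming a volume named `vol` served from a host named `server`:

```shell
# Server side: disable forced readdirp on volume "vol" (names are examples).
# Note the "set" keyword, which the commands quoted above omit.
gluster volume set vol performance.force-readdirp off
gluster volume set vol dht.force-readdirp off
gluster volume set vol performance.read-ahead off

# Client side: the FUSE mount option is "use-readdirp", not "user.readdirp".
mount -t glusterfs server:/vol /mnt -o use-readdirp=no
```

The resulting settings can be verified with `gluster volume get vol all | grep readdirp`.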
2017 Aug 23
0
Turn off readdirp
On Wed, Aug 23, 2017 at 08:06:18PM +0430, Tahereh Fattahi wrote:
> Hi
> How can I turn off readdirp?
> I saw some solutions but they don't work.
>
> I tried server side
> gluster volume vol performance.force-readdirp off
> gluster volume vol dht.force-readdirp off
> gluster volume vol performance.read-ahead off
> and in client side fuse mount
> mount -t glusterfs serv...
2018 May 30
0
RDMA inline threshold?
...ar Dan, thanks for the quick reply! I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all brick processes are running. My volume is a distribute-only volume (not dispersed, no sharding). I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change anything. I found two options I might try (gluster volume get myvolumename all | grep readdirp):
performance.force-readdirp  true
dht.force-readdirp  on
Can I turn off...
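The two options named in the message can be inspected and toggled from the gluster CLI. A sketch, reusing the poster's placeholder volume name `myvolumename`:

```shell
# Show the current readdirp-related settings for the volume
gluster volume get myvolumename all | grep readdirp

# Turn both off (the values reported above were "true" and "on")
gluster volume set myvolumename performance.force-readdirp false
gluster volume set myvolumename dht.force-readdirp off
```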
2018 May 30
2
RDMA inline threshold?
...ansport endpoint not connected" error. The effect is that "ls"
>> only shows some files, but not all.
>>
>> The respective log file shows this error message:
>>
>> [2018-05-20 20:38:25.114978] W [MSGID: 114031]
>> [client-rpc-fops.c:2578:client3_3_readdirp_cbk] 0-glurch-client-0:
>> remote operation failed [Transport endpoint is not connected]
>> [2018-05-20 20:38:27.732796] W [MSGID: 103046]
>> [rdma.c:4089:gf_rdma_process_recv] 0-rpc-transport/rdma: peer (
>> 10.100.245.18:49153), couldn't encode or decode the msg proper...
2018 Jan 07
0
performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7
...it will show files fine; it shows folders fine with ls on bricks. What am I missing? Maybe some settings are incompatible; I guess over-tuning happened.

vm1:/t1 /home/t1 glusterfs defaults,_netdev,backupvolfile-server=vm2,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,fetch-attempts=5 0 0

glusterfs.x86_64 3.13.1-1.el7 installed
glusterfs-api.x86_64 3.13.1-1.el7 installed
glusterfs-cli.x86_64 3.13.1-1.el7 installed
glusterfs-client-xlators.x86_64 3.1...
2017 Dec 29
1
A Problem of readdir-optimize
...sterFS. Please see the below information.

$ gluster v info vol
Volume Name: vol
Type: Distributed-Replicate
Volume ID: d59bd014-3b8b-411a-8587-ee36d254f755
Status: Started
Snapshot Count: 0
Number of Bricks: 90 x 2 = 180
Transport-type: tcp,rdma
Bricks: ...
Options Reconfigured:
performance.force-readdirp: false
dht.force-readdirp: off
performance.read-ahead: on
performance.client-io-threads: on
diagnostics.client-sys-log-level: CRITICAL
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
cluster.self-heal-daemon: enable
performance.readdir-ahead: on
diagnostics.cli...
2018 Apr 30
2
Gluster rebalance taking many years
2018 Apr 30
0
Gluster rebalance taking many years
...to complete : 9919:44:34
> volume rebalance: web: success
>
> the rebalance log
> [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.12.8 (args: /usr/sbin/glusterfs -s localhost
> --volfile-id rebalance/web --xlator-option *dht.use-readdirp=yes
> --xlator-option *dht.lookup-unhashed=yes --xlator-option
> *dht.assert-no-child-down=yes --xlator-option
> *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off
> --xlator-option *replicate*.entry-self-heal=off --xlator-option
> *dht.readdir-optimize...
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
...s at hosted-power.com> Sent: Tue 11-07-2017 18:48 Subject: RE: [Gluster-users] Gluster native mount is really slow compared to nfs CC: gluster-users at gluster.org; To: Vijay Bellur <vbellur at redhat.com>;

PS: I just tested between these 2:

mount -t glusterfs -o negative-timeout=1,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

So it means only 1 second negative timeout... In this particular test: ./smallfile_cli.py --to...
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
..., 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com> wrote:

Hello Joe,

I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0 ...
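For reference, the mount invocation and smallfile benchmark run from the message, unwrapped into runnable form (the IPs, paths, and parameters are the poster's, not generic defaults):

```shell
# Mount with long caching timeouts and readdirp disabled on the FUSE client
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log \
  192.168.140.41:/www /var/www

# Run the smallfile benchmark against the mounted volume
./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 \
  --threads 8 --files 5000 --file-size 64 --record-size 64
```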
2017 Dec 28
0
A Problem of readdir-optimize
Hi Paul,

A few questions: What type of volume is this and what client protocol are you using? What version of Gluster are you using?

Regards,
Nithya

On 28 December 2017 at 20:09, Paul <flypen at gmail.com> wrote:
> Hi, All,
>
> If I set cluster.readdir-optimize to on, the performance of "ls" is
> better, but I find one problem.
>
> # ls
> # ls
> files.1
2018 May 29
2
RDMA inline threshold?
..."ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" only shows some files, but not all.

The respective log file shows this error message:

[2018-05-20 20:38:25.114978] W [MSGID: 114031] [client-rpc-fops.c:2578:client3_3_readdirp_cbk] 0-glurch-client-0: remote operation failed [Transport endpoint is not connected]
[2018-05-20 20:38:27.732796] W [MSGID: 103046] [rdma.c:4089:gf_rdma_process_recv] 0-rpc-transport/rdma: peer (10.100.245.18:49153), couldn't encode or decode the msg properly or write chunks were not provided...
2018 Apr 30
1
Gluster rebalance taking many years
...44:34
>> volume rebalance: web: success
>>
>> the rebalance log
>> [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running
>> /usr/sbin/glusterfs version 3.12.8 (args: /usr/sbin/glusterfs -s localhost
>> --volfile-id rebalance/web --xlator-option *dht.use-readdirp=yes
>> --xlator-option *dht.lookup-unhashed=yes --xlator-option
>> *dht.assert-no-child-down=yes --xlator-option
>> *replicate*.data-self-heal=off --xlator-option
>> *replicate*.metadata-self-heal=off
>> --xlator-option *replicate*.entry-self-heal=off --xlator-option *...
2018 Apr 30
0
Gluster rebalance taking many years
...mated time left for rebalance to complete : 9919:44:34
volume rebalance: web: success

the rebalance log
[glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.8 (args: /usr/sbin/glusterfs -s localhost --volfile-id rebalance/web --xlator-option *dht.use-readdirp=yes --xlator-option *dht.lookup-unhashed=yes --xlator-option *dht.assert-no-child-down=yes --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off --xlator-option *dht.readdir-optimize=on --xlator-option *dht....
2017 Dec 28
2
A Problem of readdir-optimize
Hi, All,

If I set cluster.readdir-optimize to on, the performance of "ls" is better, but I find one problem.

# ls
# ls
files.1 files.2 file.3

I run ls twice. The first time, ls returns nothing. The second time, ls returns all file names. If I turn off cluster.readdir-optimize, I don't see this problem. Is there a way to solve this problem? If ls doesn't return the
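The behaviour described above can be reproduced by toggling the option. A sketch, assuming a volume named `vol` mounted at `/mnt/vol` (both names are examples):

```shell
# Enable the optimization: "ls" gets faster, but per the report above
# the first listing after enabling it may come back empty
gluster volume set vol cluster.readdir-optimize on
ls /mnt/vol   # first run: may return nothing
ls /mnt/vol   # second run: returns all file names

# Turning it off avoids the empty first listing, at some cost to "ls" speed
gluster volume set vol cluster.readdir-optimize off
```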
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...0 us 0.00 us 3 FORGET
> 0.00  0.00 us    0.00 us    0.00 us     1227  RELEASE
> 0.00  0.00 us    0.00 us    0.00 us     1035  RELEASEDIR
> 0.00  827.00 us  619.00 us  1199.00 us  10    READDIRP
> 0.00  98.38 us   30.00 us   535.00 us   144   ENTRYLK
> 0.00  180.11 us  121.00 us  257.00 us   94    REMOVEXATTR
> 0.00  208.03 us  23.00 us   1980.00 us  182   GETXATTR
> 0.00  1212.71 us...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...3 FORGET
>> 0.00  0.00 us    0.00 us    0.00 us     1227  RELEASE
>> 0.00  0.00 us    0.00 us    0.00 us     1035  RELEASEDIR
>> 0.00  827.00 us  619.00 us  1199.00 us  10    READDIRP
>> 0.00  98.38 us   30.00 us   535.00 us   144   ENTRYLK
>> 0.00  180.11 us  121.00 us  257.00 us   94    REMOVEXATTR
>> 0.00  208.03 us  23.00 us   1980.00 us  182   GETXATTR
>>...
2017 Jul 11
2
Extremely slow du
...2017 at 4:57 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> Hi Mohammad,
>
> A lot of time is being spent in addressing metadata calls as
> expected. Can you consider testing out with 3.11 with md-cache [1]
> and readdirp [2] improvements?
>
> Adding Poornima and Raghavendra who worked on these enhancements to
> help out further.
>
> Thanks,
> Vijay
>
> [1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
> <https://gluster.readthedocs.io/en/latest/r...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...
>>> 0.00  0.00 us    0.00 us    0.00 us     1227  RELEASE
>>> 0.00  0.00 us    0.00 us    0.00 us     1035  RELEASEDIR
>>> 0.00  827.00 us  619.00 us  1199.00 us  10    READDIRP
>>> 0.00  98.38 us   30.00 us   535.00 us   144   ENTRYLK
>>> 0.00  180.11 us  121.00 us  257.00 us   94    REMOVEXATTR
>>> 0.00  208.03 us  23.00 us   1980.00 us  182
>>...
2018 May 30
0
RDMA inline threshold?
...ectory > yields a "Transport endpoint not connected" error. The effect is, that "ls" > only shows some files, but not all. > > The respective log file shows this error message: > > [2018-05-20 20:38:25.114978] W [MSGID: 114031] [client-rpc-fops.c:2578:client3_3_readdirp_cbk] > 0-glurch-client-0: remote operation failed [Transport endpoint is not > connected] > [2018-05-20 20:38:27.732796] W [MSGID: 103046] > [rdma.c:4089:gf_rdma_process_recv] 0-rpc-transport/rdma: peer ( > 10.100.245.18:49153), couldn't encode or decode the msg properly or write...