Displaying 20 results from an estimated 22 matches for "namelookup".
2017 Sep 21
2
Performance drop from 3.8 to 3.10
...luster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
performance.readdir-ahead: on
performance.low-prio-threads: 32
user.cifs: off
performance.flush-behind: on
server.event-threads: 4
client.event-threads: 4
server.allow-ins...
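The option listings quoted in these results are the "Options Reconfigured" section of 'gluster volume info'. As a minimal sketch of how a single option such as nfs.addr-namelookup is changed and re-checked (the volume name "myvol" is a placeholder, not one from these threads):
# gluster volume set myvol nfs.addr-namelookup off
# gluster volume info myvol | grep addr-namelookup
nfs.addr-namelookup: off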
2013 Mar 16
1
different size of nodes
...546-cc2e-4a27-a448-17befda04726
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gl0:/mnt/brick1/export
Brick2: gl1:/mnt/brick1/export
Brick3: gl2:/mnt/brick1/export
Brick4: gl3:/mnt/brick1/export
Brick5: gl4:/mnt/brick1/export
Options Reconfigured:
nfs.mount-udp: on
nfs.addr-namelookup: off
nfs.ports-insecure: on
nfs.port: 2049
cluster.stripe-coalesce: on
nfs.disable: off
performance.flush-behind: on
performance.io-thread-count: 64
performance.quick-read: on
performance.stat-prefetch: on
performance.io-cache: on
performance.write-behind: on
performance.read-ahead: on
performance....
2018 Apr 04
0
Invisible files and directories
...fs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
We had a scheduled reboot yesterday.
Kind r...
2011 Jul 10
0
glusterfs for Xen Server VM Images
...cks: 2
Transport-type: tcp
Bricks:
Brick1: scluster01.corp.assureprograms.com.au:/gluster-export/storage-pool-01
Brick2: scluster02.corp.assureprograms.com.au:/gluster-export/storage-pool-01
Options Reconfigured:
nfs.volume-access: read-write
auth.allow: 192.168.10.*
nfs.ports-insecure: on
nfs.addr-namelookup: on
server.allow-insecure: on
nfs.export-volumes: on
As you can see above we have applied some volume configuration changes like "nfs.volume-access: read-write". Does anyone have any suggested configuration changes that can be made to glusterfs to improve the performance of the I/O in this...
2012 Dec 20
0
nfs.export-dirs
...4ab958-1599-4e82-9358-1eea282d4025
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: tipper:/mnt/brick1
Options Reconfigured:
nfs.export-dirs: on
nfs.export-volumes: off
nfs.export-dir: /install
nfs.port: 2049
nfs.ports-insecure: off
nfs.disable: off
nfs.mount-udp: on
nfs.addr-namelookup: off
nfs.register-with-portmap: on
# showmount -e localhost
Export list for localhost:
/data/install *
# gluster volume set data nfs.export-dir /install/iso
Set volume successful
# showmount -e localhost
Export list for localhost:
/data/install/iso *
# gluster --version
glusterfs 3.3.1 built o...
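The paths listed by showmount are relative to the volume root ("data" here is the volume name set with 'gluster volume set data ...', not a filesystem path), so a client would mount the exported subdirectory over NFSv3 roughly as follows - the mount point /mnt/iso and the exact mount options are assumptions:
# mount -t nfs -o vers=3,proto=tcp tipper:/data/install/iso /mnt/iso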
2017 Sep 22
0
Performance drop from 3.8 to 3.10
...d-block-size: 64MB
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.stat-prefetch: on
> performance.strict-write-ordering: off
> nfs.enable-ino32: off
> nfs.addr-namelookup: off
> nfs.disable: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> features.shard: on
> cluster.data-self-heal: on
> performance.readdir-ahead: on
> performance.low-prio-threads: 32
> user.cifs: off
> performance.flush-behind: on
> server.event-t...
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
...cks: 1
Brick2: 10.10.60.169:/exports/brick-cold/tiered-sharded-vol
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.shard: on
features.shard-block-size: 64MB
server.allow-insecure: on
performance.quick-read: off
performance.stat-prefetch: off
nfs.disable: on
nfs.addr-namelookup: off
performance.readdir-ahead: on
snap-activate-on-create: enable
cluster.enable-shared-storage: disable
~ Jeff Byers ~
2018 Apr 04
2
Invisible files and directories
Right now the volume is running with
readdir-optimize off
parallel-readdir off
On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Serg,
>
> Do you mean that turning off readdir-optimize did not work? Or did you
> mean turning off parallel-readdir did not work?
>
>
>
> On 4 April 2018 at 10:48, Serg Gulko <s.gulko at
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users:
Sorry for the long post - I have run out of ideas here. Kindly let me know if I am looking at the right places for logs, and any suggested actions. Thanks.
A sudden power loss caused a hard reboot - now the volume does not start.
GlusterFS 3.3.1 on CentOS 6.1, transport: TCP
sharing volume over NFS for VM storage - VHD Files
Type: distributed - only 1 node (brick)
XFS (LVM)
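As a sketch of the usual first checks in this situation (the volume name is a placeholder, and the log path shown is the default glusterd log location on 3.3.x-era installs - it may differ elsewhere):
# gluster volume status
# gluster volume start VOLNAME force
# tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log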
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
...luster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
performance.readdir-ahead: on
performance.low-prio-threads: 32
user.cifs: off
performance.flush-behind: on
--
Lindsay
2018 Apr 23
0
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi,
What is the output of 'gluster volume info' for this volume?
Regards,
Nithya
On 23 April 2018 at 18:52, Frank Ruehlemann <ruehlemann at itsc.uni-luebeck.de>
wrote:
> Hi,
>
> after 2 years of running GlusterFS without major problems, we're facing
> some strange errors lately.
>
> After updating to 3.12.7, some users reported at least 4 broken
> directories
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...ad: disable
performance.readdir-ahead: on
performance.io-thread-count: 64
performance.io-cache: on
performance.client-io-threads: on
server.outstanding-rpc-limit: 128
server.event-threads: 3
client.event-threads: 3
performance.cache-size: 32MB
transport.address-family: inet
nfs.disable: on
nfs.addr-namelookup: off
nfs.export-volumes: on
nfs.rpc-auth-allow: 192.168.140.*
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-samba-metadata: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 100...
2018 May 23
0
cluster brick logs filling after upgrade from 3.6 to 3.12
...no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16...
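Listings in this two-column form (option name, then value, no colon) show the full option table rather than just "Options Reconfigured"; on recent releases the same values can be queried directly - "myvol" is a placeholder volume name:
# gluster volume get myvol all | grep addr-namelookup
nfs.addr-namelookup                     off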
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...formance.io-thread-count: 64
> performance.io-cache: on
> performance.client-io-threads: on
> server.outstanding-rpc-limit: 128
> server.event-threads: 3
> client.event-threads: 3
> performance.cache-size: 32MB
> transport.address-family: inet
> nfs.disable: on
> nfs.addr-namelookup: off
> nfs.export-volumes: on
> nfs.rpc-auth-allow: 192.168.140.*
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: on
> performance.cache-samba-metadata: on
> performance.cache-invalidation: on
> performance.md-cache-t...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...nce.readdir-ahead: on
performance.io-thread-count: 64
performance.io-cache: on
performance.client-io-threads: on
server.outstanding-rpc-limit: 128
server.event-threads: 3
client.event-threads: 3
performance.cache-size: 32MB
transport.address-family: inet
nfs.disable: on
nfs.addr-namelookup: off
nfs.export-volumes: on
nfs.rpc-auth-allow: 192.168.140.*
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-samba-metadata: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.in...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
>> performance.client-io-threads: on
>> server.outstanding-rpc-limit: 128
>> server.event-threads: 3
>> client.event-threads: 3
>> performance.cache-size: 32MB
>> transport.address-family: inet
>> nfs.disable: on
>> nfs.addr-namelookup: off
>> nfs.export-volumes: on
>> nfs.rpc-auth-allow: 192.168.140.*
>> features.cache-invalidation: on
>> features.cache-invalidation-timeout: 600
>> performance.stat-prefetch: on
>> performance.cache-samba-metadata: on
>> perfor...
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi,
after 2 years of running GlusterFS without major problems, we're facing
some strange errors lately.
After updating to 3.12.7, some users reported at least 4 broken
directories with some invisible files. The files are on the bricks and
don't start with a dot, but aren't visible in "ls". Clients can still
interact with them by using the explicit path.
More information:
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello,
Here is a speed test with a new setup we just made with Gluster 3.10; there are no other differences except glusterfs versus nfs. NFS is about 80 times faster:
root@app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top
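The NFS side of the comparison is not included in this excerpt; purely as an illustration of the form such a mount would take (whether the Gluster NFS server or NFS-Ganesha was used is not stated here, and the options shown are assumptions):
# mount -t nfs -o vers=3,proto=tcp 192.168.140.41:/www /var/www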
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all,
One more thing: we have 3 app servers with gluster on them, replicated on 3 different gluster nodes (so the gluster nodes are app servers at the same time). We could actually almost work locally if we didn't need to have the same files on the 3 nodes and redundancy :)
The initial cluster was created like this:
gluster volume create www replica 3 transport tcp
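The brick list is cut off in this excerpt; the general form of a replica 3 create is shown below, with placeholder hostnames and brick paths rather than the poster's actual values:
gluster volume create www replica 3 transport tcp node1:/data/brick/www node2:/data/brick/www node3:/data/brick/www
gluster volume start www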
2017 Sep 29
2
nfs-ganesha locking problems
...(null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-a...