Dear Users,

I'm facing a new problem on our gluster volume (v. 3.12.14).
Sometimes "ls", executed in a particular directory, returns empty output.
The "ls" output is empty, but I know that the directory contains some
files and subdirectories. In fact, if I run "ls" against a specific file
in the same folder, I can see that the file is there.

In a few words:

"ls" executed in a particular folder returns empty output;
"ls filename" executed in the same folder works fine.

Is there something I can do to identify the cause of this issue?
You can find below some information about the volume.

Thank you in advance,
Mauro Tridici

[root at s01 ~]# gluster volume info

Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x (4 + 2) = 72
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Brick37: s04-stg:/gluster/mnt1/brick
Brick38: s05-stg:/gluster/mnt1/brick
Brick39: s06-stg:/gluster/mnt1/brick
Brick40: s04-stg:/gluster/mnt2/brick
Brick41: s05-stg:/gluster/mnt2/brick
Brick42: s06-stg:/gluster/mnt2/brick
Brick43: s04-stg:/gluster/mnt3/brick
Brick44: s05-stg:/gluster/mnt3/brick
Brick45: s06-stg:/gluster/mnt3/brick
Brick46: s04-stg:/gluster/mnt4/brick
Brick47: s05-stg:/gluster/mnt4/brick
Brick48: s06-stg:/gluster/mnt4/brick
Brick49: s04-stg:/gluster/mnt5/brick
Brick50: s05-stg:/gluster/mnt5/brick
Brick51: s06-stg:/gluster/mnt5/brick
Brick52: s04-stg:/gluster/mnt6/brick
Brick53: s05-stg:/gluster/mnt6/brick
Brick54: s06-stg:/gluster/mnt6/brick
Brick55: s04-stg:/gluster/mnt7/brick
Brick56: s05-stg:/gluster/mnt7/brick
Brick57: s06-stg:/gluster/mnt7/brick
Brick58: s04-stg:/gluster/mnt8/brick
Brick59: s05-stg:/gluster/mnt8/brick
Brick60: s06-stg:/gluster/mnt8/brick
Brick61: s04-stg:/gluster/mnt9/brick
Brick62: s05-stg:/gluster/mnt9/brick
Brick63: s06-stg:/gluster/mnt9/brick
Brick64: s04-stg:/gluster/mnt10/brick
Brick65: s05-stg:/gluster/mnt10/brick
Brick66: s06-stg:/gluster/mnt10/brick
Brick67: s04-stg:/gluster/mnt11/brick
Brick68: s05-stg:/gluster/mnt11/brick
Brick69: s06-stg:/gluster/mnt11/brick
Brick70: s04-stg:/gluster/mnt12/brick
Brick71: s05-stg:/gluster/mnt12/brick
Brick72: s06-stg:/gluster/mnt12/brick
Options Reconfigured:
disperse.eager-lock: off
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
cluster.server-quorum-type: server
features.default-soft-limit: 90
features.quota-deem-statfs: on
performance.io-thread-count: 16
disperse.cpu-extensions: auto
performance.io-cache: off
network.inode-lru-limit: 50000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
performance.parallel-readdir: off
performance.readdir-ahead: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
nfs.disable: on
transport.address-family: inet
cluster.quorum-type: none
cluster.min-free-disk: 10
performance.client-io-threads: on
features.quota: on
features.inode-quota: on
features.bitrot: on
features.scrub: Active
network.ping-timeout: 0
cluster.brick-multiplex: off
cluster.server-quorum-ratio: 51
Nithya Balachandran
2019-Jan-18 09:14 UTC
[Gluster-users] invisible files in some directory
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici <mauro.tridici at cmcc.it> wrote:

> Dear Users,
>
> I'm facing a new problem on our gluster volume (v. 3.12.14).
> Sometimes "ls", executed in a particular directory, returns empty output.
> The "ls" output is empty, but I know that the directory contains some
> files and subdirectories. In fact, if I run "ls" against a specific file
> in the same folder, I can see that the file is there.
>
> In a few words:
>
> "ls" executed in a particular folder returns empty output;
> "ls filename" executed in the same folder works fine.
>
> Is there something I can do to identify the cause of this issue?
>

Yes, please take a tcpdump of the client when running the ls on the
problematic directory and send it across.

tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22

We have seen such issues when the gfid handle for the directory is missing
on the bricks.

Regards,
Nithya

> You can find below some information about the volume.
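[Editor's note] The gfid-handle check mentioned above can be scripted. On a brick, a directory's handle lives at .glusterfs/<first two hex chars>/<next two hex chars>/<full gfid>; if that symlink is absent, lookups via the handle fail on that brick. A minimal sketch, assuming bash and the getfattr tool; gfid_handle_path is a hypothetical helper (not a gluster command), and the sample gfid below is just the thread's volume ID reused as an example UUID, not the affected directory's real gfid:

```shell
# Map a gfid to its handle path under a brick's .glusterfs directory:
# <brick>/.glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>
gfid_handle_path() {
  local brick="$1" gfid="$2"
  printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "${gfid:0:2}" "${gfid:2:2}" "$gfid"
}

# On each brick server (as root), read the directory's gfid xattr and
# test whether the handle exists, e.g.:
#   hex=$(getfattr -n trusted.gfid -e hex --only-values /gluster/mnt1/brick/path/to/dir)
#   gfid=$(echo "${hex#0x}" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')
#   test -e "$(gfid_handle_path /gluster/mnt1/brick "$gfid")" || echo "gfid handle missing"

# Example (sample UUID only):
gfid_handle_path /gluster/mnt1/brick a28d88c5-3295-4e35-98d4-210b3af9358c
# prints /gluster/mnt1/brick/.glusterfs/a2/8d/a28d88c5-3295-4e35-98d4-210b3af9358c
```

Repeating the check across all bricks hosting the directory shows whether the handle is missing on some subset of them.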
> Thank you in advance,
> Mauro Tridici
>
> [gluster volume info output snipped; quoted verbatim from the original
> message above]

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users