Good morning to all the community. I have a problem and I would like to ask for your kind help. Newly written files from an MPI application that uses InfiniBand disappear from the GlusterFS mount, but they are still present on a brick. Please, can anyone help me solve this problem?
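This is roughly how I see the symptom (the mount point and file name below are only illustrative, my jobs write many files):

# on a client node the file is not visible on the GlusterFS mount
# (assuming the volume is mounted at /scratch)
[root@wn001 ~]# ls -l /scratch/job_output/result.dat
ls: cannot access /scratch/job_output/result.dat: No such file or directory

# but the same file does exist directly under one of the bricks
[root@wn005 ~]# ls -l /bricks/brick1/gscratch0/job_output/result.dat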
I add some information about my cluster:

[root@wn001 glusterfs]# gluster volume info

Volume Name: scratch
Type: Distribute
Volume ID: fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e
Status: Started
Number of Bricks: 32
Transport-type: rdma
Bricks:
Brick1: ib-wn001:/bricks/brick1/gscratch0
Brick2: ib-wn002:/bricks/brick1/gscratch0
....
Brick31: ib-wn032:/bricks/brick1/gscratch0
Options Reconfigured:
features.scrub-freq: daily
features.scrub: Active
features.bitrot: on
cluster.nufa: on
performance.readdir-ahead: on
config.transport: rdma
nfs.disable: true
[root@wn001 glusterfs]#

[root@wn001 glusterfs]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ib-wn001:/bricks/brick1/gscratch0     0         49155      Y       3380
Brick ib-wn002:/bricks/brick1/gscratch0     0         49155      Y       3463
....
Brick ib-wn032:/bricks/brick1/gscratch0     0         49152      Y       3496
Bitrot Daemon on ib-wn001                   N/A       N/A        Y       9150
Scrubber Daemon on ib-wn001                 N/A       N/A        Y       9159
.....
Bitrot Daemon on ib-wn032                   N/A       N/A        Y       31107
Scrubber Daemon on ib-wn032                 N/A       N/A        Y       31114

Task Status of Volume scratch
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 38373576-dcb5-469f-9ae1-42c56ff445e5
Status               : completed
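If it helps, I can also post the output of checks like the ones below for one of the missing files (the file path is again just an example; the client log file name depends on where the volume is mounted):

# extended attributes of the file as stored on the brick (gfid, dht xattrs)
[root@wn005 ~]# getfattr -d -m . -e hex /bricks/brick1/gscratch0/job_output/result.dat

# client-side mount log for the volume
[root@wn001 ~]# tail -n 50 /var/log/glusterfs/scratch.log

# rebalance status, since a rebalance task is listed as completed
[root@wn001 ~]# gluster volume rebalance scratch status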