Hi all,

I have a rather simple question: there are directories that contain a lot of small files in a 2x replica set, accessed natively on the clients. Because of the number of files per directory, listing the directory contents from the clients fails.

If the directories are moved or deleted, either natively (through a client mount) or directly on the servers' view of the bricks, how does GlusterFS converge, or "heal" if you can call it that, so that the directories end up emptied or moved everywhere?

I am running Glusterfs-server and Glusterfs-client version 3.10.12.

To add more detail: we learned the hard way that our app is shipping too many small files into these directories, accumulating daily, and they are served out by an nginx.

Here is a little more info:

# gluster volume info

Volume Name: gv1
Type: Replicate
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: IMG-01:/images/storage/brick1
Brick2: IMG-02:/images/storage/brick1
Options Reconfigured:
nfs.disable: true
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
server.statedump-path: /tmp
performance.readdir-ahead: on

# gluster volume status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick IMG-01:/images/storage/brick1         49152     0          Y       3577
Brick IMG-02:/images/storage/brick1         49152     0          Y       21699
Self-heal Daemon on localhost               N/A       N/A        Y       24813
Self-heal Daemon on IMG-01                  N/A       N/A        Y       3560

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
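P.S. Would the stock heal commands be the right way to follow progress while the directories are being cleaned up? For example, run on either server:

# gluster volume heal gv1 info     (lists, per brick, the entries still awaiting heal)
# gluster volume heal gv1 full     (asks the self-heal daemon to crawl the whole volume, not just the tracked indices)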
Vlad Kopylov
2018-Jul-05 02:06 UTC
[Gluster-users] deletion of files in gluster directories
If you delete those from the bricks it will start healing them - restoring them from the other bricks.

I have a similar issue with email storage, which uses the maildir format with millions of small files; doing the delete on the server takes days. Sometimes it is worth recreating the volume instead: wiping .glusterfs on the bricks, deleting the files on the bricks, creating the volume again, and repopulating .glusterfs by querying the attrs.

https://lists.gluster.org/pipermail/gluster-users/2018-July/034310.html
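For that last step, a sketch of one common way to do it, assuming the recreated volume is mounted at /mnt/gv1 (the mount path here is just an example): walk the client mount and stat every entry, which forces a lookup on each file and makes gluster assign the gfids and rebuild the .glusterfs links:

# find /mnt/gv1 -noleaf -print0 | xargs --null stat > /dev/null

Run it once after the bricks are repopulated. And note that ordinary deletions are safest done through a client mount; removing files from only one brick just means the heal brings them back from the other one.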