higkoohk
2013-Aug-14 08:50 UTC
[Gluster-users] The file size increased much when copy into a stripe volume ?
Copy a file onto glusterfs and you'll find the file size is much bigger!

# gluster --version
glusterfs 3.4.0 built on Aug 6 2013 11:17:07

1. Create a file: `dd if=/dev/zero of=/data/20Gfile bs=1G count=20`
2. Copy the file to the glusterfs mount point: `cp /data/20Gfile /mnt/stripe/`
3. Then use ls/du/md5sum to check the file size and hash; you will see many differences.

----------
# du -sh /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
20G   /data/20Gfile                    (source file)
20G   /data/glusterfs/stripe/20Gfile   (real storage dir on the brick)
160G  /mnt/stripe/20Gfile              (gluster mount dir)
----------
# ls -l /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
-rw-r--r-- 1 root root 21474836480 Aug 14 16:25 /data/20Gfile
-rw-r--r-- 2 root root 21473918976 Aug 14 16:32 /data/glusterfs/stripe/20Gfile
-rw-r--r-- 1 root root 21474836480 Aug 14 16:32 /mnt/stripe/20Gfile
----------
# md5sum /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
24eeb2845cbfda238b78fa165c21607d  /data/20Gfile
f958a2de2f03f07a374300ae565f6d29  /data/glusterfs/stripe/20Gfile
24eeb2845cbfda238b78fa165c21607d  /mnt/stripe/20Gfile
----------
The hash of /data/glusterfs/stripe/20Gfile is also different on each node.
----------
Should they all be the same as each other?
Why isn't the file split across the 8 nodes?

Info:
Linux agent25.higkoo.org 2.6.32-358.2.1.el6.x86_64
gluster volume create stripe stripe 8 agent{25,26,27,28,29,30,31,32}.higkoo.org:/data/glusterfs/stripe
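[Editorial note] The du/ls gap above is the classic apparent-size vs. allocated-blocks distinction: `ls -l` reports the file's logical length, while `du` reports disk blocks actually in use, and on a striped volume the mount-point `du` sums allocation across all bricks. A minimal local sketch of the distinction, using a sparse file (no GlusterFS required; `/tmp/sparse_demo` is an arbitrary demo path):

```shell
# Create a 1 GiB sparse file: the logical size is 1 GiB,
# but almost no disk blocks are actually allocated.
truncate -s 1G /tmp/sparse_demo

ls -l /tmp/sparse_demo                  # size column: 1073741824 (apparent size)
du -h /tmp/sparse_demo                  # near 0 (blocks actually allocated)
du -h --apparent-size /tmp/sparse_demo  # 1.0G (matches ls -l again)

rm /tmp/sparse_demo
```

If the backend filesystem later fills those holes (e.g. via preallocation), `du` grows while `ls -l` stays the same, which is consistent with the 20G vs. 160G numbers seen here.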
Brian Foster
2013-Aug-14 10:54 UTC
[Gluster-users] The file size increased much when copy into a stripe volume ?
On 08/14/2013 04:50 AM, higkoohk wrote:
> copy a file to glusterfs, and you'll find the file size much bigger!
>
> # gluster --version
> glusterfs 3.4.0 built on Aug 6 2013 11:17:07
>
> 1. Create a file `dd if=/dev/zero of=/data/20Gfile bs=1G count=20`
> 2. Copy the file to mount point of glusterfs `cp /data/20Gfile /mnt/stripe/`
> 3. The use ls/du/md5sum to see the file size and file hash, you will see
> many diff.
>
> ----------
> # du -sh /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
> 20G   /data/20Gfile                    (Source file)
> 20G   /data/glusterfs/stripe/20Gfile   (Real Storage Dir)
> 160G  /mnt/stripe/20Gfile              (Mount Gluster Dir)
> ----------
> # ls -l /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
> -rw-r--r-- 1 root root 21474836480 Aug 14 16:25 /data/20Gfile
> -rw-r--r-- 2 root root 21473918976 Aug 14 16:32 /data/glusterfs/stripe/20Gfile
> -rw-r--r-- 1 root root 21474836480 Aug 14 16:32 /mnt/stripe/20Gfile
> ----------
> # md5sum /data/20Gfile /data/glusterfs/stripe/20Gfile /mnt/stripe/20Gfile
> 24eeb2845cbfda238b78fa165c21607d  /data/20Gfile
> f958a2de2f03f07a374300ae565f6d29  /data/glusterfs/stripe/20Gfile
> 24eeb2845cbfda238b78fa165c21607d  /mnt/stripe/20Gfile
> ----------
> The file hash of /data/glusterfs/stripe/20Gfile on each node are not
> same too.
> ----------

On a striped volume, each node should contain a unique subset of the
original file. The content/md5sum won't match across nodes. So long as
the content of the source file matches what's stored on the mount,
things should be Ok.

Are you running on top of XFS? If so, the extra space is probably
preallocation in the backend filesystem. Can you run the following
command and retry your test (copy the file again)?

    gluster volume set <volname> cluster.stripe-coalesce enable

Brian

> Are they should same as each other?
> Why not the file split into 8 node?
>
> Info:
> Linux agent25.higkoo.org 2.6.32-358.2.1.el6.x86_64
> gluster volume create stripe stripe 8
> agent{25,26,27,28,29,30,31,32}.higkoo.org:/data/glusterfs/stripe
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
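[Editorial note] Brian's point that each brick holds a unique, interleaved subset of the file, so per-brick md5sums differ while the reassembled file on the mount matches the original, can be illustrated without a cluster. The sketch below simulates round-robin striping by hand (GlusterFS stripes in fixed-size chunks, 128 KiB by default) across two fake "bricks"; the /tmp paths and the 2-brick count are invented for the demo:

```shell
# A 4-chunk source file (4 x 128 KiB of random data).
dd if=/dev/urandom of=/tmp/orig bs=128k count=4 2>/dev/null
mkdir -p /tmp/brick0 /tmp/brick1

# Round-robin: chunk i goes to brick (i % 2), at slot (i / 2) in that brick's file.
for i in 0 1 2 3; do
  dd if=/tmp/orig of=/tmp/brick$((i % 2))/file bs=128k \
     skip=$i seek=$((i / 2)) count=1 conv=notrunc 2>/dev/null
done

# Each brick's file hashes differently from the original...
md5sum /tmp/orig /tmp/brick0/file /tmp/brick1/file

# ...but interleaving the chunks back together reproduces it exactly.
for i in 0 1 2 3; do
  dd if=/tmp/brick$((i % 2))/file of=/tmp/rebuilt bs=128k \
     skip=$((i / 2)) seek=$i count=1 conv=notrunc 2>/dev/null
done
cmp /tmp/orig /tmp/rebuilt && echo "reassembled copy matches"
```

This mirrors the md5sum output in the original report: the brick-side hash differs from the source, but the hash read through the mount matches it.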