符永涛
2013-Feb-22 10:30 UTC
[Gluster-users] help posix_fallocate too slow on glusterfs client
Dear gluster experts,

Recently I have encountered a problem with posix_fallocate performance on a glusterfs client. I use posix_fallocate to allocate a file of a specified size on the client. For example, creating a file of 1907658896 bytes takes about 20 seconds on glusterfs, but less than 1 second on local xfs or ext4. What is the problem, and how can I improve posix_fallocate performance on glusterfs? Thank you very much.

BTW, the volume info is as below:

sudo gluster volume info volume_e
Volume Name: volume_e
Type: Distributed-Replicate
Volume ID: 81702024-f327-4ae1-b06a-1f2b877d5ebb
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs-test-dev001.qiyi.virtual:/mnt/xfsd/volume_e
Brick2: glusterfs-test-dev002.qiyi.virtual:/mnt/xfsd/volume_e
Brick3: glusterfs-test-dev003.qiyi.virtual:/mnt/xfsd/volume_e
Brick4: glusterfs-test-dev004.qiyi.virtual:/mnt/xfsd/volume_e
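A likely cause: when the underlying filesystem (or the FUSE mount) does not support the fallocate() syscall, glibc emulates posix_fallocate by writing a byte into every block of the requested range, and over a networked FUSE mount those per-block writes are slow. A minimal timing sketch to compare a local path against the glusterfs mount point (the path below is hypothetical; the size is reduced from the ~1.8 GB in the report to keep the test quick):

```python
import os
import time

# Hypothetical path; point this at a file on the glusterfs mount to reproduce.
path = "/tmp/fallocate_test"
size = 16 * 1024 * 1024  # the original report used 1907658896 bytes (~1.8 GB)

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
t0 = time.monotonic()
os.posix_fallocate(fd, 0, size)  # same libc call the application makes
elapsed = time.monotonic() - t0

# The file should now be exactly the requested size.
assert os.fstat(fd).st_size == size
print(f"posix_fallocate of {size} bytes took {elapsed:.3f}s")
os.close(fd)
os.remove(path)
```

Running this once on a local filesystem and once on the glusterfs mount should show whether the slowdown is in posix_fallocate itself or elsewhere in the application.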
Sejal1 S
2013-Feb-22 10:42 UTC
[Gluster-users] Sync status of underlying bricks for replicated volume
Hi Gluster Experts,

I am trying to integrate glusterfs as a replication file system in my product. As background, I will be creating a distributed replicated (glusterfs) volume on two bricks present on two different servers. Can you please guide me on how to find out the sync status of all bricks under one volume, i.e., at a given point in time, whether the underlying bricks are in sync, out of sync, or still synchronizing? Please note that I am new to glusterfs.

Thanks in anticipation.

Regards,
Sejal
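For a replicated volume, the self-heal daemon's pending-heal queue can serve as a rough in-sync indicator: empty output for every brick means the replicas have no pending heals. A sketch using the standard gluster CLI, assuming a hypothetical volume name volume_r:

```shell
# List entries that still need healing on each brick; empty output per
# brick means that replica set is in sync.
gluster volume heal volume_r info

# Entries stuck in split-brain, which will not sync automatically:
gluster volume heal volume_r info split-brain
```

Polling `heal info` periodically (or after a brick comes back online) gives the IN-SYNC / OUT-OF-SYNC / syncing-in-progress distinction you describe.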