Hi, I just installed glusterfs 3.0.3 on three servers and copied 300GB of small files, spread over thousands of directories, from our old file system to glusterfs. The 300GB of files are replicated to two of the three servers. The copy took about 16 hours. However, after it finished, the glusterfs process still held about 700MB of memory, which is about 30% of the total memory. Could someone tell me why? And is there a solution for this?

By the way, clients are supposed to use the internal IP addresses to connect to the servers, but they sometimes connect via the external IP addresses instead. How can I solve this problem?

The server vol file and error messages follow.

volume posix1
  type storage/posix
  option directory /gfs/r2/f1
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow 192.168.0.*
  option transport.socket.listen-port 6991
  option transport.socket.nodelay on
  subvolumes brick1
end-volume

[2010-04-12 14:20:14] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.135.58.12:1017
[2010-04-12 14:20:14] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.135.58.12:1017
[2010-04-12 14:20:14] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1016
[2010-04-12 14:20:14] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1016
[2010-04-12 14:20:14] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1017 disconnected
[2010-04-12 14:20:14] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1016 disconnected
[2010-04-12 14:20:14] N [server-helpers.c:842:server_connection_destroy] server-tcp: destroyed connection of s7.pikiware.com-4657-2010/04/12-08:09:39:823330-s2
[2010-04-12 14:20:24] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1013
[2010-04-12 14:20:24] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1013
[2010-04-12 14:20:24] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1012
[2010-04-12 14:20:24] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1012
[2010-04-12 14:20:24] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1013 disconnected
[2010-04-12 14:20:24] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1012 disconnected
[2010-04-12 14:20:24] N [server-helpers.c:842:server_connection_destroy] server-tcp: destroyed connection of s7.pikiware.com-4657-2010/04/12-08:09:39:823330-s2
[2010-04-12 14:20:34] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1009
[2010-04-12 14:20:34] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1009
[2010-04-12 14:20:34] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1008
[2010-04-12 14:20:34] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1008
[2010-04-12 14:20:34] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1009 disconnected
[2010-04-12 14:20:34] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1008 disconnected
[2010-04-12 14:20:34] N [server-helpers.c:842:server_connection_destroy] server-tcp: destroyed connection of s7.pikiware.com-4657-2010/04/12-08:09:39:823330-s2
[2010-04-12 14:20:44] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1001
[2010-04-12 14:20:44] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1001
[2010-04-12 14:20:44] E [authenticate.c:234:gf_authenticate] auth: no authentication module is interested in accepting remote-client 66.133.59.19:1000
[2010-04-12 14:20:44] E [server-protocol.c:5862:mop_setvolume] server-tcp: Cannot authenticate client from 66.133.59.19:1000
[2010-04-12 14:20:44] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1001 disconnected
[2010-04-12 14:20:44] N [server-protocol.c:6788:notify] server-tcp: 66.133.59.19:1000 disconnected
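From the allow rule above, only 192.168.0.* is accepted, so I guess any connection that happens to leave a client through the external interface gets refused. A quick way to check which source address a client actually picks for a server (just a sketch; interface names and the .41 address are examples, not from our setup):

  # On a client: which source address does the kernel choose for this server?
  ip route get 192.168.0.31
  #   192.168.0.31 dev eth1 src 192.168.0.41 ...   <- "src" should be the internal address

  # On a server: where are the rejected connections really coming from?
  netstat -tn | grep ':6991'

  # Possible workaround (assumption, not verified here): also list the external
  # range in the allow rule, e.g.
  #   option auth.addr.brick1.allow 192.168.0.*,66.133.59.*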
Krzysztof Strasburger
2010-Apr-13 09:24 UTC
[Gluster-users] Memory usage high on server sides
On Tue, Apr 13, 2010 at 11:45:56AM +1000, Chris Jin wrote:
> Hi, I just installed glusterfs 3.0.3 on three servers and copied 300GB of
> small files, spread over thousands of directories, from our old file system
> to glusterfs. The 300GB of files are replicated to two of the three servers.
> The copy took about 16 hours. However, after it finished, the glusterfs
> process still held about 700MB of memory, which is about 30% of the total
> memory. Could someone tell me why? And is there a solution for this?

The problem is known (see, for example, the "old story..." thread), but there is no solution for now.

Krzysztof
Hi, I ran one more test today. The copy has now been running for 24 hours and the memory usage is about 800MB, 39.4% of the total, but this time there are no external-IP connection errors. Is this a memory leak?

On Tue, 2010-04-13 at 11:45 +1000, Chris Jin wrote:
> Hi, I just installed glusterfs 3.0.3 on three servers and copied 300GB of
> small files, spread over thousands of directories, from our old file system
> to glusterfs. The 300GB of files are replicated to two of the three servers.
> The copy took about 16 hours. However, after it finished, the glusterfs
> process still held about 700MB of memory, which is about 30% of the total
> memory. Could someone tell me why? And is there a solution for this?
>
> By the way, clients are supposed to use the internal IP addresses to
> connect to the servers, but they sometimes connect via the external IP
> addresses instead. How can I solve this problem?
>
> [server vol file and log output quoted in full in the first message - snipped]
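In case it is useful to anyone trying to reproduce this, a crude way to record whether the server processes keep growing during a long copy (just a sketch, nothing glusterfs-specific):

  # Log the resident size of every glusterfsd once a minute
  while true; do
      date
      ps -C glusterfsd -o pid,rss,vsz,%mem,args
      sleep 60
  done >> /var/log/glusterfsd-rss.log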
Hi Chris,

I would like your help in debugging this further. To start with, I would like to get the system information and the test information.

You mentioned you are copying data from your old system to the new system. The new system has 3 servers. Problems you saw:

1) High memory usage on the client where the gluster volume is mounted
2) High memory usage on the server
3) 2 days to copy 300 GB of data

Is that a correct summary of the problems you saw?

About the config, can you provide the following for both the old and new systems:

1) OS and kernel level on gluster servers and clients
2) Volume files from servers and clients
3) Filesystem type of the backend gluster subvolumes
4) How close to full the backend subvolumes are
5) The exact copy command. Did you mount the volumes from the old and new systems on a single machine and use cp, rsync, or some other method? If anything more than a plain cp, please send the exact command line you used.
6) How many files/directories (roughly) are in that 300GB of data? (This would help in reproducing it in-house with a smaller test bed.)
7) Was there other load on the new or old system?
8) Any other patterns you noticed.

Thanks a lot for helping to debug the problem.

Regards,
Tejas.

----- Original Message -----
From: "Chris Jin" <chris at pikicentral.com>
To: "Krzysztof Strasburger" <strasbur at chkw386.ch.pwr.wroc.pl>
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Thursday, April 15, 2010 7:52:35 AM
Subject: Re: [Gluster-users] Memory usage high on server sides

Hi Krzysztof,

Thanks for your replies. And you are right, the server process should be glusterfsd. But I did mean servers. After two days of copying, the two processes took almost 70% of the total memory. I am just thinking that one more such process would bring our servers down.

$ps auxf
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
root     26472  2.2 29.1 718100 600260 ?    Ssl  Apr09 184:09 glusterfsd -f /etc/glusterfs/servers/r2/f1.vol
root     26485  1.8 39.8 887744 821384 ?    Ssl  Apr09 157:16 glusterfsd -f /etc/glusterfs/servers/r2/f2.vol

Meanwhile, the client side seems OK.

$ps auxf
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
root     19692  1.3  0.0 262148   6980 ?    Ssl  Apr12  61:33 /sbin/glusterfs --log-level=NORMAL --volfile=/u2/git/modules/shared/glusterfs/clients/r2/c2.vol /gfs/r2/f2

Any ideas?

On Wed, 2010-04-14 at 10:16 +0200, Krzysztof Strasburger wrote:
> On Wed, Apr 14, 2010 at 06:33:15AM +0200, Krzysztof Strasburger wrote:
> > On Wed, Apr 14, 2010 at 09:22:09AM +1000, Chris Jin wrote:
> > > Hi, I ran one more test today. The copy has now been running for 24 hours
> > > and the memory usage is about 800MB, 39.4% of the total, but there are no
> > > external-IP connection errors. Is this a memory leak?
> > Seems to be, and a very persistent one. Present in glusterfs at least
> > since version 1.3 (the oldest I used).
> > Krzysztof
> I corrected the subject, as the memory usage is high on the client side
> (glusterfs is the client process, glusterfsd is the server and it never
> used that much memory on my site).
> I did some more tests with logging. According to my old valgrind report,
> huge amounts of memory were still in use at exit, and these were allocated
> in __inode_create and __dentry_create. So I added log points in these
> functions and performed the "du test", i.e. mounted the glusterfs directory
> containing a large number of files with the log level set to TRACE, ran du
> on it, then echo 3 > /proc/sys/vm/drop_caches, waited a while until the log
> file stopped growing, and finally unmounted and checked the (huge) logfile:
>
> prkom13:~# grep inode_create /var/log/glusterfs/root-loop-test.log | wc -l
> 151317
> prkom13:~# grep inode_destroy /var/log/glusterfs/root-loop-test.log | wc -l
> 151316
> prkom13:~# grep dentry_create /var/log/glusterfs/root-loop-test.log | wc -l
> 158688
> prkom13:~# grep dentry_unset /var/log/glusterfs/root-loop-test.log | wc -l
> 158688
>
> Do you see? Everything seems to be OK: a number of inodes created, one less
> destroyed (probably the root inode), and the same number of dentries created
> and destroyed. The memory should be freed (there are calls to free in the
> inode_destroy and dentry_unset functions), but it is not. Any ideas what is
> going on? Glusterfs developers - is something kept in the lists where inodes
> and dentries live, interleaved with these inodes and dentries, so that no
> memory page can be unmapped?
> We should also look at the kernel - why does it not send forgets immediately,
> even with drop_caches=3?
> Krzysztof
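For reference, the "du test" quoted above boils down to something like the following. This is a sketch only: the mount point, volfile path, and log file name are placeholders, and the grep counts assume the extra log points Krzysztof added in __inode_create/__dentry_create, so a stock build will not print those lines.

  # Mount with client-side tracing enabled (paths are examples)
  glusterfs --log-level=TRACE --volfile=/etc/glusterfs/client.vol /mnt/test

  du -s /mnt/test                       # walk every file and directory once
  echo 3 > /proc/sys/vm/drop_caches     # ask the kernel to drop dentries/inodes
  sleep 60                              # give the kernel time to send forgets
  umount /mnt/test

  # Compare creations against destructions in the trace log
  grep -c inode_create  /var/log/glusterfs/mnt-test.log
  grep -c inode_destroy /var/log/glusterfs/mnt-test.log
  grep -c dentry_create /var/log/glusterfs/mnt-test.log
  grep -c dentry_unset  /var/log/glusterfs/mnt-test.log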
Thanks, Chris. This is very good information to start with.

To summarize: when you copy a lot of small files from an NFS mount to a glusterfs mount, the copy is slow, and at the end of it the glusterfs servers are still holding a lot of memory. The clients do seem to release the memory, though. No caching translators are being used. I will reproduce this in-house and work on it here, and will get back if more information is required.

Thanks a lot for your help.

Regards,
Tejas.

----- Original Message -----
From: "Chris Jin" <chris at pikicentral.com>
To: "Tejas N. Bhise" <tejas at gluster.com>
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Thursday, April 15, 2010 9:48:42 AM
Subject: Re: [Gluster-users] Memory usage high on server sides

Hi Tejas,

> Problems you saw:
>
> 1) High memory usage on the client where the gluster volume is mounted

Memory usage for the clients is 0% after copying.

$ps auxf
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
root     19692  1.3  0.0 262148   6980 ?    Ssl  Apr12  61:33 /sbin/glusterfs --log-level=NORMAL --volfile=/u2/git/modules/shared/glusterfs/clients/r2/c2.vol /gfs/r2/f2

> 2) High memory usage on the server

Yes.

$ps auxf
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
root     26472  2.2 29.1 718100 600260 ?    Ssl  Apr09 184:09 glusterfsd -f /etc/glusterfs/servers/r2/f1.vol
root     26485  1.8 39.8 887744 821384 ?    Ssl  Apr09 157:16 glusterfsd -f /etc/glusterfs/servers/r2/f2.vol

> 3) 2 days to copy 300 GB of data

More than 700GB. There are two folders. The first one is copied to server 1 and server 2, and the second one is copied to server 2 and server 3. The vol files are below.

> About the config, can you provide the following for both the old and new systems:
>
> 1) OS and kernel level on gluster servers and clients

Debian, kernel 2.6.18-6-amd64.

$uname -a
Linux fs2 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64 GNU/Linux

> 2) Volume files from servers and clients

##### Server vol file (f1.vol)
# The same settings for f2.vol and f3.vol, just different dirs and ports
# f1 f3 for Server 1, f1 f2 for Server 2, f2 f3 for Server 3
volume posix1
  type storage/posix
  option directory /gfs/r2/f1
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow 192.168.0.*
  option transport.socket.listen-port 6991
  option transport.socket.nodelay on
  subvolumes brick1
end-volume

##### Client vol file (c1.vol)
# The same settings for c2.vol and c3.vol
# s2 s3 for c2, s3 s1 for c3
volume s1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.31
  option transport.socket.nodelay on
  option transport.remote-port 6991
  option remote-subvolume brick1
end-volume

volume s2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.32
  option transport.socket.nodelay on
  option transport.remote-port 6991
  option remote-subvolume brick1
end-volume

volume mirror
  type cluster/replicate
  option data-self-heal off
  option metadata-self-heal off
  option entry-self-heal off
  subvolumes s1 s2
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 100MB
  option flush-behind off
  subvolumes mirror
end-volume

volume iocache
  type performance/io-cache
  option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
  option cache-timeout 1
  subvolumes writebehind
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1
  option max-file-size 256Kb
  subvolumes iocache
end-volume

volume statprefetch
  type performance/stat-prefetch
  subvolumes quickread
end-volume

> 3) Filesystem type of the backend gluster subvolumes

ext3

> 4) How close to full the backend subvolumes are

New 2TB hard disks in each server.

> 5) The exact copy command. Did you mount the volumes from the old and new
> systems on a single machine and use cp, rsync, or some other method? If
> anything more than a plain cp, please send the exact command line you used.

The old file system uses DRBD and NFS. The exact command is

sudo cp -R -v -p -P /nfsmounts/nfs3/photo .

> 6) How many files/directories (roughly) are in that 300GB of data? (This
> would help in reproducing it in-house with a smaller test bed.)

I cannot tell, but the file sizes are between 1KB and 200KB, averaging around 20KB.

> 7) Was there other load on the new or old system?

The old systems are still used for web servers. The new systems are on the same servers but use different hard disks.

> 8) Any other patterns you noticed.

Once, one client tried to connect to a server using the external IP address. Also, using the distribute translator across all three mirrors makes the system twice as slow as using three separately mounted folders.

Is this information enough?

Please take a look.

Regards,

Chris
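As a side note on the client vol file above, the backtick expression used for the io-cache size simply takes 20% of MemTotal and truncates it to whole megabytes; run on its own it looks like this (the 2GB figure is only an example):

  # Same expression as in the iocache volume: 20% of physical RAM, in MB
  grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.
  # On a machine with 2GB of RAM (MemTotal ~ 2097152 kB) this prints 409,
  # which ends up as "option cache-size 409MB"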
Hi Chris,

http://patches.gluster.com/patch/3151/

Can you please apply this patch and see if it works for you? (A rough apply/build sketch follows the quoted message below.)

Thanks

Regards,
Raghavendra Bhat

> Tejas,
>
> We still have hundreds of GBs to copy, and have not yet put the new file
> system to the test. So far the clients work all fine - I mean commands
> like ls, mkdir, touch, etc.
>
> Thanks again for your time.
>
> regards,
>
> Chris
>
> On Wed, 2010-04-14 at 23:04 -0600, Tejas N. Bhise wrote:
> > Chris,
> >
> > By the way, after the copy is done, how is the system responding to
> > regular access? In the sense, was the problem with the copy also
> > carried forward as more trouble seen with subsequent access of
> > data over glusterfs?
> >
> > Regards,
> > Tejas.
> >
> > ----- Original Message -----
> > From: "Chris Jin" <chris at pikicentral.com>
> > To: "Tejas N. Bhise" <tejas at gluster.com>
> > Cc: "gluster-users" <gluster-users at gluster.org>
> > Sent: Thursday, April 15, 2010 9:48:42 AM
> > Subject: Re: [Gluster-users] Memory usage high on server sides
> >
> > [Chris's earlier reply with the vol files and answers, quoted in full above - snipped]
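A rough outline for trying the patch, assuming glusterfs 3.0.3 was installed from source (the file name is arbitrary; grab the raw patch from the page linked above):

  # Sketch only: assumes a glusterfs 3.0.3 source tree
  cd glusterfs-3.0.3
  # save the raw patch from http://patches.gluster.com/patch/3151/ as 3151.patch, then:
  patch -p1 --dry-run < 3151.patch   # make sure it applies cleanly first
  patch -p1 < 3151.patch
  make && sudo make install
  # restart the glusterfsd processes on each server so the fix takes effect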