So I stopped a node to check the BIOS, and after it came back up the
rebalance kicked in. That is the kind of speed I was hoping for on a
normal write; the rebalance is much faster than my rsync/cp.
https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png
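
For anyone wanting to reproduce the comparison: I was watching progress
with the built-in commands below (the heal one assumes a 3.3 build,
since a replica had just been down):

   gluster volume rebalance gltvolume status
   gluster volume heal gltvolume info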
Best Regards
Ivan Dimitrov
On 8/10/12 1:23 PM, Ivan Dimitrov wrote:
> Hello
> What am I doing wrong?!?
>
> I have a test setup with 4 identical servers, each with 2 disks, in a
> distributed-replicated volume (replica 2). All servers are connected
> to a gigabit switch.
>
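Since everything crosses a single gigabit switch, it may be worth ruling
out the network first. Assuming iperf is installed on two of the nodes,
a rough check would be something like:

   iperf -s                      # on glt2, for example
   iperf -c glt2.network.net     # from glt1; ~900+ Mbit/s is the practical ceiling for 1 GbE

Anything far below that would point at the network rather than Gluster.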
> I am seeing really slow speeds in everything I do: slow writes, slow
> reads, not to mention random reads/writes.
>
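One thing worth stating explicitly, because it changes small-file
behaviour a lot, is how the client mounts the volume. I am assuming the
native FUSE client here, i.e. something like (server name only as an
example):

   mount -t glusterfs glt1.network.net:/gltvolume /home/gltvolume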
> Here is an example:
> random-files is a directory with 32768 files, average size 16 KB.
> [root@gltclient]:~# rsync -a /root/speedtest/random-files/ /home/gltvolume/
> ^^ This will take more than 3 hours.
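As an aside, a single large sequential write would show whether the
bottleneck is raw throughput or per-file overhead; a rough check (the
file name is just an example) could be:

   dd if=/dev/zero of=/home/gltvolume/bigfile.test bs=1M count=1024 conv=fsync

With 32768 small files, most of the time usually goes into per-file
lookups and creates on both replicas rather than into moving data.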
>
> On any of the servers, if I run "iostat", the disks are not loaded
> at all:
>
> https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png
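For what it is worth, the extended per-device view makes idle-vs-busy
easier to judge than the plain summary, e.g.:

   iostat -xm 5

which reports %util and await per disk, so you can see whether the
bricks are truly idle or just doing small scattered I/O.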
>
>
> The result is similar on all servers.
>
> Here is an example of a simple "ls" on the contents.
> [root@gltclient]:~# unalias ls
> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc -l
> 2.81 seconds
> 5393
>
> Almost 3 seconds to list 5,000 files?! When all 32,768 files are
> there, the ls will take around 35-45 seconds.
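My understanding (treat this as an assumption) is that each entry
listed through the FUSE mount also costs lookups against both replicas,
which is where the time goes. Listing without sorting, e.g.

   /usr/bin/time -f "%e seconds" ls -f /home/gltvolume/ | wc -l

would isolate how much of it is the directory traversal itself (ls -f
just skips sorting; it also implies -a, so the count comes out slightly
higher).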
>
> For comparison, the same directory on a local disk:
> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /root/speedtest/random-files/ | wc -l
> 1.45 seconds
> 32768
>
> [root@gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/* >/dev/null
> 190.50 seconds
>
> [root@gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/
> 126M /home/gltvolume/
> 75.23 seconds
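When the client is this slow while the bricks sit idle, the volume
profiler can show where the latency accumulates; roughly:

   gluster volume profile gltvolume start
   (re-run the rsync / ls tests)
   gluster volume profile gltvolume info

The per-brick breakdown of LOOKUP/CREATE/WRITE latencies usually makes
it clear whether the time is spent in network round-trips or on disk.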
>
>
> Here is the volume information.
>
> [root@glt1]:~# gluster volume info
>
> Volume Name: gltvolume
> Type: Distributed-Replicate
> Volume ID: 16edd852-8d23-41da-924d-710b753bb374
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: 1.1.74.246:/home/sda3
> Brick2: glt2.network.net:/home/sda3
> Brick3: 1.1.74.246:/home/sdb1
> Brick4: glt2.network.net:/home/sdb1
> Brick5: glt3.network.net:/home/sda3
> Brick6: gltclient.network.net:/home/sda3
> Brick7: glt3.network.net:/home/sdb1
> Brick8: gltclient.network.net:/home/sdb1
> Options Reconfigured:
> performance.io-thread-count: 32
> performance.cache-size: 256MB
> cluster.self-heal-daemon: on
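Not a recommendation so much as a sketch of what is usually tried for
small-file workloads: the write-behind and quick-read translators, on
top of the cache size already set here (option names as in 3.3, values
only illustrative):

   gluster volume set gltvolume performance.write-behind-window-size 4MB
   gluster volume set gltvolume performance.quick-read on

I would change one option at a time and re-run the rsync test to see
whether it actually helps.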
>
>
> [root@glt1]:~# gluster volume status all detail
> Status of volume: gltvolume
>
> ------------------------------------------------------------------------------
>
> Brick : Brick 1.1.74.246:/home/sda3
> Port : 24009
> Online : Y
> Pid : 1479
> File System : ext4
> Device : /dev/sda3
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 179.3GB
> Total Disk Space : 179.7GB
> Inode Count : 11968512
> Free Inodes : 11901550
>
> ------------------------------------------------------------------------------
>
> Brick : Brick glt2.network.net:/home/sda3
> Port : 24009
> Online : Y
> Pid : 1589
> File System : ext4
> Device : /dev/sda3
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 179.3GB
> Total Disk Space : 179.7GB
> Inode Count : 11968512
> Free Inodes : 11901550
>
> ------------------------------------------------------------------------------
>
> Brick : Brick 1.1.74.246:/home/sdb1
> Port : 24010
> Online : Y
> Pid : 1485
> File System : ext4
> Device : /dev/sdb1
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 228.8GB
> Total Disk Space : 229.2GB
> Inode Count : 15269888
> Free Inodes : 15202933
>
> ------------------------------------------------------------------------------
>
> Brick : Brick glt2.network.net:/home/sdb1
> Port : 24010
> Online : Y
> Pid : 1595
> File System : ext4
> Device : /dev/sdb1
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 228.8GB
> Total Disk Space : 229.2GB
> Inode Count : 15269888
> Free Inodes : 15202933
>
> ------------------------------------------------------------------------------
>
> Brick : Brick glt3.network.net:/home/sda3
> Port : 24009
> Online : Y
> Pid : 28963
> File System : ext4
> Device : /dev/sda3
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 179.3GB
> Total Disk Space : 179.7GB
> Inode Count : 11968512
> Free Inodes : 11906058
>
> ------------------------------------------------------------------------------
>
> Brick : Brick gltclient.network.net:/home/sda3
> Port : 24009
> Online : Y
> Pid : 3145
> File System : ext4
> Device : /dev/sda3
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 179.3GB
> Total Disk Space : 179.7GB
> Inode Count : 11968512
> Free Inodes : 11906058
>
> ------------------------------------------------------------------------------
>
> Brick : Brick glt3.network.net:/home/sdb1
> Port : 24010
> Online : Y
> Pid : 28969
> File System : ext4
> Device : /dev/sdb1
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 228.8GB
> Total Disk Space : 229.2GB
> Inode Count : 15269888
> Free Inodes : 15207375
>
> ------------------------------------------------------------------------------
>
> Brick : Brick gltclient.network.net:/home/sdb1
> Port : 24010
> Online : Y
> Pid : 3151
> File System : ext4
> Device : /dev/sdb1
> Mount Options : rw,noatime
> Inode Size : 256
> Disk Space Free : 228.8GB
> Total Disk Space : 229.2GB
> Inode Count : 15269888
> Free Inodes : 15207375