Hello all,

I've configured 4 bricks over a GigE network; however, I'm getting very slow performance writing to my gluster share. I just set this up this week, and here's what I'm seeing:

[root at vm-container-0-0 ~]# gluster --version | head -1
glusterfs 3.2.2 built on Jul 14 2011 13:34:25

[root at vm-container-0-0 pifs]# gluster volume info

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

The 4 systems are each storage bricks and storage clients, mounting gluster like so:

[root at vm-container-0-1 ~]# df -h /pifs/
Filesystem            Size  Used Avail Use% Mounted on
glusterfs#127.0.0.1:pifs
                      1.8T  848M  1.7T   1% /pifs

iperf shows network throughput looking good:

[root at vm-container-0-0 pifs]# iperf -c vm-container-0-1
------------------------------------------------------------
Client connecting to vm-container-0-1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.19.127.254 port 53441 connected with 10.19.127.253 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Writing to the local disk is pretty fast:

[root at vm-container-0-0 pifs]# dd if=/dev/zero of=/root/dd_test.img bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.8066 seconds, 436 MB/s

However, writes to the gluster share are abysmally slow:

[root at vm-container-0-0 pifs]# dd if=/dev/zero of=/pifs/dd_test.img bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 241.866 seconds, 8.7 MB/s

Other than the fact that it's quite slow, it seems to be very stable. iozone testing shows about the same results.

Any help troubleshooting would be much appreciated. Thanks!
--joey
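One caveat worth noting about the numbers above: the 436 MB/s local figure is almost certainly page-cache-assisted, since a single 1TB SATA drive does not sustain that. A small sketch like the following (hypothetical paths, not from the thread; `conv=fdatasync` is the key addition) would give cache-honest numbers on both sides before comparing:

```shell
# Measure synced write throughput for a given path with dd.
# conv=fdatasync makes dd flush data to disk before reporting, so the
# result isn't inflated by the page cache the way a plain dd run can be.
measure_write() {
    dd if=/dev/zero of="$1" bs=1M count=20 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$1"
}

# Local disk first, then the gluster mount (only if it is actually mounted):
measure_write /tmp/dd_local_test.img
if [ -d /pifs ]; then measure_write /pifs/dd_test.img; fi
```

With the sync in place, the local number should drop toward the drive's real streaming speed, which makes the gap to the gluster number easier to interpret.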
On Wednesday 10 August 2011 02:56 AM, Joey McDonald wrote:
> Hello all,
>
> I've configured 4 bricks over a GigE network, however I'm getting very
> slow performance for writing to my gluster share.
>
> Just set this up this week, and here's what I'm seeing:

A few questions -

1. Are these baremetal systems or are they Virtual machines ?
2. What is the amount of RAM of each of these systems ?
3. How many CPUs do they have ?
4. Can you also perform the dd on /gluster as opposed to /root to check the backend performance ?
5. What is your disk backend ? Is it direct attached or is it an array ?
6. What is the backend filesystem ?
7. Can you run a simple scp of about 10M between any two of these systems and report the speed ?

Pavan

> [...]

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
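Pavan's checklist (at least questions 2, 3, 5, and 6) can be answered in one pass on each node with a sketch like this. The brick path `/gluster` is taken from the volume info in the thread; the gluster-specific lines will simply come up empty on other systems:

```shell
# Gather the system facts asked about above, on a Linux node.
echo "RAM:   $(grep MemTotal /proc/meminfo | awk '{print $2}') kB"
echo "CPUs:  $(grep -c '^processor' /proc/cpuinfo)"
# Filesystem type and device backing the brick directory, if present:
df -T /gluster 2>/dev/null | tail -n 1 || true
# Any gluster-related mounts:
mount | grep -E 'gluster|pifs' || echo "(no gluster mounts found)"
```

Running it on all four vm-container nodes and comparing the output is a quick way to spot a node that differs from the others.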
On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald <joey at scare.org> wrote:

Hi Pavan,

Thanks for your quick reply, comments inline:

>> 1. Are these baremetal systems or are they Virtual machines ?

Bare metal systems.

>> 2. What is the amount of RAM of each of these systems ?

They all have 4194304 kB of memory.

>> 3. How many CPUs do they have ?

They each have 8 procs.

>> 4. Can you also perform the dd on /gluster as opposed to /root to check
>> the backend performance ?

Sure, here is that output:

[root at vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s

>> 5. What is your disk backend ? Is it direct attached or is it an array ?

Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is /dev/sda):

[root at vm-container-0-0 ~]# hdparm -i /dev/sdb

/dev/sdb:

 Model=WDC WD1002FBYS-02A6B0, FwRev=03.00C06, SerialNo=WD-WMATV5311442
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
 BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
 ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7

 * signifies the current active mode

>> 6. What is the backend filesystem ?

ext3

>> 7. Can you run a simple scp of about 10M between any two of these systems
>> and report the speed ?

Sure, output:

[root at vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
Warning: Permanently added 'vm-container-0-0' (RSA) to the list of known hosts.
root at vm-container-0-0's password:
dd_test.img                                  100% 2000MB  39.2MB/s   00:51

--joey
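One back-of-envelope check worth doing with the numbers in this thread: the volume is Distributed-Replicate (2 x 2), so the FUSE client sends every write to two bricks, which roughly halves the usable GigE bandwidth before any other overhead. A quick sketch of that arithmetic (values taken from the iperf and dd runs above):

```shell
# Theoretical write ceiling for a replica-2 volume over GigE.
link_mbit=941       # iperf result above, in Mbit/s
replicas=2          # Distributed-Replicate 2x2: each write goes to 2 bricks
ceiling_mb=$(( link_mbit / 8 / replicas ))   # Mbit/s -> MB/s, halved
observed_mb=8       # ~8.7 MB/s from the dd run over /pifs, rounded down
echo "network ceiling: ~${ceiling_mb} MB/s, observed: ~${observed_mb} MB/s"
# Observed throughput is roughly 7x below the replica-adjusted network
# ceiling, which suggests the bottleneck is not raw link bandwidth but
# something else (per-request latency, FUSE overhead, or write-behind
# settings would be the usual suspects to investigate).
```

The scp result (39.2 MB/s) sits between the two figures, which fits this picture: the network can carry far more than 8.7 MB/s even with single-stream TCP overhead.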