Bill James
2016-Feb-11 18:27 UTC
[Gluster-users] [ovirt-users] ovirt glusterfs performance
thank you for the reply.

We setup gluster using the names associated with NIC 2's IP:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

Using 'iftop -i eno2 -L 5 -t':

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Peak rate (sent/received/total):        281Mb   5.36Mb   282Mb
Cumulative (sent/received/total):       1.96GB  14.6MB   1.97GB

gluster volume info gv1:
Options Reconfigured:
performance.write-behind-window-size: 4MB
performance.readdir-ahead: on
performance.cache-size: 1GB
performance.write-behind: off

Setting performance.write-behind to off didn't help, and neither did any
other change I've tried. There is no VM traffic right now except my test.

On 02/10/2016 11:55 PM, Nir Soffer wrote:
> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>> +gluster-users
>>
>> Does disabling 'performance.write-behind' give a better throughput?
>>
>> On 02/10/2016 11:06 PM, Bill James wrote:
>>> I'm setting up an ovirt cluster using glusterfs and noticing
>>> less-than-stellar performance.
>>> Maybe my setup could use some adjustments?
>>>
>>> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
>>> Each node has 8 spindles configured in one array, which is split using
>>> LVM with one logical volume for the system and one for gluster.
>>> They each have 4 NICs:
>>> NIC1 = ovirtmgmt
>>> NIC2 = gluster (1GbE)
>
> How do you ensure that gluster traffic is using this nic?
>
>>> NIC3 = VM traffic
>
> How do you ensure that vm traffic is using this nic?
>
>>> I tried with default glusterfs settings
>
> And did you find any difference?
>
>>> and also with:
>>> performance.cache-size: 1GB
>>> performance.readdir-ahead: on
>>> performance.write-behind-window-size: 4MB
>>>
>>> [root at ovirt3 test scripts]# gluster volume info gv1
>>>
>>> Volume Name: gv1
>>> Type: Replicate
>>> Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Options Reconfigured:
>>> performance.cache-size: 1GB
>>> performance.readdir-ahead: on
>>> performance.write-behind-window-size: 4MB
>>>
>>> Using a simple dd test on a VM in ovirt:
>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>
> block size of 1G?!
>
> Try 1M (our default for storage operations)
>
>>> 1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
>>>
>>> Another VM, not in ovirt, using nfs:
>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>> 1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
>>>
>>> Is that expected, or is there a better way to set it up to get better
>>> performance?
>
> Adding Niels for advice.
>
>>> This email, its contents and ....
>
> Please avoid this, this is a public mailing list, everything you write
> here is public.
>
> Nir

I'll have to look into how to remove this sig for this mailing list....

Cloud Services for Business www.j2.com
j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox

This email, its contents and attachments contain information from j2
Global, Inc. and/or its affiliates which may be privileged, confidential
or otherwise protected from disclosure. The information is intended to be
for the addressee(s) only. If you are not an addressee, any disclosure,
copy, distribution, or use of the contents of this message is prohibited.
If you have received this email in error please notify the sender by
reply e-mail and delete the original message and any copies. (c) 2015 j2
Global, Inc. All rights reserved. eFax, eVoice, Campaigner, FuseMail,
KeepItSafe, and Onebox are registered trademarks of j2 Global, Inc. and
its affiliates.
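The dd runs quoted in this thread can be scripted for repeatable comparisons. A minimal sketch, with illustrative path and a deliberately small size (the thread uses /root/testfile and count=1000); conv=fsync is used as a fallback only because some filesystems (e.g. tmpfs) reject O_DIRECT:

```shell
#!/bin/sh
# Illustrative benchmark sketch; TESTFILE and count are placeholders,
# not the values from the thread.
TESTFILE="${TESTFILE:-/tmp/dd-bench.img}"
# Prefer oflag=direct (bypasses the page cache, as in the thread's tests);
# dd prints its throughput summary on stderr, so capture 2>&1.
SUMMARY=$(dd if=/dev/zero of="$TESTFILE" bs=1M count=16 oflag=direct 2>&1 | tail -n 1)
case "$SUMMARY" in
  *copied*) : ;;  # direct I/O worked
  *)
    # Fall back to a synced buffered write if O_DIRECT is unsupported here.
    SUMMARY=$(dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fsync 2>&1 | tail -n 1)
    ;;
esac
echo "$SUMMARY"
rm -f "$TESTFILE"
```

Running this on the gluster mount on each host, and on a brick directory directly, helps separate network cost from disk cost.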
Nir Soffer
2016-Feb-11 20:28 UTC
[Gluster-users] [ovirt-users] ovirt glusterfs performance
On Thu, Feb 11, 2016 at 8:27 PM, Bill James <bill.james at j2.com> wrote:
> thank you for the reply.
>
> We setup gluster using the names associated with NIC 2 IP.
> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>
> That's NIC 2's IP.
> Using 'iftop -i eno2 -L 5 -t' :
>
> dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
> 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log, at the
time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm at ovirt password: shibboleth)
virsh # dumpxml vm-id

> Peak rate (sent/received/total):        281Mb   5.36Mb   282Mb
> Cumulative (sent/received/total):       1.96GB  14.6MB   1.97GB
>
> gluster volume info gv1:
> Options Reconfigured:
> performance.write-behind-window-size: 4MB
> performance.readdir-ahead: on
> performance.cache-size: 1GB
> performance.write-behind: off
>
> performance.write-behind: off didn't help.
> Neither did any other changes I've tried.
>
> There is no VM traffic on this VM right now except my test.
>
> On 02/10/2016 11:55 PM, Nir Soffer wrote:
>> [...]
>
> I'll have to look into how to remove this sig for this mailing list....
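Once the domain XML Nir asks for has been saved to a file (e.g. via `virsh dumpxml <vm-id> > vm.xml`), the disk <driver> attributes that most affect throughput (cache and io modes) can be pulled out with a one-liner. A sketch under stated assumptions: the embedded fragment below is a made-up sample, not the poster's real XML, and the sed pattern assumes the driver element sits on one line, as virsh emits it:

```shell
#!/bin/sh
# VMXML is an illustrative path; normally it would come from virsh dumpxml.
VMXML="${VMXML:-/tmp/vm-sample.xml}"
cat > "$VMXML" <<'EOF'
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
    </disk>
  </devices>
</domain>
EOF
# Extract the cache and io attributes from the disk driver element.
DRIVER=$(sed -n "s/.*<driver [^>]*cache='\([^']*\)'[^>]*io='\([^']*\)'.*/cache=\1 io=\2/p" "$VMXML")
echo "$DRIVER"
rm -f "$VMXML"
```

`cache='none'` with `io='threads'` is what the attached XML in this thread shows; checking these is a quick way to rule out guest-side caching differences when comparing dd numbers.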
Bill James
2016-Feb-11 23:13 UTC
[Gluster-users] [ovirt-users] ovirt glusterfs performance
xml attached.

On 02/11/2016 12:28 PM, Nir Soffer wrote:
> On Thu, Feb 11, 2016 at 8:27 PM, Bill James <bill.james at j2.com> wrote:
>> [...]
>
> Can you share the xml of this vm? You can find it in vdsm log, at the
> time you start the vm.
>
> Or you can do (on the host):
>
> # virsh
> virsh # list
> (username: vdsm at ovirt password: shibboleth)
> virsh # dumpxml vm-id
>
>> [...]
-------------- next part --------------
<domain type='kvm' id='3'>
  <name>billjov1.test.j2noc.com</name>
  <uuid>c6aa56b4-f387-4a5b-84b6-a7db6ef89686</uuid>
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:qos/>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4294967296</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <numatune>
    <memory mode='interleave' nodeset='0-1'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7-2.1511.el7.centos.2.10</entry>
      <entry name='serial'>30343536-3138-5355-4533-323134593738</entry>
      <entry name='uuid'>c6aa56b4-f387-4a5b-84b6-a7db6ef89686</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <topology sockets='16' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0' memory='2097152' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/00000001-0001-0001-0001-00000000009d/f11b0914-f067-4ee9-85e2-c9009be9ede5/images/eb0ccbf9-1ad8-4af8-944f-bc0d06981ed0/2c20816c-2559-46ad-acc3-300638126d6e'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>eb0ccbf9-1ad8-4af8-944f-bc0d06981ed0</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:16:01:51'/>
      <source bridge='QAVlan110'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/c6aa56b4-f387-4a5b-84b6-a7db6ef89686.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/c6aa56b4-f387-4a5b-84b6-a7db6ef89686.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='0' passwdValidTo='2016-02-11T22:59:07'>
      <listen type='address' address='0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='32768' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
</domain>
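Nir's earlier question, "how do you ensure that gluster traffic is using this nic?", can be partly answered from the routing table: ask the kernel which interface it would use to reach a brick address. A sketch, assuming BRICK is replaced with the address that ovirt1-ks.test.j2noc.com resolves to (127.0.0.1 is only a placeholder so the command has something to resolve):

```shell
#!/bin/sh
# Placeholder address; substitute the IP of a brick host on the gluster NIC.
BRICK="${BRICK:-127.0.0.1}"
# "ip route get" prints the chosen egress interface after "dev";
# for gluster traffic to use NIC2, this should name that interface (e.g. eno2).
ROUTE=$(ip route get "$BRICK" 2>/dev/null | head -n 1)
echo "$ROUTE"
```

If the reported device is not the intended NIC, the brick hostnames resolve onto a subnet that routes elsewhere, and the 1GbE link (wire-speed ceiling around 117 MB/s, well above the 15 MB/s seen here) is not the path actually carrying replica traffic.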