Nathan Kroenert
2008-Jun-15 04:30 UTC
[zfs-discuss] ZFS write / read speed and traps for beginners
Hey all -

Just spent quite some time trying to work out why my 2-disk mirrored ZFS pool was running so slow, and found an interesting answer...

System: new Gigabyte M750sli-DS4, AMD 9550, 4GB memory and 2 x Seagate 500GB SATA-II 32MB-cache disks.

The SATA ports on the nforce 750a SLI chipset don't yet seem to be supported by the nv_sata driver (I'm only running nv_89 at the moment, though I'm not aware of new support going in just yet). I *can* get the driver to attach, but not to see any disks. Interesting, but I digress...

Anyhoo - I'm stuck in IDE compatibility mode for the moment.

So - using plain dd to the ZFS filesystem on said disk:

dd if=/dev/zero of=delete.me bs=65536

I could achieve only about 35-40MB/s write speed, whereas if I dd to the slice directly, I can get around 90-95MB/s.

I tried using whole disks versus a slice and it made no appreciable difference.

It turns out that when you are in IDE compatibility mode, having two disks on the same 'controller' (c# in Solaris) behaves just like real IDE... Crap!

Moving the second disk from c1 to c2 got me back to at least 50MB/s, with higher peaks up to 60-70MB/s.

Also of note, on the Gigabyte board (and I guess other nforce 750a SLI based chipsets) only 4 of the 6 SATA ports work when in IDE mode.

Other thoughts on the nforce 750a:
- nge plumbs up OK and can send and 'see' packets, but does not seem to know itself... In promiscuous mode, you can see returning ICMP echo requests, but they don't make it to the top of the stack. I had to use an e1000g in a PCI slot to get my networking working properly...
- Onboard video works, including compiz, but you need to create an xorg.conf and update the nvidia driver with the latest from the nvidia website.

Seems snappy enough. With 4 cores @ 2.2GHz (Phenom 9550) it's looking like it'll do what I wanted quite nicely.

Later...

Nathan.
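[For reference, the throughput figures above can be reproduced with a timed dd run along these lines — a minimal sketch; the block size matches the post, but the count and the /tmp target are illustrative stand-ins for the pool and slice paths:]

```shell
# Sketch of the measurement: write a fixed amount of data and divide by
# elapsed seconds. Writes 256 MiB to /tmp; point of= at the filesystem or
# raw slice under test instead.
bs=65536
count=4096                          # 65536 * 4096 = 256 MiB
start=$(date +%s)
dd if=/dev/zero of=/tmp/delete.me bs=$bs count=$count 2>/dev/null
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # guard against sub-second runs
echo "$((bs * count / elapsed / 1048576)) MB/s"
rm -f /tmp/delete.me
```

[Many dd implementations print a transfer rate themselves on completion; the arithmetic is spelled out only to show where the MB/s numbers come from.]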
--
//////////////////////////////////////////////////////////////////
// Nathan Kroenert              nathan.kroenert at sun.com       //
// Systems Engineer             Phone:  +61 3 9869-6255         //
// Sun Microsystems             Fax:    +61 3 9869-6288         //
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456            //
// Melbourne 3004 Victoria      Australia                       //
//////////////////////////////////////////////////////////////////
Will Murnane
2008-Jun-15 04:43 UTC
[zfs-discuss] ZFS write / read speed and traps for beginners
On Sun, Jun 15, 2008 at 04:30, Nathan Kroenert <Nathan.Kroenert at sun.com> wrote:
> So - using plain dd to the zfs filesystem on said disk
>
> dd if=/dev/zero of=delete.me bs=65536

NB: dd from /dev/zero is a poor benchmark. Since the writes to disk are all zero, you may create a sparse file, or ZFS itself may omit the writes corresponding to the blocks that make up that file when it realizes they're entirely composed of zeroes. I would suggest bonnie++ as a more suitable benchmark; it compiles easily on Solaris 10, or I can post a package if you'd like.

Will
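[Short of a full bonnie++ run, one way around the all-zeroes problem is to feed dd pre-generated random data, so that neither sparse-file creation nor zero-elision can flatter the result. A sketch — paths and sizes are illustrative:]

```shell
# Pre-generate incompressible data, then write *that* to the pool.
# 16 MiB shown for brevity; a real test file should be much larger than
# RAM so the cache can't absorb the whole write.
dd if=/dev/urandom of=/tmp/random.dat bs=65536 count=256 2>/dev/null
dd if=/tmp/random.dat of=delete.me bs=65536
```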
> It turns out that when you are in IDE compatibility mode, having two
> disks on the same 'controller' (c# in Solaris) behaves just like real
> IDE... Crap!

That is the second time I've seen Solaris guess wrong and force what it thinks is right. Solaris will also limit the size of an ATA drive if the drive reports a certain ATA level, even if the drive size is not so limited.

This message posted from opensolaris.org
Nathan Kroenert
2008-Jun-15 08:18 UTC
[zfs-discuss] ZFS write / read speed and traps for beginners
Further followup to this thread...

After being beaten sufficiently with a clue-bat, it was determined that the nforce 750a could do AHCI mode for its SATA stuff. I set it to AHCI, redid the devlinks etc. and cranked it up as AHCI. I'm now regularly peaking at 100MB/s, though spending most of the time around 70MB/s. *much better*

The lesson here is: when in AHCI mode in the BIOS, *don't* match that PCI ID with the nv_sata driver. It's not what you want. heh. *blush*. Once I removed the extra nv_sata entries I had added to the driver_aliases in my miniroot, all was good.

On the NGE front, it turns out that Solaris does not seem to like the ethernet address of the card. Trying to set its OWN ethernet address using ifconfig yielded this:

# ifconfig nge0 ether 63:d0:b:7d:1d:0
ifconfig: dlpi_set_physaddr failed "nge0": DLSAP address in improper format or invalid
ifconfig: failed setting mac address on nge0

Using

ifconfig nge0 ether 0:e:c:5b:54:45

worked just fine, and the interface now passes traffic and sees responses just fine. So, the workaround here is adding

ether <a working ether address>

in the hostname.nge0. I guess I'll log a bug on that on Monday...

Awesome. Now to work on audio... heh.

Nathan.

Nathan Kroenert wrote:
> Hey all -
>
> Just spent quite some time trying to work out why my 2 disk mirrored ZFS
> pool was running so slow, and found an interesting answer...
> [...]
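[Nathan's persistent workaround would look something like this — a sketch only: the ether address is the one that worked above, and "myhost" is a placeholder for whatever name or address line is already in the file:]

```shell
# One-off fix (from the post):
ifconfig nge0 ether 0:e:c:5b:54:45

# To survive a reboot, the same option goes on the interface's config line,
# since /etc/hostname.nge0 contents are handed to ifconfig at boot:
echo 'myhost ether 0:e:c:5b:54:45' > /etc/hostname.nge0
```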
Menno Lageman
2008-Jun-15 08:41 UTC
[zfs-discuss] ZFS write / read speed and traps for beginners
Nathan Kroenert wrote:
> On the NGE front, it turns out that solaris does not seem to like the
> ethernet address of the card. Trying to set its OWN ethernet address
> using ifconfig yielded this:
>
> # ifconfig nge0 ether 63:d0:b:7d:1d:0
> ifconfig: dlpi_set_physaddr failed "nge0": DLSAP address in improper
> format or invalid
> ifconfig: failed setting mac address on nge0
>
> using
>
> ifconfig nge0 ether 0:e:c:5b:54:45
>
> worked just fine, and the interface now passes traffic and sees
> responses just fine. So, the workaround here is adding
> ether <a working ether address>
> in the hostname.nge0
>
> I guess I'll log a bug on that on Monday...

Nathan, I'd bet you're being bitten by:

6658667 nge - ethernet address reversed on nForce 430 chipset on ASUS M2N motherboard

Menno
--
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno
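[If byte reversal is indeed the culprit, it would also explain the ifconfig error: the reported address starts with 0x63, and an odd first octet means the multicast bit is set, which is invalid for a station address. A quick sanity check — the awk one-liner is illustrative, and whether the reversed address matches the NIC's real one is a guess:]

```shell
# Reverse the octets of the MAC the driver reported.
# 63:d0:b:7d:1d:0 is the rejected address from Nathan's post.
bad="63:d0:b:7d:1d:0"
echo "$bad" | awk -F: '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? ":" : "\n") }'
# -> 0:1d:7d:b:d0:63  (even first octet: a valid unicast address)
```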
There is another benchmark tool named "iozone" (http://www.iozone.org/).

Hope this helps.

Cesare

On Sun, Jun 15, 2008 at 6:43 AM, Will Murnane <will.murnane at gmail.com> wrote:
> On Sun, Jun 15, 2008 at 04:30, Nathan Kroenert <Nathan.Kroenert at sun.com> wrote:
>> So - using plain dd to the zfs filesystem on said disk
>>
>> dd if=/dev/zero of=delete.me bs=65536
>
> NB: dd from /dev/zero is a poor benchmark. Since the writes to disk
> are all zero, you may create a sparse file, or ZFS itself may omit the
> writes corresponding to the blocks that make up that file when it
> realizes they're entirely composed of zeroes. I would suggest
> bonnie++ as a more suitable benchmark; it compiles easily on Solaris
> 10, or I can post a package if you'd like.
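[For the record, a typical sequential-throughput iozone invocation looks something like this — a sketch from the tool's standard flags; the target path is illustrative:]

```shell
# -i 0 / -i 1 : sequential write/rewrite and read/reread tests only
# -r 64k      : 64 KB record size (matches the dd block size used earlier)
# -s 1g       : 1 GB test file -- size this well above RAM to defeat caching
# -f ...      : put the test file on the pool being measured
iozone -i 0 -i 1 -r 64k -s 1g -f /tank/iozone.tmp
```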