sb_mailings at mac.com
2007-Feb-13 20:27 UTC
[zfs-discuss] poor zfs performance on my home server
Hello,

I switched my home server from Debian to Solaris. The main reason for
this step was stability and ZFS. But now, after the migration (why isn't
it possible to mount a Linux fs on Solaris???), I ran a few benchmarks
and am thinking about switching back to Debian. First of all, the
hardware layout of my home server:

Mainboard: Asus A7V8X-X
CPU: AthlonXP 2400+
Memory: 1.5GB
Harddisks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1),
           4x250GB (SATA-1, c2d0, c2d1, c3d0, c3d1)
SATA Controller: SIL3114 (downgraded to the IDE firmware)
Solaris nv_54

First I tested the raw read performance of each disk with dd:
"dd bs=1M count=50 if=/dev/dsk/cxdxs1 of=/dev/null"

c0d1 => 67.8 MB/s
c1d0 => 63.0 MB/s
c1d1 => 47.4 MB/s
c2d0 => 54.5 MB/s
c2d1 => 57.5 MB/s
c3d0 => 54.5 MB/s
c3d1 => 56.5 MB/s

Everything looks OK. c1d1 seems about 10 MB/s slower than the rest, but
that should be fine because it's an older disk.

Then I compiled the newest version of bonnie++ and ran some benchmarks,
first on the ZFS mirror (/data/) created with the two 250GB IDE disks:

$ ./bonnie++ -d /data/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2

Now on the ZFS raidz (/srv), single parity, with the four SATA disks:

$ ./bonnie++ -d /srv/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 21292  32 24171  53  9672  25 32420  55 53268  29  87.7   3

As a reference, a bonnie++ benchmark on the single 160GB IDE disk:

$ ./bonnie++ -d /export/home/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 34727  58 34574  25 12193  12 37826  59 41262  17 161.3   1

The last test I did was an rsync of a 290MB file between the different
ZFS pools and filesystems:

ZFS mirror  => single disk   7,074,316.87 bytes/sec
ZFS mirror  => ZFS raidz     5,953,633.01 bytes/sec
ZFS mirror  => ZFS mirror    3,982,231.35 bytes/sec
ZFS raidz   => ZFS mirror   10,549,419.89 bytes/sec
ZFS raidz   => single disk  16,251,809.03 bytes/sec
ZFS raidz   => ZFS raidz     8,714,738.17 bytes/sec
single disk => ZFS raidz    18,221,725.27 bytes/sec
single disk => ZFS mirror   24,052,677.36 bytes/sec
single disk => single disk  31,648,259.68 bytes/sec

Conclusion: in my opinion it's really bad performance, except for the
single disk ;-) Is there a switch in ZFS where I can flip between lousy
performance and really fast? zil_disable doesn't look like an option.
What's the best way to monitor the CPU load during the benchmarks? I
don't believe the problem has anything to do with CPU power, but it's
one thing to check.

regards,
Sascha
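The monitoring question at the end can be covered with the stock Solaris
tools; a minimal sketch of what could run in a second terminal while
bonnie++ is going (the 5-second interval is arbitrary, nothing here is
ZFS-specific):

# overall CPU usage -- watch the us/sy/id columns
vmstat 5

# per-disk throughput, service times and %busy
iostat -xn 5

# per-thread CPU accounting, to see whether bonnie++ itself or kernel
# threads are eating the CPU
prstat -mL 5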
sb_mailings at mac.com wrote:

> Hello,
>
> I switched my home server from Debian to Solaris. The main reason for
> this step was stability and ZFS. But now, after the migration (why isn't
> it possible to mount a Linux fs on Solaris???), I ran a few benchmarks
> and am thinking about switching back to Debian. First of all, the
> hardware layout of my home server:
>
> Mainboard: Asus A7V8X-X
> CPU: AthlonXP 2400+
> Memory: 1.5GB
> Harddisks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1),
>            4x250GB (SATA-1, c2d0, c2d1, c3d0, c3d1)
> SATA Controller: SIL3114 (downgraded to the IDE firmware)
> Solaris nv_54
>
> Then I compiled the newest version of bonnie++ and ran some benchmarks,
> first on the ZFS mirror (/data/) created with the two 250GB IDE disks:
>
> $ ./bonnie++ -d /data/ -s 4G -u root
> Using uid:0, gid:0.
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>                  4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2

Looks like poor hardware. How was the pool built? Did you give ZFS the
entire drive?

On my nForce4 Athlon64 box with two 250G SATA drives:

zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0

dd from the mirror gives about 77MB/s.

Ian.
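For reference, the whole-disk versus slice question Ian raises comes down
to how the pool was created; a sketch with a hypothetical pool name (when
ZFS is given whole disks it labels them itself and can safely enable the
drives' write caches, which usually helps throughput):

# whole disks -- no slice suffix
zpool create tank mirror c3d0 c4d0

# slices only -- ZFS cannot assume it owns the disk, so the write cache
# is left alone
zpool create tank mirror c3d0s0 c4d0s0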
I'm also putting together a server on Solaris 10. My hardware so far:

Mainboard: Tyan Tiger 230 S2507
Processors: 2 x Pentium III
RAM: 512 MB PC133 ECC
Hard drives:
  c0d0: ST380021A (80gb PATA)
  c0d1: ST325062 (250gb PATA)
  c1d1: ST325062 (250gb PATA)

Not the fastest processor-wise... I have the two 250gb drives in a zpool
mirror, and here are my bonnie++ results:

# bonnie++ -s 1024 -u root
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
stitch           1G    38  98 20397  23 16270  26    95  96 73287  52 447.2  42

During the build I did some testing and noticed that if I have the two
250gb drives on the same channel (i.e. on the same cable), performance
was effectively HALVED. Switching them to separate channels brings me
much closer to "raw" read performance.

I also have a SII3114-based card (SIIG SC-SA4R12-S2) that I'm about to
make a 400GB mirror on. I'll post results when it is done.

PS: I believe you can mount ext2fs filesystems on Solaris:
http://www.genunix.org/distributions/belenix_site/binfiles/README.FSWfsmisc.txt

This message posted from opensolaris.org
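The shared-channel effect described above is easy to check with the same
dd test used earlier in the thread: read each disk alone, then read two
disks that sit on one cable at the same time (the device names below are
placeholders and need adjusting to the actual layout):

# each disk alone, for a baseline
dd bs=1M count=500 if=/dev/dsk/c0d0s1 of=/dev/null
dd bs=1M count=500 if=/dev/dsk/c0d1s1 of=/dev/null

# both at once -- on a shared PATA cable each stream typically drops to
# roughly half
dd bs=1M count=500 if=/dev/dsk/c0d0s1 of=/dev/null &
dd bs=1M count=500 if=/dev/dsk/c0d1s1 of=/dev/null &
wait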
Sascha Brechenmacher
2007-Feb-14 05:45 UTC
[zfs-discuss] poor zfs performance on my home server
On 13.02.2007, at 22:46, Ian Collins wrote:

> Looks like poor hardware. How was the pool built? Did you give ZFS the
> entire drive?
>
> On my nForce4 Athlon64 box with two 250G SATA drives:
>
> zpool status tank
>   pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             c3d0    ONLINE       0     0     0
>             c4d0    ONLINE       0     0     0
>
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0
>
> dd from the mirror gives about 77MB/s.
>
> Ian.

I use the entire drive for the zpools:

  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors

  pool: srv
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        srv         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

How can I dd from the zpools, and where is the block device?

sascha
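Since both pools are already imported, their per-disk activity can also
be watched directly while a benchmark runs; a minimal sketch using the
pool names shown above (the 5-second interval is arbitrary):

# one line per vdev/disk, refreshed every 5 seconds
zpool iostat -v data 5
zpool iostat -v srv 5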
Robert Milkowski
2007-Feb-14 08:35 UTC
[zfs-discuss] poor zfs performance on my home server
Hello Sascha,

Wednesday, February 14, 2007, 6:45:30 AM, you wrote:

SB> I use the entire drive for the zpools:

SB>   pool: data
SB>  state: ONLINE
SB>  scrub: none requested
SB> config:

SB>         NAME        STATE     READ WRITE CKSUM
SB>         data        ONLINE       0     0     0
SB>           mirror    ONLINE       0     0     0
SB>             c1d0    ONLINE       0     0     0
SB>             c1d1    ONLINE       0     0     0

SB> errors: No known data errors

SB>   pool: srv
SB>  state: ONLINE
SB>  scrub: none requested
SB> config:

SB>         NAME        STATE     READ WRITE CKSUM
SB>         srv         ONLINE       0     0     0
SB>           raidz1    ONLINE       0     0     0
SB>             c2d0    ONLINE       0     0     0
SB>             c2d1    ONLINE       0     0     0
SB>             c3d0    ONLINE       0     0     0
SB>             c3d1    ONLINE       0     0     0

SB> How can I dd from the zpools, and where is the block device?

There's no block device associated with a pool. However, you can create
a zvol (man zfs) or just create one large file.

-- 
Best regards,
 Robert                       mailto:rmilkowski at task.gda.pl
                              http://milek.blogspot.com
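A minimal sketch of both options Robert mentions, with the volume name,
file name and sizes made up for the example (with only 1.5GB of RAM a
4GB read-back should mostly bypass the ARC):

# option 1: a zvol inside the existing pool; fill it first so the
# read-back actually hits the disks
zfs create -V 10g data/ddtest
dd bs=1M count=4096 if=/dev/zero of=/dev/zvol/rdsk/data/ddtest
dd bs=1M count=4096 if=/dev/zvol/rdsk/data/ddtest of=/dev/null

# option 2: a large file in the filesystem, read back with dd
mkfile 4g /data/bigfile
dd bs=1M if=/data/bigfile of=/dev/null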
Sascha Brechenmacher wrote:

> On 13.02.2007, at 22:46, Ian Collins wrote:
>
>> Looks like poor hardware. How was the pool built? Did you give ZFS the
>> entire drive?
>
> I use the entire drive for the zpools:
>
>   pool: data
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         data        ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             c1d0    ONLINE       0     0     0
>             c1d1    ONLINE       0     0     0
>
> errors: No known data errors

So it really looks like your hardware isn't up to the job.

> How can I dd from the zpools, and where is the block device?

I just used a DVD ISO file.

Ian
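That approach is just a sequential read of any existing large file
through the filesystem; a one-line sketch with a made-up path:

dd bs=1M if=/tank/images/dvd.iso of=/dev/null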