I have set up a ZFS raidz with 4 Samsung 500GB hard drives. It is extremely slow when I mount an NTFS partition and copy everything to ZFS: something like 100kB/sec or less. Why is that?

When I copy from the ZFS pool to UFS, I get about 40MB/sec. Isn't that very low, considering I have 4 new 500GB disks in raid? And when I copy from UFS to the zpool I get about 20MB/sec. Strange, or normal results? Should I expect better performance? As of now, I am disappointed in ZFS.

I used this card:
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm

Solaris Express Community build 67 detected it automatically and everything worked. I inserted the card into a PCI slot, and it worked too.

This message posted from opensolaris.org
On 7/6/07, Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> wrote:
> have set up a ZFS raidz with 4 samsung 500GB hard drives.
>
> It is extremely slow when I mount a ntfs partition and copy everything
> to zfs. Its like 100kb/sec or less. Why is that?

How are you mounting said NTFS partition?

> When I copy from ZFSpool to UFS, I get like 40MB/sec - isnt it very low
> considering I have 4 new 500GB discs in raid? And when I copy from UFS
> to ZPool I get like 20MB/sec. Strange? Or normal results? Should I
> expect better performance? As of now, I am disappointed of ZFS.

How fast is copying a file from ZFS to /dev/null? That would eliminate the UFS disk from the mix.

Will
A couple of questions for you:

(1) What OS are you running (Solaris, BSD, MacOS X, etc)?
(2) What's your config? In particular, are any of the partitions on the same disk?
(3) Are you copying a few big files or lots of small ones?
(4) Have you measured UFS-to-UFS and ZFS-to-ZFS performance on the same platform?

That'd be useful data...

Jeff

On Fri, Jul 06, 2007 at 03:49:43PM -0400, Will Murnane wrote:
> How are you mounting said NTFS partition?
> [...]
> How fast is copying a file from ZFS to /dev/null? That would
> eliminate the UFS disk from the mix.
>
> Will
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I am using Solaris Express Community build 67 installed on a 40GB hard drive (UFS filesystem on Solaris), dual-booting with Windows XP. I have a ZFS raid with 4 Samsung drives. It is a P4 at 2.4GHz with 1GB RAM.

When I copy a 1.3GB file from the ZFS pool to the ZFS pool, "time cp file file2" gives this output:

bash-3.00# time cp PAGEFILE.SYS pagefil3
real    0m49.719s
user    0m0.004s
sys     0m10.160s

which works out to about 26MB/sec.

When I copy that file from ZFS to UFS I get:

real    0m35.091s
user    0m0.004s
sys     0m15.337s

which works out to 37MB/sec.

However, in each of the above scenarios, the system monitor shows that all RAM is used up and the machine begins to swap (about 40MB of swap). My system has never swapped before (Windows swaps immediately upon startup, ha!). The CPU utilization is around 50%.

When I copy that file from UFS to UFS I get:

real    1m36.315s
user    0m0.003s
sys     0m11.327s

Here the CPU utilization is around 20% and RAM usage never exceeds 600MB; it doesn't touch swap.

When I copy that file from ZFS to /dev/null I get this output:

real    0m0.025s
user    0m0.002s
sys     0m0.007s

which can't be correct. Is it wrong of me to use "time cp fil fil2" when measuring disk performance?

I mount NTFS with the packages FSWfsmisc and FSWfspart, by Moinak Ghosh (based on Martin Rosenau's work and part of Moinak's BeleniX work).
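The MB/sec figures quoted above can be reproduced from the `time` output with a quick calculation (a sketch; the ~1.3GB file size is taken from the message, and the numbers are rounded):

```shell
# Throughput = bytes copied / elapsed (real) wall-clock time.
# Roughly 1300 MB in 49.719 seconds:
awk 'BEGIN { printf "%.1f MB/sec\n", 1300 / 49.719 }'   # -> 26.1 MB/sec
```

The same arithmetic on the ZFS-to-UFS run (1300 / 35.091) gives the 37MB/sec figure.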
ZFS is a 128-bit file system. Performance on your 32-bit CPU will not be that good; ZFS was designed for a 64-bit CPU. Another GB of RAM might help. There are a bunch of posts in the archive about 32-bit CPUs and performance.

-Sean

Orvar Korvar wrote:
> I am using Solaris Express Community build 67 installed on a 40GB
> harddrive (UFS filesystem on Solaris), dual boot with Windows XP. I have
> a zfsraid with 4 samsung drives. It is a P4 at 2.4GHz and 1GB RAM.
> [...]
On Jul 7, 2007, at 06:14, Orvar Korvar wrote:
> When I copy that file from ZFS to /dev/null I get this output:
> real 0m0.025s
> user 0m0.002s
> sys 0m0.007s
> which can't be correct. Is it wrong of me to use "time cp fil fil2"
> when measuring disk performance?

Well, you're reading and writing to the same disk, so that's going to affect performance, particularly as you're seeking to different areas of the disk both for the files and for the uberblock updates. In the above case it looks like the file is already cached (the buffer cache being what is probably consuming most of your memory here), so you're just looking at a memory-to-memory transfer.

If you want a simple write performance test, many people use dd like so:

# timex dd if=/dev/zero of=file bs=128k count=8192

which will give you a measure of an efficient 1GB file write of zeros. Or use a better open-source tool like iozone to get a fix on single-thread vs. multi-thread, read/write mix, and block size differences for your given filesystem and storage layout.

jonathan
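A matching read test can be run the same way (a sketch; the paths and sizes are illustrative, not from the thread). Note the caching caveat: a file you just wrote is usually still in the ARC, so a re-read measures memory speed, not the disks.

```shell
# Write a small sample file, then read it back to /dev/null.
# A just-written file is typically still cached in RAM (the ARC),
# so for a cold-cache read number, use a file larger than RAM
# or export and re-import the pool before reading.
dd if=/dev/zero of=/tmp/ddtest bs=128k count=64    # 8MB sample file
dd if=/tmp/ddtest of=/dev/null bs=128k             # read it back
rm /tmp/ddtest
```

For a real benchmark you would point `of=`/`if=` at a file on the pool itself rather than /tmp, and use a count large enough to exceed RAM.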
Orvar Korvar wrote:
> When I copy that file from ZFS to /dev/null I get this output:
> real 0m0.025s
> user 0m0.002s
> sys 0m0.007s
> which can't be correct. Is it wrong of me to use "time cp fil fil2"
> when measuring disk performance?

<replying to just this part of your message for now>

cp opens the source file, mmaps it, opens the target file, and does a single write of the entire file contents. /dev/null's write routine doesn't actually do a copy into the kernel; it just returns success. As a result, the source file is never paged into memory.

-- 
Bart Smaalders              Solaris Kernel Performance
barts at cyber.eng.sun.com   http://blogs.sun.com/barts
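Bart's explanation is easy to check (a sketch; the file path and size are illustrative, and the mmap-plus-single-write behavior he describes is Solaris cp's, so other cp implementations may behave differently):

```shell
# Create a sample file to "benchmark".
dd if=/dev/zero of=/tmp/big bs=1M count=100 2>/dev/null

# On Solaris, cp mmaps the source and issues one write(); /dev/null's
# write routine returns success without touching the mapped pages, so
# no data is read and the timing is meaningless as a read benchmark:
time cp /tmp/big /dev/null

# To force an actual read of the bytes, consume them instead:
time cat /tmp/big > /dev/null

rm /tmp/big
```

Even the `cat` variant only measures cache speed if the file was recently touched; see the dd discussion elsewhere in the thread for cold-cache testing.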
I did that, and here are the results from the ZFS jury:

bash-3.00$ timex dd if=/dev/zero of=file bs=128k count=8192
8192+0 records in
8192+0 records out

real       19.40
user        0.01
sys         1.54

That is, 1GB created in 20 sec = 50MB/sec. That is better, but still not good, since each of the four drives is capable of 50MB/sec on its own. However, I cannot achieve 50MB/sec in normal use. Strange.

I presume the numbers will get better when I upgrade to 64-bit.
How are the drives connected, USB or SATA? Also, is this hardware RAID, or are you using raidz? If SATA, what controller is being used?
Orvar,

I've seen around 50 to 60 MB/sec when two disks are writing, and around 100MB/s when reading round-robin. The limiting factor was the old PCI bus (*not* the 32-bit slot length), and in another test the 1-lane PCI-X bus (Sil680/SIL3124-2 and SIL3132 chips).

So if you see a factor-of-2 difference between reading and writing with a 1:1 mirror setup, I would say you have hit the bottleneck of your PCI bus.

Thomas

On Sun, Jul 15, 2007 at 03:37:06AM -0700, Orvar Korvar wrote:
> I did that, and here are the results from the ZFS jury:
>
> bash-3.00$ timex dd if=/dev/zero of=file bs=128k count=8192
> [...]
> That is, 1GB created on 20sec = 50MB/sec. That is better, but still not
> good, as each drive of the four drives are capable of 50MB/sec. However,
> I can not achieve 50MB/sec in normal use. Strange.

-- 
Best regards,
Thomas Wagner

Thomas Wagner                         Tel: +49-(0)-711-720 98-131
Strategic Support Engineer            Fax: +49-(0)-711-720 98-443
Global Customer Services              Cell: +49-(0)-175-292 60 64
Sun Microsystems GmbH                 E-Mail: Thomas.Wagner at Sun.com
Zettachring 10A, D-70567 Stuttgart    http://www.sun.de
I am using 4 SATA II drives with this card (see the comments), which was detected by Solaris automatically:
http://napobo3.blogspot.com/2006/04/sata2-under-b36.html

I understand that my 32-bit CPU is the limiting factor? That seems a bit strange to me. A P4 at 2.4GHz should be able to squeeze out ~5 GFLOPS, and one would think that is plenty for a file system like ZFS. Well, I hope to upgrade to Penryn later this year, and then I really hope it will be faster than a 15MB/sec transfer rate.

(I have an ASRock P4V88 motherboard, into which I have inserted the PCI-X SATA card. The mobo doesn't have PCI-X, but the card reverts to ordinary PCI when inserted, and it works right out of the box.)
Orvar Korvar wrote:
> I am using 4 SATA II drives, with this card (see the comments) which
> got detected by Solaris automatically:
> http://napobo3.blogspot.com/2006/04/sata2-under-b36.html

On there you posted:
> However, I get very slow read/write perfomance. I have 4 samsung
> 500GB each should reach 60mb/sec. Now, in total I have like
> 20-40MB/sec for the whole zpool. And when I read from ntfs I get like
> 100kb/sec. Why is that?

So, ZFS performance is 200-400 times the performance of NTFS. Nice.

ZFS gets its throughput by doing everything it can to run at cpu/bus/hba/spindle speeds. Increase the slowest of the four and things normally speed up. The CPU you have is plenty fast. The drives you have are plenty fast. Now evaluate the bus/hba speeds...

> I understand that my 32bit CPU is the limiting factor? But that seems
> a bit strange I think. A P4 at 2.4GHz should be able to squeese out ~5
> GFlops, and one thinks that should be plenty for a file system as ZFS?

When looking at bottlenecks, you must look at the full data path. Between the data path components (CPU, bus, HBA, disk), the CPU does not appear to be the weak link; if your CPU utilization (see prstat) is not above, say, 50%, then look elsewhere. Where I would look for the bottleneck is the HBA.

> (I have a ASROCK P4V88 motherboard, which I have inserted the PCI-X
> SATA card into. The mobo doesnt have PCI-X, but the card reverts to
> ordinary PCI when inserted and it works right out of the box).

Going from PCI-X speeds to 32-bit 33MHz PCI speeds is a drastic bottleneck; see "http://www.pcisig.com/specifications/pcix_20/". Replacing the motherboard with one that can pass data (CPU <-> bus <-> hba <-> disk) at a higher ***minimum*** speed throughout would help. Old lesson from Seymour Cray; see "http://en.wikipedia.org/wiki/Seymour_Cray".
Shameless plug: Sun designs all its systems this way, see "http://www.sun.com/servers/entry/x2100/specs.xml#anchor1".

Cheers,
Joel.
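The bus arithmetic behind the PCI-bottleneck conclusion can be sketched as follows (theoretical peaks; real sustained PCI throughput is noticeably lower, and the true clock is 33.33 MHz):

```shell
# Conventional PCI is 32 bits (4 bytes) wide at 33 MHz:
echo $(( 4 * 33 ))   # 132 MB/sec peak (~133 with the 33.33 MHz clock),
                     # shared by every device on the bus
# Four drives at ~50 MB/sec each would want:
echo $(( 4 * 50 ))   # 200 MB/sec -- more than the bus can deliver
```

So even ignoring protocol overhead, the four-drive raidz is asking the shared PCI bus for more bandwidth than it can physically carry, which matches the observed 20-50MB/sec ceiling.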