Matt Cowger
2010-Mar-09 01:57 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
Hi Everyone,

It looks like I've got something weird going with zfs performance on a
ramdisk... ZFS is performing at not even a third of what UFS is doing.

Short version:

Create an 80+ GB ramdisk (ramdiskadm); the system has 96GB, so we aren't swapping.
Create a zpool on it (zpool create ram ...).
Change zfs options to turn off checksumming (don't want it or need it), atime, and compression, and set a 4K block size (this is the application's native blocksize), etc.
Run a simple iozone benchmark (seq. write, seq. read, random write, random read).

Same deal for UFS, replacing the ZFS steps with newfs and mounting the UFS filesystem forcedirectio (no point in using buffer cache memory for something that's already in memory).

Measure IOPS performance using iozone:

iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g

With the ZFS filesystem I get around:

ZFS:  (seq write) 42360   (seq read) 31010    (random read) 20953    (random write) 32525

Not SOO bad, but here's UFS:

UFS:  (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141

For all tests besides the seq write, UFS utterly destroys ZFS.

I'm curious if anyone has any clever ideas on why this huge disparity in performance exists. At the end of the day, my application will run on either filesystem; it just surprises me how much worse ZFS performs in this (admittedly edge case) scenario.

--M
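For reference, the setup described above would look roughly like the following. The ramdisk name, pool name, mount point, and sizes are illustrative rather than the exact commands from the post; check ramdiskadm(1M) and mount_ufs(1M) for the exact syntax on your release.

# Create an ~80 GB ramdisk; the device appears under /dev/ramdisk/<name>.
ramdiskadm -a ram1 80g

# ZFS case: build a pool on it and disable the features mentioned above.
zpool create ram /dev/ramdisk/ram1
zfs set checksum=off ram
zfs set compression=off ram
zfs set atime=off ram
zfs set recordsize=4k ram

# UFS case: newfs the raw device and mount the filesystem forcedirectio.
newfs /dev/rramdisk/ram1
mkdir -p /mnt/ram
mount -F ufs -o forcedirectio /dev/ramdisk/ram1 /mnt/ram

# Benchmark, as given in the post: 4 KB sequential and random read/write
# against a 5 GB file, reporting operations per second (-O).
iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g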
ольга крыжановская
2010-Mar-09 02:04 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
Does iozone use mmap() for IO?

Olga

On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger <mcowger at salesforce.com> wrote:
> It looks like I've got something weird going with zfs performance on a
> ramdisk... ZFS is performing at not even a third of what UFS is doing.
>
> Measure IOPS performance using iozone:
>
> iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
Bill Sommerfeld
2010-Mar-09 02:31 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On 03/08/10 17:57, Matt Cowger wrote:
> Change zfs options to turn off checksumming (don't want it or need it),
> atime, and compression, and set a 4K block size (this is the
> application's native blocksize), etc.

Even when you disable checksums and compression through the zfs command,
zfs will still compress and checksum metadata.

The evil tuning guide describes an unstable interface to turn off metadata
compression, but I don't see anything in there for metadata checksums.

If you have an actual need for an in-memory filesystem, will tmpfs fit
the bill?

					- Bill
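If memory serves, the unstable knob the Evil Tuning Guide describes is the zfs_mdcomp_disable tunable; treat the exact name as an assumption and check the guide for the release in use. Roughly:

* /etc/system fragment (sketch, unstable interface): disable ZFS metadata
* compression. There is no equivalent documented switch for metadata checksums.
set zfs:zfs_mdcomp_disable = 1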
Matt Cowger
2010-Mar-09 02:31 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
It can, but doesn't in the command line shown below.

M

On Mar 8, 2010, at 6:04 PM, ольга крыжановская <olga.kryzhanovska at gmail.com> wrote:
> Does iozone use mmap() for IO?
>
> Olga
>
> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger <mcowger at salesforce.com> wrote:
>> Measure IOPS performance using iozone:
>>
>> iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
Richard Elling
2010-Mar-09 02:31 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 8, 2010, at 5:57 PM, Matt Cowger wrote:
> Change zfs options to turn off checksumming (don't want it or need it),
> atime, and compression, and set a 4K block size (this is the
> application's native blocksize), etc.
>
> Same deal for UFS, replacing the ZFS steps with newfs and mounting the
> UFS filesystem forcedirectio (no point in using buffer cache memory for
> something that's already in memory).

Did you also set primarycache=none?
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
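For anyone following along, primarycache is a per-dataset ZFS property; disabling ARC caching for the test pool would look like this (the pool name is the one from the original post):

# Don't cache this pool's data or metadata in the ARC...
zfs set primarycache=none ram
# ...or cache metadata only.
zfs set primarycache=metadata ram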
Matt Cowger
2010-Mar-09 03:36 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 8, 2010, at 6:31 PM, Richard Elling wrote:
> Did you also set primarycache=none?
>  -- richard

Good suggestion - that actually made it significantly worse - down to less
than 5000 IOPS (or 5% of the performance of UFS)....

--
Edward Ned Harvey
2010-Mar-09 03:53 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
I don't have an answer to this question, but I can say I've seen a similar
surprising result. I ran iozone on various raid configurations of spindle
disks ... and on a ramdisk. I was surprised to see the ramdisk is only about
50% to 200% faster than the next best competitor in each category ... I don't
have any good explanation for that, but I didn't question it too hard. I
accepted the results for what they are ... the ramdisk performs surprisingly
poorly for some unknown reason.
Matt Cowger
2010-Mar-09 03:56 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 8, 2010, at 6:31 PM, Bill Sommerfeld wrote:
> if you have an actual need for an in-memory filesystem, will tmpfs fit
> the bill?
>
> - Bill

Very good point, Bill - just ran this test and started to get the numbers I
was expecting (1.3 GB/s throughput, 250K+ IOPS). If we do go this way, this
is an excellent option.
ольга крыжановская
2010-Mar-09 04:46 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
tmpfs lacks features like quota and NFSv4 ACL support. May not be the best
choice if such features are required.

Olga

On Tue, Mar 9, 2010 at 3:31 AM, Bill Sommerfeld <sommerfeld at sun.com> wrote:
> even when you disable checksums and compression through the zfs command,
> zfs will still compress and checksum metadata.
>
> if you have an actual need for an in-memory filesystem, will tmpfs fit the
> bill?
>
> - Bill
Ross Walker
2010-Mar-09 14:22 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 8, 2010, at 11:46 PM, ольга крыжановская <olga.kryzhanovska at gmail.com> wrote:
> tmpfs lacks features like quota and NFSv4 ACL support. May not be the
> best choice if such features are required.

True, but if the OP were looking for those features, they would more than
likely not be looking for an in-memory file system. This would be more for
something like temp databases in an RDBMS or a cache of some sort.

-Ross
Matt Cowger
2010-Mar-09 17:40 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
Ross is correct - advanced OS features are not required here - just the
ability to store a file - don't even need unix style permissions....

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Ross Walker
Sent: Tuesday, March 09, 2010 6:23 AM
To: ольга крыжановская
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

True, but if the OP were looking for those features, they would more than
likely not be looking for an in-memory file system. This would be more for
something like temp databases in an RDBMS or a cache of some sort.

-Ross
Richard Elling
2010-Mar-09 18:23 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 9, 2010, at 9:40 AM, Matt Cowger wrote:
> Ross is correct - advanced OS features are not required here - just the
> ability to store a file - don't even need unix style permissions....

KISS. Just use tmpfs, though you might also consider limiting its size.
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
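A size-capped tmpfs mount on Solaris would look roughly like this; the mount point and the 80g cap are illustrative, and older releases may want the limit expressed in megabytes (see mount_tmpfs(1M)). An equivalent line in /etc/vfstab makes it persistent across reboots.

# Mount a tmpfs capped at ~80 GB so it cannot consume all of memory + swap.
mkdir -p /ramcache
mount -F tmpfs -o size=80g swap /ramcache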
Roch Bourbonnais
2010-Mar-09 18:42 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS. The table would probably turn over if you go to
16K zfs records and 16K reads/writes from the application.

Next step for you is to figure out how many read/write IOPS you expect to
take in the real workloads and whether or not the filesystem portion will
represent a significant drain on CPU resources.

-r

On Mar 8, 2010, at 5:57 PM, Matt Cowger wrote:
> It looks like I've got something weird going with zfs performance on a
> ramdisk... ZFS is performing at not even a third of what UFS is doing.
>
> For all tests besides the seq write, UFS utterly destroys ZFS.
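Roch's suggestion - matching the ZFS recordsize to a 16K application I/O size - would look roughly like this, reusing the pool name from the original post. Note that recordsize only affects files written after the change, so the test file needs to be recreated.

# Switch to 16 KB ZFS records, then re-run iozone with a matching transfer size.
zfs set recordsize=16k ram
iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 16k -r 16k -s 5g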
Matt Cowger
2010-Mar-09 18:55 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
That's a very good point - in this particular case, there is no option to
change the blocksize for the application.

On 3/9/10 10:42 AM, "Roch Bourbonnais" <Roch.Bourbonnais at Sun.COM> wrote:
> I think this is highlighting that there is an extra CPU requirement to
> manage small blocks in ZFS. The table would probably turn over if you go
> to 16K zfs records and 16K reads/writes from the application.
>
> Next step for you is to figure out how many read/write IOPS you expect to
> take in the real workloads and whether or not the filesystem portion will
> represent a significant drain on CPU resources.
>
> -r
Ross Walker
2010-Mar-09 23:53 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais <Roch.Bourbonnais at Sun.COM> wrote:
> I think this is highlighting that there is an extra CPU requirement to
> manage small blocks in ZFS. The table would probably turn over if you go
> to 16K zfs records and 16K reads/writes from the application.

I think it highlights more the problem of ARC vs ramdisk, or specifically
ZFS on a ramdisk while the ARC is fighting with the ramdisk for memory. It
is a wonder it didn't deadlock.

If I were to put a ZFS file system on a ramdisk, I would limit the size of
the ramdisk and the ARC so that both, plus the kernel, fit nicely in memory
with room to spare for user apps.

-Ross
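Capping the ARC is done with the zfs_arc_max tunable in /etc/system (it takes effect after a reboot); the 4 GB figure below is only an example, matching one of the values tried later in the thread.

* /etc/system fragment (sketch): cap the ARC at 4 GB (value is in bytes).
set zfs:zfs_arc_max = 0x100000000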
ольга крыжановская
2010-Mar-09 23:57 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
Which IO library do you use? If you use stdio, you could use the libast
stdio implementation, which allows setting the block size via environment
variables.

Olga

On Tue, Mar 9, 2010 at 7:55 PM, Matt Cowger <mcowger at salesforce.com> wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
ольга крыжановская
2010-Mar-09 23:58 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
Could you retest it with mmap() used?

Olga

2010/3/9 Matt Cowger <mcowger at salesforce.com>:
> It can, but doesn't in the command line shown below.
>
> M
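If memory serves, iozone can be asked to use mmap()'d files with its -B option; treat that flag as an assumption and confirm it with iozone -h on the system before relying on it. The re-test would then look something like:

# Same workload as before, but with mmap()'d file I/O (-B), assuming the
# installed iozone build supports that option.
iozone -B -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g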
Matt Cowger
2010-Mar-10 01:05 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
This is a good point, and something that I tried. I limited the ARC to 1GB
and 4GB (both well within the memory footprint of the system even with the
ramdisk).....equally poor results....this doesn't feel like the ARC fighting
with locked memory pages.

--M

-----Original Message-----
From: Ross Walker [mailto:rswwalker at gmail.com]
Sent: Tuesday, March 09, 2010 3:53 PM
To: Roch Bourbonnais
Cc: Matt Cowger; zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

I think it highlights more the problem of ARC vs ramdisk, or specifically
ZFS on a ramdisk while the ARC is fighting with the ramdisk for memory. It
is a wonder it didn't deadlock.

If I were to put a ZFS file system on a ramdisk, I would limit the size of
the ramdisk and the ARC so that both, plus the kernel, fit nicely in memory
with room to spare for user apps.

-Ross
Kyle McDonald
2010-Apr-25 03:09 UTC
[zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)
On 3/9/2010 1:55 PM, Matt Cowger wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.

I have no way of guessing the effects it would have, but is there a reason
that the filesystem blocks can't be a multiple of the application block
size? I mean, 4 4kb app blocks to 1 16kb fs block sounds like it might be a
decent compromise to me. Decent enough to make it worth testing anyway.

-Kyle