I had previously undertaken a benchmark that pits "out of box" performance of UFS via SVM, VxFS and ZFS, but was waylaid due to some outstanding availability issues in ZFS. These have been taken care of, and I am once again undertaking this challenge on behalf of my customer. The idea behind this benchmark is to show:

a. How ZFS might displace the current commercial volume and file system management applications being used.
b. The learning curve of moving from current volume management products to ZFS.
c. Performance differences across the different volume management products.

VDBench is the test bed of choice, as it has been accepted by the customer as a telling and accurate indicator of performance. The last time I attempted this test it was suggested that VDBench is not appropriate for testing ZFS. I cannot see that being a problem: VDBench is a tool - if it highlights performance problems, then I would think it is a very effective tool, since it lets us better fix those deficiencies.

Now, to the heart of my problem!

The test hardware is a T2000 connected to a 12-disk SE3510 (presenting as JBOD) through a Brocade switch, and I am using Solaris 10 11/06. For Veritas, I am using Storage Foundation Suite 5.0. The systems were jumpstarted to the same configuration before testing each volume management software to ensure there were no artifacts remaining from any previous test.

I present my vdbench definition below for your information:

sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
wd=DWR,sd=FS,rdpct=100,seekpct=80
wd=ETL,sd=FS,rdpct=0,seekpct=80
wd=OLT,sd=FS,rdpct=70,seekpct=80
rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

As you can see, it is fairly straightforward, and I take the average of the three runs in each of the ETL, OLT and DWR workloads. As an aside, I am also performing this test for various file system block sizes, as applicable.

I then ran this workload against a RAID-5 LUN created and mounted in each of the different file system types. Please note that one of the test criteria is that the associated volume management software create the RAID-5 LUN, not the disk subsystem.

1. UFS via SVM
# metainit d20 -r d1 ... d8
# newfs /dev/md/dsk/d20
# mount /dev/md/dsk/d20 /pool

2. ZFS
# zfs create pool raidz d1 ... d8

3. VxFS - Veritas SF5.0
# vxdisk init SUN35100_0 ... SUN35100_7
# vxdg init testdg SUN35100_0 ...
# vxassist -g testdg make pool 418283m layout=raid5

Now to my problem - Performance! Given the test as defined above, VxFS absolutely blows the doors off of both UFS and ZFS during write operations.
For example, during a single test on an 8k file system block, I have the following average IO rates:

          ETL         OLTP        DWR
UFS       390.00      1298.44     23173.60
VxFS      15323.10    27329.04    22889.91
ZFS       2122.23     7299.36     22940.63

If you look at these numbers percentage-wise, with VxFS set to 100%, then for ETL UFS runs at 2.5% of its speed and ZFS at 13.8%; for OLTP, UFS is at 4.8% and ZFS at 26.7%. However, in DWR, where there are 100% reads and no writing, performance is similar, with UFS at 101.2% and ZFS at 100.2% of the speed of VxFS.

Given these performance problems, quite obviously VxFS quite rightly deserves to be the file system of choice, even with a cost premium. If anyone has any insight into why I am seeing, consistently, these types of very disappointing numbers, I would very much appreciate your comments. The numbers are very disturbing, as they indicate that write performance has issues. Please take into account that this benchmark is performed on non-tuned file systems, specifically at the customer's request, as this is likely the way they would be deployed in their production environments.

Maybe I should be configuring my workload differently for VDBench - if so, does anyone have any ideas on this?

Unfortunately, I have weeks' worth of test data to back up these numbers and would enjoy the opportunity to discuss these results in detail, to discover whether my methodology has problems or whether it is the file system.

Thanks for your time.

Tony.Galway at sun.com
416.801.6779

You can always tell who the Newfoundlanders are in Heaven. They're the ones who want to go home.
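For reference, the relative percentages quoted above follow directly from the table and can be reproduced with a quick awk calculation over the reported averages (a sketch; the figures are simply the averages listed in the table, with VxFS as the 100% baseline):

awk 'BEGIN {
  # averages from the table above, in ETL, OLTP, DWR order
  printf "UFS  %5.1f%%  %5.1f%%  %5.1f%%\n",  100*390.00/15323.10,  100*1298.44/27329.04, 100*23173.60/22889.91
  printf "ZFS  %5.1f%%  %5.1f%%  %5.1f%%\n", 100*2122.23/15323.10, 100*7299.36/27329.04, 100*22940.63/22889.91
}'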
Did you measure CPU utilization by any chance during the tests? It's a T2000, and the CPU cores are quite slow on this box, hence they might be a bottleneck. Just a guess.

On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
> I had previously undertaken a benchmark that pits "out of box"
> performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
> outstanding availability issues in ZFS. [...]
-- Erast
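If CPU utilization was not captured, it can be collected alongside the vdbench runs with the standard Solaris observability tools. A minimal sketch (the 30-second interval matches the vdbench reporting interval above; the log file names are only examples):

mpstat 30 > /var/tmp/mpstat-vxfs-8k.log &      # per-CPU utilization every 30 seconds
iostat -xnz 30 > /var/tmp/iostat-vxfs-8k.log & # extended per-device stats, active devices only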
On April 16, 2007 1:10:41 PM -0400 Tony Galway <tony.galway at sun.com> wrote:
> I had previously undertaken a benchmark that pits "out of box" performance
...
> The test hardware is a T2000 connected to a 12 disk SE3510 (presenting as
...
> Now to my problem - Performance! Given the test as defined above, VxFS
> absolutely blows the doors off of both UFS and ZFS during write
> operations.

"Out of the box" performance of zfs on T2000 hardware might suffer.

<http://blogs.sun.com/realneel/entry/zfs_and_databases> is the only link I could find, but there is another article somewhere about tuning for the T2000, related to PCI on the T2000, i.e. it is T2000-specific.

-frank
On 4/16/07, Frank Cusack <fcusack at fcusack.com> wrote:
> but there is another article somewhere
> about tuning for t2000, related to PCI on the t2000, ie it is
> t2000-specific.

This one?
http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Rayson
On April 16, 2007 10:51:41 AM -0700 Frank Cusack <fcusack at fcusack.com> wrote:
> "out of the box" performance of zfs on T2000 hardware might suffer.
>
> <http://blogs.sun.com/realneel/entry/zfs_and_databases> is the
> only link I could find, but there is another article somewhere
> about tuning for t2000, related to PCI on the t2000, ie it is
> t2000-specific.

Found it (as always, of course, right AFTER posting):
<http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for>
The volume is 7+1. I have created the volume using both the default (DRL) as well as 'nolog' to turn logging off, with similar performance in both cases.

Henk looked over my data and noticed that the Veritas test seems to be almost entirely using the file system cache. On his advice, I will retest with a much larger file to defeat this cache (I do not want to modify my mount options), and I will also retest ZFS with the same file size. If that still shows similar performance, then the question will probably have more to do with how ZFS handles file system caching.

-Tony

-----Original Message-----
From: Richard.Elling at Sun.COM [mailto:Richard.Elling at Sun.COM]
Sent: April 16, 2007 2:16 PM
To: Tony.Galway at Sun.COM
Subject: Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

Is the VxVM volume 8-wide? It is not clear from your creation commands.
 -- richard

Tony Galway wrote:
> I had previously undertaken a benchmark that pits "out of box"
> performance of UFS via SVM, VxFS and ZFS [...]
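For the larger-file retest described above, the only change needed in the vdbench definition from the original post is the sd size, and the 'nolog' variant of the RAID-5 volume can be requested directly from vxassist. A sketch reusing the names from the original post (the 100g size matches the later follow-up; the raid5,nolog layout attribute is an assumption worth verifying against the Storage Foundation 5.0 documentation):

sd=FS,lun=/pool/TESTFILE,size=100g,threads=8

# vxassist -g testdg make pool 418283m layout=raid5,nolog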
Why are you using software-based RAID 5/RAIDZ for the tests? I didn't think this was a common setup in cases where file system performance was the primary consideration.
Tony Galway wrote:
> I had previously undertaken a benchmark that pits "out of box"
> performance of UFS via SVM, VxFS and ZFS [...]
>
> VDBench is the test bed of choice as this has been accepted by the
> customer as a telling and accurate indicator of performance. [...]

First, VDBench is a Sun-internal and partner-only tool, so you might not get much response on this list.
Second, VDBench is great for testing raw block I/O devices. I think a tool that does file system testing will get you better data.
filebench, for example.

On 4/17/07, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> First, VDBench is a Sun-internal and partner-only tool, so you might not
> get much response on this list.
> Second, VDBench is great for testing raw block I/O devices. I think a
> tool that does file system testing will get you better data.
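For what it's worth, once the filebench interpreter is started, a minimal session might look roughly like the following; the oltp personality name, the $dir variable and the run length are assumptions to check against whichever filebench build is installed, and /pool stands in for the file system under test:

filebench> load oltp
filebench> set $dir=/pool
filebench> run 600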
> Second, VDBench is great for testing raw block i/o devices.
> I think a tool that does file system testing will get you
> better data.

OTOH, shouldn't a tool that measures raw device performance be reasonable to reflect Oracle performance when configured for raw devices? I don't know the current "best practice" for Oracle, but a lot of DBAs still use raw devices instead of files for their table spaces....

Anton
Anton B. Rang wrote:
>> Second, VDBench is great for testing raw block i/o devices.
>> I think a tool that does file system testing will get you
>> better data.
>
> OTOH, shouldn't a tool that measures raw device performance be
> reasonable to reflect Oracle performance when configured for raw
> devices? I don't know the current "best practice" for Oracle, but a lot
> of DBAs still use raw devices instead of files for their table spaces....

Sure, once you characterize what the performance profile of the Oracle DB is (read% vs. write%, I/O size, etc.). VDBench is great for testing the raw device with whatever workload you want to test. Most of the Oracle folks I talk to mention they use file systems these days ... but that isn't scientific by any stretch.
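As an illustration, such a characterized mix maps directly onto the same vdbench syntax used in the original parameter file; the raw device path, thread count, read percentage and transfer size below are placeholders rather than measured values:

sd=ORA,lun=/dev/rdsk/c2t0d0s0,threads=16
wd=ORAMIX,sd=ORA,rdpct=70,seekpct=100,xfersize=8k
rd=ORARUN,wd=ORAMIX,iorate=max,elapsed=1800,interval=30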
> # zfs create pool raidz d1 ... d8

Surely you didn't create the zfs pool on top of SVM metadevices? If so, that's not useful; the zfs pool should be on top of raw devices.

Also, because VxFS is extent based (if I understand correctly), not unlike how MVS manages disk space I might add, _it ought_ to blow the doors off of everything for sequential reads, and probably sequential writes too, depending on the write size. OTOH, if a lot of files are created and deleted, it needs to be defragmented (although I think it can do that automatically; but there's still at least some overhead while a defrag is running).

Finally, don't forget complexity. VxVM+VxFS is quite capable, but it doesn't always recover from problems as gracefully as one might hope, and it can be a real bear to get untangled sometimes (not to mention moderately tedious just to set up). SVM, although not as capable as VxVM, is much easier IMO. And zfs on top of raw devices is about as easy as it gets. That may not matter _now_, when whoever sets these up is still around; but when their replacement has to troubleshoot or rebuild, it might help to have something that's as easy as possible.
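For reference, building the raidz pool directly on the raw disks is a zpool operation rather than a zfs one; a sketch, with hypothetical device names standing in for the eight SE3510 disks (the pool named "pool" mounts at /pool by default):

# zpool create pool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
# zfs list pool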
To give you fine people an update, it seems that the reason for the skewed results shown earlier is Veritas's ability to take advantage of all the free memory available on my server. My test system has 32G of RAM and my test data file is 10G, so Veritas was able to cache the entire data file. On the suggestion that I use a larger data file, I have increased the file to 100G and have re-run my Veritas tests with an 8k file system block and 8k request block size.

In doing this, I ran into a different issue, which I have now resolved - my server became fully unresponsive during testing! Basically, Veritas allowed itself to consume all the memory of my server, all 32G, and when it had used all of its memory, it looked for more and hung the server due to resource contention. To alleviate this problem, I tuned the Veritas file system using the write_throttle tunable. I first set this to 99.5% of my RAM, then 50%, and I am now running at about 256MB (32651 pages). The reason I decreased the value to the current page count is that, while the server would be responsive, my test volume would not be: it took 35 minutes for the paging to stop when I had set write_throttle to 50% of RAM, and only after that paging completed did the test resume. At the current value, the test will continue for a period, then flush, continue, and so on.

Given my current configuration, I now see an IO rate of 756 versus my prior 27,329!!!

I am continuing my testing, and will re-test UFS and ZFS with the larger file size as well to properly discover the differences. In subsequent testing, I will also test with fully unbuffered Veritas and UFS volumes to compare with ZFS, which may be a more reasonable test due to ZFS's copy-on-write architecture. I will send out the full results when they are compiled.
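In case it helps anyone reproduce the throttling described above, VxFS tunables of this kind are normally applied per mount point with vxtunefs; the mount point and the chosen page count come from this thread:

# vxtunefs -o write_throttle=32651 /pool

To make the setting persist across remounts, an /etc/vx/tunefstab entry along these lines should work (the device path is an assumption based on the disk group and volume names used earlier in the thread):

/dev/vx/dsk/testdg/pool write_throttle=32651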