Hi,

I have been doing some basic performance tests, and I am seeing a big hit when I run UFS over a zvol instead of using ZFS directly. Any hints or explanations are very welcome. Here's the scenario. The machine has 30G RAM and two IDE disks attached. The disks have two fdisk partitions (c4d0p2, c3d0p2) that are mirrored and form a zpool. When using filebench with 20G files writing directly on the ZFS filesystem, I get the following results:

RandomWrite-8k: 0.8M/s
SingleStreamWriteDirect1m: 50M/s
MultiStreamWrite1m: 51M/s
MultiStreamWriteDirect1m: 50M/s

Pretty consistent and lovely. The 50M/s rate sounds reasonable, while the random 0.8M/s seems a bit too low? All in all, things look OK to me here.

The second step is to create a 100G zvol, format it with UFS, then bench that under the same conditions. Note that this zvol lives on the exact same zpool used previously. I get the following:

RandomWrite-8k: 0.9M/s
SingleStreamWriteDirect1m: 5.8M/s (??)
MultiStreamWrite1m: 33M/s
MultiStreamWriteDirect1m: 11M/s

Obviously, there's a major hit. Can someone please shed some light on why this is happening? If more info is required, I'd be happy to test some more. This is all running on the osol 2008.11 release.

Note: I know ZFS auto-disables disk caches when running on partitions (is that slices, or fdisk partitions?!). Could this be causing what I'm seeing?

Thanks for the help
Regards
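For anyone wanting to reproduce numbers like these, a minimal filebench session might look as follows. This is only a sketch: the install path, pool/filesystem names, and parameter values are assumptions, not taken from the original post.

```shell
# Hypothetical sketch: running a stock filebench workload interactively.
# The package path and workload parameters below are assumptions; adjust
# them to your install and to the 20G file size used in the tests above.
cd /usr/benchmarks/filebench
./filebench
# filebench> load randomwrite
# filebench> set $dir=/tank/testfs     # directory on the fs under test
# filebench> set $filesize=20g
# filebench> set $iosize=8k
# filebench> run 60                    # run for 60 seconds, then report
```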
On Mon, 15 Dec 2008, Ahmed Kamal wrote:

> RandomWrite-8k: 0.9M/s
> SingleStreamWriteDirect1m: 5.8M/s (??)
> MultiStreamWrite1m: 33M/s
> MultiStreamWriteDirect1m: 11M/s
>
> Obviously, there's a major hit. Can someone please shed some light as to
> why this is happening? If more info is required, I'd be happy to test some
> more ... This is all running on osol 2008.11 release.

What blocksize did you specify when creating the zvol? Perhaps UFS will perform best if the zvol blocksize is similar to the UFS blocksize. For example, try testing with a zvol blocksize of 8k.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
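As a concrete sketch of checking and applying that suggestion: volblocksize is fixed at creation time, so matching it to UFS means creating a new volume. The pool and volume names here are made up, not from the thread.

```shell
# Check the block size of the existing zvol; it cannot be changed
# after creation, so matching it to UFS requires a new volume.
zfs get volblocksize tank/ufsvol

# Create a 100G zvol with an 8K block size to match the UFS block
# size, then lay down UFS and mount it for benchmarking.
zfs create -V 100g -o volblocksize=8k tank/ufsvol8k
newfs /dev/zvol/rdsk/tank/ufsvol8k
mount /dev/zvol/dsk/tank/ufsvol8k /mnt/ufstest
```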
Well, I checked and the volblocksize is already 8K. Any other suggestions on how to begin to debug this issue?

On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> What blocksize did you specify when creating the zvol? Perhaps UFS will
> perform best if the zvol blocksize is similar to the UFS blocksize. For
> example, try testing with a zvol blocksize of 8k.
>
> Bob
On Dec 15, 2008, at 01:13, Ahmed Kamal wrote:

> The second step is to create a 100G zvol, format it with UFS, then bench
> that under the same conditions. Note that this zvol lives on the exact
> same zpool used previously. I get the following:
>
> RandomWrite-8k: 0.9M/s
> SingleStreamWriteDirect1m: 5.8M/s (??)
> MultiStreamWrite1m: 33M/s
> MultiStreamWriteDirect1m: 11M/s

UFS/DIO is not great at allocating files: it allocates one 8K page at a time. That might well be the cause of the bad direct I/O results. The multi-stream write number might reflect software overhead from stacking the two layers.

For the random-write case, if the writes are not page aligned, then they straddle two pages, which will be read (before the write). So we wait for a disk I/O before every 8K write. That is normal performance here. Aligning the writes on an 8K boundary and setting the ZFS recordsize would help a lot here.

> Note: I know ZFS autodisables disk caches when running on partitions
> (is that slices, or fdisk partitions?!) Could this be causing what
> I'm seeing?
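The read-modify-write point is easy to see with a bit of arithmetic: an 8K write that starts on an 8K boundary touches exactly one page, while the same write shifted off alignment straddles two, and each partially written page must be read before it can be merged and rewritten. A small illustration, not specific to ZFS or UFS:

```shell
# How many 8K pages does a write at a given offset and length touch?
blocks_touched() {
    # $1 = byte offset of the write, $2 = length; page size fixed at 8K
    echo $(( ($1 + $2 - 1) / 8192 - $1 / 8192 + 1 ))
}

blocks_touched 0 8192     # aligned 8K write: 1 page, no read needed
blocks_touched 512 8192   # unaligned: 2 pages, both read before writing
```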
Nope. ZFS may at times enable disk caches, but it never disables them. If you want to run UFS on the same drives (not on a zvol), be sure to disable the write caches by hand after having destroyed the pool.

-r

> Thanks for the help
> Regards
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
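For reference, the by-hand write-cache disabling mentioned above can be done from format's expert mode. The exact menus vary with the disk and driver, so treat this as a sketch rather than a guaranteed recipe:

```shell
# Interactive sketch: disabling a disk's write cache via format(1M).
# Expert mode (-e) exposes the cache submenu; availability depends on
# the driver, and IDE disks may behave differently from SCSI/SATA.
format -e
# format> disk         (select the disk, e.g. c4d0)
# format> cache
# cache> write_cache
# write_cache> disable
```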
Ahmed Kamal writes:

> The second step is to create a 100G zvol, format it with UFS, then bench
> that under the same conditions. Note that this zvol lives on the exact
> same zpool used previously. I get the following:
>
> RandomWrite-8k: 0.9M/s
> SingleStreamWriteDirect1m: 5.8M/s (??)
> MultiStreamWrite1m: 33M/s
> MultiStreamWriteDirect1m: 11M/s

The straight zvol case might have unfairly benefited from

    6770534 - zvols do not observe O_SYNC semantic
    http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6770534

Committed to be fixed in the next ONNV build (106). UFS over a zvol would not benefit, since the strategy entry points to zvol are not impacted by the bug.

-r