I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log device. To see how well it works, I ran bonnie++, but I never saw any I/Os on the log device (watching iostat -nxce). Pool status is good - no issues or errors. Any ideas?

jmh
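[For context, a minimal sketch of how a pool like this is typically put together and monitored; the device names below are hypothetical, not taken from the post:]

    # create the RAIDZ2 pool from five SATA disks (placeholder device names)
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

    # attach the 32GB SSD as a separate intent log (slog) device
    zpool add tank log c2t0d0

    # confirm the layout and health
    zpool status tank

    # watch per-device I/O while the benchmark runs; the slog gets its
    # own line in the output, so writes to it are easy to spot
    iostat -nxce 5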
On Tue, 30 Jun 2009, John Hoogerdijk wrote:

> I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
> device. To see how well it works, I ran bonnie++, but I never saw any
> I/Os on the log device (watching iostat -nxce). Pool status is good -
> no issues or errors. Any ideas?

Try using direct I/O (the -D flag) in bonnie++. You'll need at least version 1.03e.

Regards,
markm
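[For reference, a typical run against the pool might look roughly like this; the dataset path, file size and user are placeholders, and -D is only accepted by bonnie++ 1.03e and later:]

    # benchmark a directory on the pool with direct I/O requested (-D),
    # using a working set well beyond RAM so the page cache doesn't hide
    # the disks; run iostat -nxce in another window to watch the slog
    bonnie++ -d /tank/bench -s 16384 -u nobody -D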
> On Tue, 30 Jun 2009, John Hoogerdijk wrote:
>
>> I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
>> device. To see how well it works, I ran bonnie++, but I never saw any
>> I/Os on the log device (watching iostat -nxce). Pool status is good -
>> no issues or errors. Any ideas?
>
> Try using direct I/O (the -D flag) in bonnie++. You'll need at least
> version 1.03e.

So I guess there is some porting to do - there is no O_DIRECT in Solaris...

Has anyone already ported bonnie++ 1.03e?
On Wed, 1 Jul 2009, Mark J Musante wrote:

> On Tue, 30 Jun 2009, John Hoogerdijk wrote:
>
>> I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
>> device. To see how well it works, I ran bonnie++, but I never saw any
>> I/Os on the log device (watching iostat -nxce). Pool status is good -
>> no issues or errors. Any ideas?
>
> Try using direct I/O (the -D flag) in bonnie++. You'll need at least
> version 1.03e.

If this -D flag uses the Solaris directio() function, then it will do nothing for ZFS. It only works for UFS and NFS.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Mark J Musante wrote:

> On Tue, 30 Jun 2009, John Hoogerdijk wrote:
>
>> I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
>> device. To see how well it works, I ran bonnie++, but I never saw any
>> I/Os on the log device (watching iostat -nxce). Pool status is good -
>> no issues or errors. Any ideas?
>
> Try using direct I/O (the -D flag) in bonnie++. You'll need at least
> version 1.03e.

Or you could export the filesystem via NFS and run any file creation/write workload on an NFS client; that should generate a large amount of log activity thanks to the synchronous writes that the NFS server must issue to honour its obligations to the NFS client.

--
Jason.Ozolins at anu.edu.au    ANU Supercomputer Facility
                              Leonard Huxley Bldg 56, Mills Road
Ph:  +61 2 6125 5449          Australian National University
Fax: +61 2 6125 8199          Canberra, ACT, 0200, Australia
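[A rough sketch of that approach, assuming a dataset called tank/bench on the server and a Solaris NFS client; all names here are placeholders:]

    # on the server: share the dataset over NFS
    zfs set sharenfs=on tank/bench

    # on the client: mount it and generate lots of small file creates,
    # which force synchronous commits on the server and should show up
    # as writes on the slog in iostat
    mount -F nfs server:/tank/bench /mnt
    cd /mnt && tar xf /var/tmp/some-source-tree.tar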
John Hoogerdijk wrote:

> So I guess there is some porting to do - there is no O_DIRECT in Solaris...
>
> Has anyone already ported bonnie++ 1.03e?

For your purposes, couldn't you replace O_DIRECT with O_SYNC as a hack? If you're trying to benchmark the log device, the important thing is to generate synchronous writes, and the zero-copy aspect of O_DIRECT is less important, no?

--
Jason.Ozolins at anu.edu.au    ANU Supercomputer Facility
                              Leonard Huxley Bldg 56, Mills Road
Ph:  +61 2 6125 5449          Australian National University
Fax: +61 2 6125 8199          Canberra, ACT, 0200, Australia
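[A quick-and-dirty way to try that substitution, assuming the O_DIRECT references in the 1.03e sources are plain open() flags; the exact file names vary, hence the broad grep:]

    # swap O_DIRECT for O_SYNC throughout the source tree and rebuild;
    # this is a hack to force synchronous writes, not a proper port
    cd bonnie++-1.03e
    perl -pi -e 's/O_DIRECT/O_SYNC/g' `grep -l O_DIRECT *.cpp *.h`
    ./configure && make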
Jason Ozolins wrote:

> John Hoogerdijk wrote:
>> So I guess there is some porting to do - there is no O_DIRECT in Solaris...
>> Has anyone already ported bonnie++ 1.03e?
>
> For your purposes, couldn't you replace O_DIRECT with O_SYNC as a
> hack? If you're trying to benchmark the log device, the important
> thing is to generate synchronous writes, and the zero-copy aspect of
> O_DIRECT is less important, no?

Yes, this is the right thing to do; you want to test synchronous writes.
-- richard
> Or you could export the filesystem via NFS and run any file
> creation/write workload on an NFS client; that should generate a
> large amount of log activity thanks to the synchronous writes that
> the NFS server must issue to honour its obligations to the NFS
> client.

NFS works, but I'll need a faster network...

I also compiled bonnie++ 1.03e with O_SYNC instead of O_DIRECT - there must be more to it, as there is still no activity on the SSD.

Just out of curiosity, why don't asynchronous writes also use the SSD? Wouldn't that help to reduce memory pressure? It's not difficult to stall applications while doing large whacks of I/O, like copying large ISO images, and perhaps the SSD could help to relieve that pressure. I'm testing on an 8GB box with a RAIDZ2 pool consisting of 5 SATA disks, and although I'm getting pretty good throughput, it does tend to take over the box.

jmh
I've also suggested this in the past, but I think the end result was that it was pointless:

If you have sync writes, the client does not get a reply until the data is on disk, so an SSD drive makes a huge difference.

If you have async writes, the client gets a reply as soon as the server has the data, before it gets to disk, so the disk speed makes no difference to response time.
> I've also suggested this in the past, but I think the end result was
> that it was pointless:
>
> If you have sync writes, the client does not get a reply until the
> data is on disk, so an SSD drive makes a huge difference.
>
> If you have async writes, the client gets a reply as soon as the
> server has the data, before it gets to disk, so the disk speed makes
> no difference to response time.

Agreed - but if the client stalls because of memory pressure anyway, perhaps there is some benefit in using the SSD to relieve it. For the most part this likely only affects corner cases where heavy I/O creates the pressure, but there seem to be enough anecdotal reports of application stalls to warrant another look. Maybe combining ARC limits with the SSD could help to smooth out I/O impacts and reduce application stalling.

jmh
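[For what it's worth, on this vintage of (Open)Solaris the ARC is usually capped with a tunable in /etc/system; the 4GB figure below is just an example for an 8GB box, and it only takes effect after a reboot:]

    # cap the ARC at roughly 4GB (value in bytes) so large streaming
    # copies leave more memory for applications
    echo "set zfs:zfs_arc_max=0x100000000" >> /etc/system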
True, but the ZIL is designed to hold only a small amount of data anyway, so I'm not sure the cost of the ZIL device would be less than that of the equivalent RAM for the sizes we're talking about. There may be a few cases that would benefit, but I don't think there are enough that Sun would put the effort into making the change.