Hi,

I am wondering if there is a guideline on how to configure ZFS on a server
with an Oracle database. We are experiencing some slowness on writes to the
ZFS filesystem: it takes about 530 ms to write 2 KB of data. We are running
Solaris 10 u5 (127127-11) and the back-end storage is a RAID-5 EMC EMX.
This is a small database with about 18 GB of storage allocated.

Are there any tunable parameters we can apply to ZFS to make it a little
faster for writes? Oracle is using an 8 KB block size; can we match the ZFS
record size to Oracle's block size without destroying the data?

$ zpool status zpraid0_e2
  pool: zpraid0_e2
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        zpraid0_e2                               ONLINE       0     0     0
          c3t60060480000190101941533030453434d0  ONLINE       0     0     0

errors: No known data errors

Thanks,
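P.S. What I had in mind for the block-size question was something along
these lines (the dataset name below is just an example for our layout, and
my understanding is that recordsize only applies to files written after the
change, so the existing datafiles would have to be copied or recreated to
pick it up):

$ zfs get recordsize zpraid0_e2/oradata
# default recordsize is 128K; set it to match Oracle's 8 KB db_block_size
$ zfs set recordsize=8k zpraid0_e2/oradata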
Vahid Moghaddasi wrote:
> Hi,
>
> I am wondering if there is a guideline on how to configure ZFS on a
> server with an Oracle database?

Start with the Best Practices Guide:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

> We are experiencing some slowness on writes to the ZFS filesystem:
> it takes about 530 ms to write 2 KB of data.

This seems unusual, unless the EMC is mismatched wrt how they may have
implemented cache flush. The issues around this are described in the Evil
Tuning Guide:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
 -- richard
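P.S. For reference, the workaround that section of the guide describes is a
system-wide setting in /etc/system that stops ZFS from sending cache-flush
requests to the array. It is only safe when every device backing every pool
has a non-volatile (battery-backed) write cache, which should be the case on
an EMC array, and it needs a reboot to take effect; check the guide against
your exact Solaris release before applying it:

* /etc/system: only if ALL pool devices have non-volatile write caches
set zfs:zfs_nocacheflush = 1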
On Mar 3, 2009, at 20:51, Richard Elling wrote:

> This seems unusual, unless the EMC is mismatched wrt how they may have
> implemented cache flush. The issues around this are described in the Evil
> Tuning Guide:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes

Under the 5/08 and snv_72 note, the following text appears:

> The sd and ssd drivers should properly handle the SYNC_NV bit, so no
> changes should be needed.

I'm assuming this relates to:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

So the cache-flushing scenario shouldn't be a problem with newer Solaris
releases on higher-end arrays (assuming they support SBC-2's SYNC_NV).
Thank you all for the replies. I will follow the recommendations and see if
there is any change in performance. I had seen the documents at those links,
but I needed to make sure that we could not do anything more to improve the
performance before going to the storage guys.

Thanks again,
Vahid.

On Tue, Mar 3, 2009 at 9:44 PM, David Magda <dmagda at ee.ryerson.ca> wrote:

> Under the 5/08 and snv_72 note, the following text appears:
>
>> The sd and ssd drivers should properly handle the SYNC_NV bit, so no
>> changes should be needed.
>
> I'm assuming this relates to:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690
>
> So the cache-flushing scenario shouldn't be a problem with newer Solaris
> releases on higher-end arrays (assuming they support SBC-2's SYNC_NV).

-- 
This e-mail address is not monitored so please do not send me anything
important here. Thanks.
Vahid Moghaddasi wrote:
> Thank you all for the replies. I will follow the recommendations and see
> if there is any change in performance. I had seen the documents at those
> links, but I needed to make sure that we could not do anything more to
> improve the performance before going to the storage guys.

You may not have much choice, actually. Recently, I was involved with a
large proof-of-concept where the "requirement" was hardware RAID-5.
After spending months proving it wouldn't scale, we finally convinced
the powers-that-be to make the critical LUNs mirrors. Afterwards, it scaled
rather nicely. Moral: you can spend a lot of effort trying to put lipstick
on a pig, and tune databases or ZFS to the max, but in the end, mirrors
will always kick butt over RAID-5.
 -- richard
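P.S. If the storage team can present two LUNs instead of one, ZFS can do the
mirroring itself. A rough sketch, with made-up pool, dataset, and device
names (substitute the actual LUNs they give you):

$ zpool create zporacle mirror <lun-1-device> <lun-2-device>
$ zfs create -o recordsize=8k zporacle/oradata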
On Wed, Mar 4, 2009 at 12:18 PM, Richard Elling <richard.elling at gmail.com> wrote:

> You may not have much choice, actually. Recently, I was involved with a
> large proof-of-concept where the "requirement" was hardware RAID-5.
> After spending months proving it wouldn't scale, we finally convinced
> the powers-that-be to make the critical LUNs mirrors. Afterwards, it scaled
> rather nicely. Moral: you can spend a lot of effort trying to put lipstick
> on a pig, and tune databases or ZFS to the max, but in the end, mirrors
> will always kick butt over RAID-5.
>  -- richard

We will try to have the storage guys allocate a mirror to this server
instead of a RAID-5 LUN to see if that helps.

Thanks,