Christiaan Willemsen
2008-Oct-02 16:21 UTC
[zfs-discuss] Terrible performance when setting zfs_arc_max snv_98
Hi there,

I just got a new Adaptec RAID 51645 controller in because the old one (a different model) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used in a hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a ZFS filesystem on this pool with a block size of 8K.

This server has 64 GB of memory and will be running PostgreSQL, so we need to cut down ARC memory usage. But before doing that, I tested ZFS performance using iometer (it was a bit tricky getting it to compile, but it's running).

So far so good. The figures look very promising, with staggering random read and write numbers! There is just one problem: every few seconds the disk LEDs stop working for a few seconds, except for one disk at a time. When this cycle finishes, everything looks normal again. This seems to be the flushing of the NVRAM cache, and should be solved by disabling the flush. So far so good...

But I also need the memory for PostgreSQL, so I added:

  set zfs:zfs_arc_max=8589934592

to /etc/system and rebooted. Then I redid my test, with terrible results: sequential read is about 120 MB/sec. A single disk should be able to handle that; 14 disks should do more than a GB/sec, and in my previous benchmark without the arc_max setting they actually reached those figures (even when not reading from the ARC cache). The random figures are not much better either.

So something is clearly wrong here... Can anyone comment?

-- 
This message posted from opensolaris.org
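[Editor's note: a minimal sketch of the setup described above, as it might look on snv_98. The zfs_arc_max value (8 GiB) is taken from the post; everything else is an assumption — the pool name, device names, and recordsize=8k line are hypothetical, and zfs_nocacheflush is shown only as the tunable commonly used for the cache-flush behaviour mentioned, not something the poster confirmed setting.]

  # /etc/system additions (take effect after a reboot)
  set zfs:zfs_arc_max=8589934592      * cap the ARC at 8 GiB (value from the post)
  set zfs:zfs_nocacheflush=1          * assumed: stop ZFS issuing cache flushes to the NVRAM-backed controller

  # Hypothetical layout matching "14 disks as striped mirrors" (7 two-way mirror vdevs)
  zpool create tank mirror c1t2d0  c1t3d0  mirror c1t4d0  c1t5d0 \
                    mirror c1t6d0  c1t7d0  mirror c1t8d0  c1t9d0 \
                    mirror c1t10d0 c1t11d0 mirror c1t12d0 c1t13d0 \
                    mirror c1t14d0 c1t15d0
  zfs create -o recordsize=8k tank/pgdata   # 8K records to match PostgreSQL's page size

  # After the reboot, verify the ARC cap actually took hold
  kstat -p zfs:0:arcstats:c_max
  kstat -p zfs:0:arcstats:size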
Roch Bourbonnais
2008-Oct-20 00:30 UTC
[zfs-discuss] Terrible performance when setting zfs_arc_max snv_98
On 2 Oct 08, at 09:21, Christiaan Willemsen wrote:

> [original message quoted in full; trimmed]

This looks very unusual. Does it pass the hysteresis test? Set zfs_arc_max and reboot: low performance. Undo the setting and reboot: high performance.

zpool status could help, as could zpool iostat 1. The read tests are threaded, right?

-r
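[Editor's note: a sketch of the hysteresis test and observation commands Roch suggests, written out as shell steps. zpool status and zpool iostat 1 come from the reply; the grep, the /etc/system comment-out step, the pool name "tank", and the kstat check are assumptions about how one might carry the test out.]

  # Pass 1: with the ARC cap in place
  grep zfs_arc_max /etc/system          # confirm the line is present
  # reboot, rerun the iometer sequential-read test, note the throughput

  # Pass 2: with the ARC cap removed
  # comment out the line in /etc/system ("*" starts a comment there), e.g.:
  #   * set zfs:zfs_arc_max=8589934592
  # reboot, rerun the same test, and compare the two results

  # While a test run is going, watch the pool
  zpool status tank                     # any degraded or resilvering vdevs?
  zpool iostat -v tank 1                # per-vdev bandwidth, sampled every second
  kstat -p zfs:0:arcstats:size          # assumed check: current ARC size vs. the cap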