I have a few old drives here that I thought might help me a little, though not as much as a nice SSD, for those uses. I'd like to speed up NFS writes, and there have been some mentions that even a decent HDD can do this, though not to the same level a good SSD will.

The 3 drives are older LVD SCSI Cheetah drives, ST318203LW. I have 2 controllers I could use; one appears to be a RAID controller with a memory module installed, an Adaptec AAA-131U2. The memory module comes up on Google as a 2MB EDO DIMM. Not sure that's worth anything to me. :)

The other controller is an Adaptec 29160. Looks to be a 64-bit PCI card, but the machine it came from is only 32-bit PCI, as is my current machine.

What say the pros here? I'm concerned that the max data rate is going to be somewhat low with them, but the seek time should be good as they are 10K RPM (I think). The only reason I thought to use one for L2ARC is for dedupe. It sounds like L2ARC helps a lot there. This is for a home server, so all I'm really looking to do is speed things up a bit while I save and look for a decent SSD option. However, if it's a waste of time, I'd rather find out before I install them.
--
This message posted from opensolaris.org
Travis Tabbal wrote:
> I have a few old drives here that I thought might help me a little, though not as much as a nice SSD, for those uses. I'd like to speed up NFS writes, and there have been some mentions that even a decent HDD can do this, though not to the same level a good SSD will.
>
> The 3 drives are older LVD SCSI Cheetah drives, ST318203LW. I have 2 controllers I could use; one appears to be a RAID controller with a memory module installed, an Adaptec AAA-131U2. The memory module comes up on Google as a 2MB EDO DIMM. Not sure that's worth anything to me. :)
>
> The other controller is an Adaptec 29160. Looks to be a 64-bit PCI card, but the machine it came from is only 32-bit PCI, as is my current machine.
>
> What say the pros here? I'm concerned that the max data rate is going to be somewhat low with them, but the seek time should be good as they are 10K RPM (I think). The only reason I thought to use one for L2ARC is for dedupe. It sounds like L2ARC helps a lot there. This is for a home server, so all I'm really looking to do is speed things up a bit while I save and look for a decent SSD option. However, if it's a waste of time, I'd rather find out before I install them.

I'd like to hear (or see tests of) how hard-drive-based ZIL/L2ARC can help RAIDZ performance. Examples would be large RAIDZ arrays such as:

  8+ drives in a single RAIDZ1
  16+ drives in a single RAIDZ2
  24+ drives in a single RAIDZ3

(None of these are a series of smaller RAIDZ arrays that are striped.)

From the writings I've seen, large non-striped RAIDZ arrays tend to have poor performance that is more or less limited to the I/O capacity of a single disk. The recommendations tend to suggest using smaller RAIDZ arrays and then striping them together, whereby the RAIDZ provides redundancy and the striping provides reasonable performance. The advantage of large RAIDZ arrays is that you can get better protection from drive failure (e.g. one 16-drive RAIDZ2 can lose any 2 drives, vs. two striped 8-drive RAIDZ1 arrays that can lose only one drive per array). A zpool-level sketch of both layouts follows below.

So what about using a few dedicated two- or three-way mirrored drives for ZIL and/or L2ARC, in combination with the large RAIDZ arrays? The mirrored ZIL/L2ARC would serve as a cache for the slower RAIDZ. One model for this configuration is the cloud-based ZFS test that was done here, which used local drives configured as ZIL and L2ARC to minimize the impact of cloud latency, with respectable results:

http://blogs.sun.com/jkshah/entry/zfs_with_cloud_storage_and

The performance gap between local mirrored disks used for ZIL/L2ARC and a large RAIDZ is not nearly as large as the gap that was addressed in the cloud-based ZFS test. Is the gap large enough to potentially benefit from HDD-based mirrored ZIL/L2ARC devices? Would SSD-based ZIL/L2ARC be necessary to see a worthwhile performance improvement?

If this theory works out in practice, useful RAIDZ array sizes may not be as limited as they have been to date via best-practices guidelines. Admins may then be able to choose larger, more strongly redundant RAIDZ arrays while still keeping most of the performance of smaller striped RAIDZ arrays by using mirrored ZIL/L2ARC disks or SSDs.

-hk
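To put the two layouts side by side, here is a rough sketch only, with placeholder pool and device names (tank, c1t0d0 and so on); the same shape scales up to the 16- and 24-drive cases:

  # one wide raidz2: survives any two drive failures, but random I/O
  # is roughly that of a single disk
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # two striped raidz1 vdevs from the same disks: better I/O,
  # but only one failure tolerated per vdev
  zpool create tank \
      raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # a mirrored slog and a cache device can be added to either layout
  zpool add tank log mirror c2t0d0 c2t1d0
  zpool add tank cache c2t2d0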
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Travis Tabbal
>
> I have a few old drives here that I thought might help me a little, though not as much as a nice SSD, for those uses. I'd like to speed up NFS writes, and there have been some mentions that even a decent HDD can do this, though not to the same level a good SSD will.

If your clients are mounting "async", don't bother. If the clients are mounting async, then all the writes are done asynchronously, fully accelerated, and never any data written to the ZIL log.

If you'd like to measure whether or not you have anything to gain, temporarily disable the ZIL on the server. (And remount your filesystem.) If performance doesn't improve, then you can't gain anything by using a dedicated ZIL device. If performance does improve, then you could expect to gain about half of the difference by using a really good SSD. Rough numbers. Very rough.

It's not advisable, in most cases, to leave the ZIL disabled. It's valuable after an ungraceful shutdown. So I'd advise only disabling the ZIL while you're testing for performance.
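For what it's worth, here is roughly how that test can be run on a build of this vintage. This is only a sketch: it assumes the old zil_disable tunable is still present (newer builds replace it with a per-dataset sync property), and tank/export is a placeholder for whatever filesystem you serve over NFS.

  # disable the ZIL on the live system (affects filesystems mounted afterwards)
  echo zil_disable/W0t1 | mdb -kw

  # remount the filesystem under test, then rerun the NFS write workload
  zfs umount tank/export
  zfs mount tank/export

  # re-enable the ZIL when done, and remount again
  echo zil_disable/W0t0 | mdb -kw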
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Travis Tabbal

Oh, one more thing. Your subject says "ZIL/L2ARC" and your message says "I want to speed up NFS writes."

ZIL (log) is used for writes.
L2ARC (cache) is used for reads.

I'd recommend looking at the ZFS Best Practices Guide.
> If your clients are mounting "async", don't bother. If the clients are
> mounting async, then all the writes are done asynchronously, fully
> accelerated, and never any data written to the ZIL log.

I've tried async; things run well until you get to the end of the job, then the process hangs until the write is complete. This was just with tar extracting to the NFS drive.
--
This message posted from opensolaris.org
> > From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Travis Tabbal
>
> Oh, one more thing. Your subject says "ZIL/L2ARC" and your message says "I want to speed up NFS writes."
>
> ZIL (log) is used for writes.
> L2ARC (cache) is used for reads.
>
> I'd recommend looking at the ZFS Best Practices Guide.

At the end of my OP I mentioned that I was interested in L2ARC for dedupe. It sounds like the DDT can get bigger than RAM and slow things to a crawl. Not that I expect a lot from using an HDD for that, but I thought it might help. I'd like to get a nice SSD or two for this stuff, but that's not in the budget right now.
--
This message posted from opensolaris.org
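For sizing the DDT before deciding on hardware: if the build's zdb supports it, it can report or estimate the table size. A rough sketch, with tank standing in for the actual pool name:

  # dedup table statistics for a pool that already has dedup enabled
  zdb -DD tank

  # simulate dedup on a pool that doesn't, to estimate the table size
  zdb -S tank

Multiplying the total number of DDT entries by the commonly cited figure of a few hundred bytes per in-core entry gives a rough idea of how much ARC (or L2ARC) the table will want.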
On Mon, Apr 26, 2010 at 8:01 AM, Travis Tabbal <travis at tabbal.net> wrote:
> At the end of my OP I mentioned that I was interested in L2ARC for dedupe. It sounds like the DDT can get bigger than RAM and slow things to a crawl. Not that I expect a lot from using an HDD for that, but I thought it might help. I'd like to get a nice SSD or two for this stuff, but that's not in the budget right now.

A large DDT will require a lot of random reads, which isn't an ideal use case for a spinning disk. Plus, 10k disks are loud and hot. You can get a 30-40GB SSD for about $100 these days.

It doesn't matter if a disk used for the L2ARC obeys cache flushing, etc. Regardless of whether the host is shut down cleanly or not, the L2ARC starts cold. It doesn't matter if the data is corrupted, because a failed checksum will cause the pool to go back to the data disks.

As far as using 10k disks for a slog, it depends on what kind of drives are in your pool and how it's laid out. If you have a wide raidz stripe on slow disks, just about anything will help. If you've got striped mirrors on fast disks, then it probably won't help much, especially for what sounds like a server with a small number of clients.

I've got an OCZ Vertex 30GB drive with a 1GB slice used for the slog and the rest used for the L2ARC, which for ~$100 has been a nice boost to NFS writes. (The pool commands for that layout are sketched below.)

-B

--
Brandon High : bhigh at freaks.com
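A setup like that is just two slices on the one SSD handed to the pool separately. A rough sketch only, with made-up pool and device names (tank, with c3t0d0s0 and c3t0d0s1 as the slog and cache slices):

  # small slice as a dedicated log device (slog)
  zpool add tank log c3t0d0s0

  # the rest of the SSD as an L2ARC cache device
  zpool add tank cache c3t0d0s1

  # watch how the log and cache devices are used under load
  zpool iostat -v tank 5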
> I've got an OCZ Vertex 30GB drive with a 1GB slice used for the slog
> and the rest used for the L2ARC, which for ~$100 has been a nice
> boost to NFS writes.

What about the Intel X25-V? I know it will likely be fine for L2ARC, but what about ZIL/slog?
--
This message posted from opensolaris.org
For the L2ARC you want iops, pure and simple. For this I think the Intel SSDs are still king.

The slog, however, has a gotcha: you want iops, but you also want something that doesn't say it's done writing until the write is safely nonvolatile. The Intel drives fail in this regard. So far I'm thinking the best bet will likely be one of the SandForce SF-1500 based drives with the supercap on it, something like the Vertex 2 Pro.

These are of course just my thoughts on the matter as I work towards designing a SQL storage backend. Your mileage may vary.
--
This message posted from opensolaris.org