Hello everyone,

I'm wondering if the following makes sense: to configure a system for high IOPS, I want a zpool of 15K RPM SAS drives. For high IOPS, I believe it is best to let ZFS stripe them, instead of doing a raidz1 across them. Therefore, I would like to mirror the drives for reliability.

Now, I'm wondering if I can get away with using a large-capacity 7200 RPM SATA drive as the mirror for multiple SAS drives. For example, say I had three SAS drives of 150 GB each. Could I take a 500 GB SATA drive, partition it into three slices, and use each slice as the mirror half for one SAS drive? I believe this is possible.

The problem is performance. What I want is for all reads to go to the SAS drives, so that the SATA drive only sees writes. I'm hoping that, due to the copy-on-write nature of ZFS, the writes will get bunched into sequential blocks, so write bandwidth will be good even on a SATA drive. But the reads must be kept off the SATA drive. Is there any way I can get ZFS to do that?

Thanks,
Monish
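For concreteness, the layout being described could be sketched like this. The device names are placeholders (not from the thread), and the SATA drive would first have to be sliced with format/fdisk:

```shell
# Three mirror vdevs, each pairing one SAS drive with one slice
# of the single SATA drive. Device names are hypothetical.
zpool create fastpool \
    mirror c1t0d0 c2t0d0s0 \
    mirror c1t1d0 c2t0d0s1 \
    mirror c1t2d0 c2t0d0s2
```

Note that ZFS offers no supported knob to steer all reads away from one side of a mirror, which is the crux of the question.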
Use the SAS drives as L2ARC for a pool on SATA disks. If your L2ARC is the full size of your pool, you won't see reads from the pool (once the cache is primed).

If you're purchasing all the gear from new, consider whether an SSD in this mode would be better than 15K SAS.
-- 
This message posted from opensolaris.org
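That arrangement inverts the original plan: the pool lives on the SATA disk, and the SAS drives become cache devices. A sketch, with hypothetical device names:

```shell
# Pool on the SATA disk; SAS drives added as L2ARC cache devices.
# Device names are placeholders.
zpool create tank c2t0d0
zpool add tank cache c1t0d0 c1t1d0 c1t2d0
```

Reads are then served from the cache devices once it warms up, while writes still land on the SATA pool disk.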
Monish Shah wrote:
> Hello everyone,
>
> I'm wondering if the following makes sense:
>
> To configure a system for high IOPS, I want to have a zpool of 15K RPM
> SAS drives. For high IOPS, I believe it is best to let ZFS stripe
> them, instead of doing a raidz1 across them. Therefore, I would like
> to mirror the drives for reliability.

ok, so far.

> Now, I'm wondering if I can get away with using a large capacity 7200
> RPM SATA drive as mirror for multiple SAS drives. For example, say I
> had 3 SAS drives of 150 GB each. Could I take a 500 GB SATA drive,
> partition it into 3 slices and use each one as a mirror for one SAS
> drive? I believe this is possible.

yes, it is.

> The problem is in performance. What I want is for all reads to go to
> the SAS drives so that the SATA drive will only see writes. I'm
> hoping that due to the copy-on-write nature of ZFS, the writes will
> get bunched into sequential blocks, so write bandwidth will be good,
> even on a SATA drive. But, the reads must be kept off the SATA drive.
> Is there any way I can get ZFS to do that?

What sort of performance do you need? Writes tend to be asynchronous (non-blocking) for many apps, unless you're running a database or NFS server, where synchronous writes are common. In the latter case, invest in an SSD for a separate log. Reads tend to get cached in RAM at several places in the data path, so read performance is much more difficult to predict. IMHO, today, systems which only use HDDs will not be considered high performance in any case.
-- richard
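The separate-log suggestion would look something like this (the SSD device name is a placeholder):

```shell
# Add an SSD as a separate intent log (slog) to absorb synchronous writes.
# c3t0d0 is a hypothetical SSD device.
zpool add tank log c3t0d0
```

The slog only helps synchronous writes (databases, NFS); asynchronous writes already batch into transaction groups without it.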
The SATA drive will be your bottleneck, and you will lose any speed advantage of the SAS drives, especially with three mirror halves sharing a single SATA disk: every transaction group commit must complete on both sides of each mirror, so writes proceed at the pace of the one SATA spindle seeking among three partitions. I am with Richard: figure out what performance you need, and build accordingly.
Hello,

Thanks to everyone who replied.

Dan, your suggestions (quoted below) are excellent, and yes, I do want to make this work with SSDs as well. However, I didn't tell you one thing: I want to compress the data on the drive. This would be particularly important if an SSD is used, as the cost per GB is high. This is why I wanted to put it in a zpool.

Before somebody points out that compression will increase the CPU utilization, I'd like to mention that we have hardware-accelerated gzip compression technology already working with ZFS, so the CPU will not be loaded.

I'm also hoping that write IOPS will improve with compression, because more writes can be combined into a single block of storage. I don't know enough about ZFS allocation policies to be sure, but we'll try to run some tests.

It looks like, for now, the mirror disks will also have to be SSDs. (Perhaps raidz1 will be OK instead.) Eventually, we will look into modifying ZFS to support the kind of asymmetric mirroring I mentioned in the original post. The other alternative is to modify ZFS to compress the L2ARC, but that sounds much more complicated to me. Any insights from ZFS developers would be appreciated.

Monish
----
Monish Shah
CEO, Indra Networks, Inc.
www.indranetworks.com

> Use the SAS drives as l2arc for a pool on sata disks. If your l2arc is
> the full size of your pool, you won't see reads from the pool (once the
> cache is primed).
>
> If you're purchasing all the gear from new, consider whether SSD in this
> mode would be better than 15k sas.
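For reference, enabling gzip compression on a dataset and checking the resulting ratio is straightforward (the pool name is a placeholder):

```shell
# Enable gzip compression on the pool's top-level dataset;
# child datasets inherit the setting.
zfs set compression=gzip tank

# Later, inspect the achieved compression ratio.
zfs get compressratio tank
```

Only data written after the property is set gets compressed; existing blocks stay as they were.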
Monish Shah wrote:
> Hello,
>
> Thanks to everyone who replied.
>
> Dan, your suggestions (quoted below) are excellent and yes, I do want
> to make this work with SSDs, as well. However, I didn't tell you one
> thing. I want to compress the data on the drive. This would be
> particularly important if an SSD is used, as the cost per GB is high.
> This is why I wanted to put it in a zpool.
>
> Before somebody points out that compression will increase the CPU
> utilization, I'd like to mention that we have hardware accelerated
> gzip compression technology already working with ZFS, so the CPU will
> not be loaded.
>
> I'm also hoping that write IOPS will improve with compression, because
> more writes can be combined into a single block of storage. I don't
> know enough about ZFS allocation policies to be sure, but we'll try to
> run some tests.

Please share what you find. It seems counterintuitive to me that compression would increase IOPS for small-block, random workloads. But real data is better than intuition :-)
-- richard

> It looks like, for now, the mirror disks will also have to be SSDs.
> (Perhaps raidz1 will be OK, instead.) Eventually, we will look into
> modifying ZFS to support the kind of asymmetric mirroring I mentioned
> in the original post. The other alternative is to modify ZFS to
> compress L2ARC, but that sounds much more complicated to me. Any
> insights from ZFS developers would be appreciated.
>
> Monish
> ----
> Monish Shah
> CEO, Indra Networks, Inc.
> www.indranetworks.com
>
>> Use the SAS drives as l2arc for a pool on sata disks. If your l2arc
>> is the full size of your pool, you won't see reads from the pool
>> (once the cache is primed).
>>
>> If you're purchasing all the gear from new, consider whether SSD in
>> this mode would be better than 15k sas.
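One way to collect that real data might be a small-block random-write run with fio, once with compression off and once with gzip. fio itself, the target path, and the job parameters are assumptions here, not something from the thread:

```shell
# Hypothetical fio job: 8 KB random writes against a file on the pool,
# to compare IOPS with compression=off vs. compression=gzip.
fio --name=randwrite \
    --directory=/tank/test \
    --rw=randwrite \
    --bs=8k \
    --size=1g \
    --ioengine=psync \
    --runtime=60 --time_based
```

Comparing the IOPS figure fio reports for the two compression settings would answer the question directly.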
Monish Shah wrote:
> The other alternative is to modify ZFS to compress
> L2ARC, but that sounds much more complicated to me. Any insights from
> ZFS developers would be appreciated.

Compressing the L2ARC data shouldn't be that hard; I had to do something very similar when adding encryption support to the L2ARC. Once the L2ARC becomes persistent across reboot, it should be possible to have a compressed L2ARC, since it should then be writing via the normal ZIO pipeline (zio_write) rather than using zio_write_phys.

-- Darren J Moffat