After spending some time reading up on this whole deal with SSDs with "caches" and how they are prone to data loss during power failures, I need some clarifications...

When you guys say "write cache", do you really just mean the on-board cache (for both reads AND writes)? Or is there a separate cache dedicated to writes?

Also, a lot of disks out there have caches (without capacitors/batteries), including the SAS drives from Sun. Aren't they prone to the same exact situation? Just wanted to get this straight, since SSDs + disabling write caches seems to be a big deal now, but this problem exists even with regular disks without SSDs. The only difference with an SSD is that it will wear down much quicker than advertised with the write cache disabled.

If we tell ZFS to use a slice rather than the whole disk, it'll disable the use of the disk's cache. This makes me think that ZFS has the option to enable/disable the use of the disk cache (I think!). So, shouldn't there be a ZFS property that we can use to enable/disable the use of the disk cache? Also, how does ZFS (when using slices) tell the disk not to use the cache?

Thanks
--
This message posted from opensolaris.org
Also... There is talk about using those cheap SSDs for rpool. Isn't rpool also prone to a lot of writes, specifically when /tmp is on an SSD? What's the real reason for making those cheap SSDs an rpool rather than an L2ARC?

Basically, is everyone saying that SSDs without NVRAM/capacitors/batteries are best for L2ARC and rpools (where there aren't many writes going on)?
--
This message posted from opensolaris.org
On Thu, 7 Jan 2010, Anil wrote:

> After spending some time reading up on this whole deal with SSDs with
> "caches" and how they are prone to data loss during power failures, I
> need some clarifications...
>
> When you guys say "write cache", do you really just mean the on-board
> cache (for both reads AND writes)? Or is there a separate cache
> dedicated to writes?

These details are device-dependent. Since the amount of data written back to flash is often larger than (or not perfectly aligned with) a requested write, it is normal for existing flash content to need to be read, updated, and then written. These updates could use a different buffer than the one used to buffer write requests from the host. Regardless, it is important that the updates are written properly to the underlying flash, particularly since completely unassociated data may be re-written in the process.

> Also, a lot of disks out there have caches (without
> capacitors/batteries), including the SAS drives from Sun. Aren't they
> prone to the same exact situation? Just wanted to get this straight,
> since SSDs + disabling write caches seems to be a big deal now, but
> this problem exists even with regular disks without SSDs.

The problem only exists if the device fails to commit all unwritten data when a cache flush is requested. There may be a number of write requests in between cache flush requests. For ZFS, the cache flush requests are needed in order to ensure that the writes corresponding to a transaction group have been written. If the cache flush is ignored and the power subsequently fails, then we have many devices which lack updates for the most recent committed transaction group, and the whole pool may be corrupted.

> The only difference with an SSD is that it will wear down much quicker
> than advertised with the write cache disabled.

This may be true.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
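(For what it's worth, there is no per-pool ZFS property for this, but on OpenSolaris there is a system-wide tunable that stops ZFS from issuing cache flush requests at all. A sketch of the /etc/system fragment, assuming the OpenSolaris-era tunable name; only safe when *every* device in the pool has a non-volatile (battery/capacitor-backed) cache:

```
* /etc/system fragment: stop ZFS from sending cache-flush requests.
* DANGEROUS unless all pool devices have NVRAM-protected write caches;
* ignoring flushes on a volatile cache risks losing the most recent
* committed transaction group on power failure, as described above.
set zfs:zfs_nocacheflush = 1
```

A reboot is needed for /etc/system changes to take effect.)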
On Thu, 2010-01-07 at 11:07 -0800, Anil wrote:

> There is talk about using those cheap SSDs for rpool. Isn't rpool
> also prone to a lot of writes, specifically when /tmp is on an SSD?

Huh? By default, Solaris uses tmpfs for /tmp, /var/run, and /etc/svc/volatile; writes to those filesystems won't hit the SSD unless the system is short on physical memory.

- Bill
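(You can confirm this on a given box; a sketch using standard Solaris commands, where output formats may vary by release:

```shell
# Check the filesystem type backing /tmp -- tmpfs, not zfs on the rpool
df -n /tmp

# The default vfstab entry that makes /tmp swap-backed:
grep '/tmp' /etc/vfstab
```

If /tmp shows up as tmpfs, its writes live in RAM/swap, not on the root pool's SSD.)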
I *am* talking about situations where physical RAM is used up. So definitely the SSD could be touched quite a bit when used as an rpool, for pages in/out.
--
This message posted from opensolaris.org
On Jan 7, 2010, at 12:02 PM, Anil wrote:

> I *am* talking about situations where physical RAM is used up. So
> definitely the SSD could be touched quite a bit when used as an rpool,
> for pages in/out.

In the cases where the rpool does not serve user data (e.g. home directories and databases are not in the rpool) you will find that there is actually very little I/O traffic to the rpool. Go ahead and measure it. Even during boot there is only modest traffic.
-- richard
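(A minimal sketch of how to do that measurement with the standard zpool tooling, assuming the root pool is named rpool:

```shell
# Pool-level I/O: one sample every 5 seconds, 12 samples (~1 minute).
# Watch the write ops/bandwidth columns while the system is under load.
zpool iostat rpool 5 12

# Per-device breakdown, useful if rpool is mirrored:
zpool iostat -v rpool 5 12
```

If the write columns stay near zero outside of swapping, the "rpool sees few writes" claim holds for that workload.)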