I have two storages, both on snv133. Both filled with 1TB drives.
1) stripe over two raidz1 vdevs, 7 disks in each. In total, the available size is (7-1) x 1TB x 2 = 12TB
2) zfs pool over HW RAID, also 12TB.

Both storages keep the same data with minor differences. The first pool keeps 24 hourly snapshots + 7 daily snapshots. The second one (backup) keeps only daily snapshots, but for a longer period (2 weeks for now). But they report strangely different sizes, which I believe can't be explained by differences in snapshots.

1)
# zpool list export
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
export  12.6T  3.80T  8.82T    30%  1.00x  ONLINE  -

# zfs list export
NAME     USED  AVAIL  REFER  MOUNTPOINT
export  3.24T  7.35T  40.9K  /export

2)
# zpool list export
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
export  12.6T  3.19T  9.44T    25%  1.00x  ONLINE  -

# zfs list export
NAME     USED  AVAIL  REFER  MOUNTPOINT
export  3.19T  9.24T    25K  /export

As we see, both pools have the same size according to "zpool list". For the second storage, the size reported by "zpool list" and the sum of USED and AVAIL in "zfs list" are in agreement. But for the first one, 2TB is missing somehow: the sum of USED and AVAIL is only 10.6TB.

What also makes me wonder a bit is that I would expect more space to be used on the backup pool (more daily snapshots). The "zfs list" numbers could be explained if the hourly snapshots take more space than the 7 extra daily snapshots on the backup storage (the difference in USED is 50GB, which is still pretty big, taking into account that the backup storage also holds an extra 10 gig of backup of rpool from the primary storage). But there is no way for that explanation to be valid for the difference in ALLOC reported by "zpool list" (3.80T vs 3.19T, about 600GB): 600GB is much more than any possible difference coming from storing different snapshots, because our guys just don't produce that much data daily. I also tried to look at how much space is referenced by the hourly snapshots - no way to be even close to 600GB.

What's wrong there? My main concern, though, is the difference between the zpool size and the sum of USED+AVAIL for zfs on the primary storage. 2TB is 2TB!
-- 
This message posted from opensolaris.org
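(A side note on measuring snapshot space, as attempted above: this is a minimal sketch, assuming the pool name from the listings. "zfs get usedbysnapshots" reports the total space that would be freed if all snapshots were destroyed, whereas a per-snapshot listing charges each snapshot only for blocks unique to it; blocks shared by several snapshots are charged to none of them individually, so per-snapshot figures can sum to far less than the real total held by snapshots.)

# zfs get usedbysnapshots export
# zfs list -r -t snapshot -o name,used,referenced export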
Just to make it a bit more clear, this is the first pool:

        NAME         STATE     READ WRITE CKSUM
        export       ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c5t0d0   ONLINE       0     0     0
            c5t1d0   ONLINE       0     0     0
            c5t2d0   ONLINE       0     0     0
            c5t3d0   ONLINE       0     0     0
            c5t4d0   ONLINE       0     0     0
            c5t5d0   ONLINE       0     0     0
            c5t6d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c5t7d0   ONLINE       0     0     0
            c5t8d0   ONLINE       0     0     0
            c5t9d0   ONLINE       0     0     0
            c5t10d0  ONLINE       0     0     0
            c5t11d0  ONLINE       0     0     0
            c5t12d0  ONLINE       0     0     0
            c5t13d0  ONLINE       0     0     0
        spares
          c5t14d0    AVAIL

and this is the second pool:

        NAME       STATE     READ WRITE CKSUM
        export     ONLINE       0     0     0
          c4t0d1   ONLINE       0     0     0

The first pool is made of 1TB drives.
-- 
This message posted from opensolaris.org
On Mar 25, 2010, at 7:25 PM, antst wrote:

> I have two storages, both on snv133. Both filled with 1TB drives.
> 1) stripe over two raidz1 vdevs, 7 disks in each. In total, the available size is (7-1) x 1TB x 2 = 12TB
> 2) zfs pool over HW RAID, also 12TB.
>
> Both storages keep the same data with minor differences. The first pool keeps 24 hourly snapshots + 7 daily snapshots. The second one (backup) keeps only daily snapshots, but for a longer period (2 weeks for now).

Good idea :-)

> But they report strangely different sizes, which I believe can't be explained by differences in snapshots.
>
> 1)
> # zpool list export
> NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> export  12.6T  3.80T  8.82T    30%  1.00x  ONLINE  -
>
> # zfs list export
> NAME     USED  AVAIL  REFER  MOUNTPOINT
> export  3.24T  7.35T  40.9K  /export
>
> 2)
> # zpool list export
> NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> export  12.6T  3.19T  9.44T    25%  1.00x  ONLINE  -
>
> # zfs list export
> NAME     USED  AVAIL  REFER  MOUNTPOINT
> export  3.19T  9.24T    25K  /export
>
> As we see, both pools have the same size according to "zpool list".

Correct.

> For the second storage, the size reported by "zpool list" and the sum of USED and AVAIL in "zfs list" are in agreement.

Correct.

> But for the first one, 2TB is missing somehow: the sum of USED and AVAIL is only 10.6TB.

Correct. To understand this, please see the ZFS FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HWhydoesntthespacethatisreportedbythezpoollistcommandandthezfslistcommandmatch

[richard pauses to look in awe at the aforementioned URL...]
 -- richard

> [...]

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
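(The short version of that FAQ entry: for raidz pools, "zpool list" counts raw space including parity, while "zfs list" shows the space usable by datasets, with parity already deducted. A rough check against the numbers above, ignoring metadata overhead:)

    raw size ("zpool list" SIZE):       12.6T   (all 14 disks, parity included)
    parity in a 7-disk raidz1:          1 disk in 7, so usable ~= 6/7 of raw
    expected usable: 12.6T x 6/7  ~=    10.8T
    observed ("zfs list" USED+AVAIL):   3.24T + 7.35T = 10.6T

(The remaining ~0.2T is roughly the pool metadata plus the small slice ZFS reserves internally. On the second pool the HW RAID controller hides its redundancy before ZFS ever sees the LUN, which is why the two commands agree there.)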
On Mar 26, 2010, at 9:26 PM, Richard Elling wrote:

> On Mar 25, 2010, at 7:25 PM, antst wrote:
>
>> I have two storages, both on snv133. Both filled with 1TB drives.
>> 1) stripe over two raidz1 vdevs, 7 disks in each. In total, the available size is (7-1) x 1TB x 2 = 12TB
>> 2) zfs pool over HW RAID, also 12TB.
>>
>> Both storages keep the same data with minor differences. The first pool keeps 24 hourly snapshots + 7 daily snapshots. The second one (backup) keeps only daily snapshots, but for a longer period (2 weeks for now).
>
> Good idea :-)

Probably I'm lucky, but we have never had a real storage problem that would require restoring from backups. I have found, though, that users regularly want to recover one of their files in the state it had a couple of months ago. Thus, de facto, the primary role of our backup storage is to keep daily deltas (originally done with rsync) for months :) Clearly, ZFS snapshots are a much more elegant, fast and comfortable solution for this task :)

>> But for the first one, 2TB is missing somehow: the sum of USED and AVAIL is only 10.6TB.
>
> Correct. To understand this, please see the ZFS FAQ:
> http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HWhydoesntthespacethatisreportedbythezpoollistcommandandthezfslistcommandmatch

Yep. I had already figured out myself that I mixed up marketing and real terabytes again. My secondary storage has 12 real terabytes, while the primary is made of 14 marketing terabytes, which is slightly more than 12.6 real ones :) Then it all became clear immediately.

Anton.
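(For completeness, the conversion behind that realization: drive vendors count in powers of ten, while zpool/zfs report in powers of two, so)

    1 marketing TB = 10^12 bytes ~= 0.909 real (binary) TB
    14 x 1TB drives ~= 14 x 0.909 ~= 12.7 real TB

(which, minus a little for on-disk labels and rounding, is the 12.6T SIZE that "zpool list" reports for the raidz pool.)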