Hi,

I have some space-allocation output which I can't explain. I hope someone can point me in the right direction.

The allocation of my "home" filesystem looks like this:

joost@onix$ zfs list -o space p0/home
NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
p0/home  31.0G  156G     86.7G   69.7G              0          0

This tells me that *86.7G* is used by *snapshots* of this filesystem. However, when I look at the space allocation of the snapshots, I don't see the 86.7G back!

joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
p0/home@s1   -      62.7M  -         -       -              -
p0/home@s2   -      53.1M  -         -       -              -
p0/home@s3   -      34.1M  -         -       -              -
p0/home@s4   -       277M  -         -       -              -
p0/home@s5   -      2.21G  -         -       -              -
p0/home@s6   -       175M  -         -       -              -
p0/home@s7   -      46.1M  -         -       -              -
p0/home@s8   -      47.6M  -         -       -              -
p0/home@s9   -      43.0M  -         -       -              -
p0/home@s10  -      64.1M  -         -       -              -
p0/home@s11  -       563M  -         -       -              -
p0/home@s12  -      76.6M  -         -       -              -

The sum of the USED column is only some 3.6G, so the question is: to what is the 86.7G of USEDSNAP allocated? Ghost snapshots?

This is with zpool version 22. This pool was used for a year or so on onnv-129. I recently upgraded the host to build 151a, but I didn't upgrade the pool yet.

Any pointers are appreciated!

Joost
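[The "some 3.6G" figure can be checked mechanically; a small sketch, with the per-snapshot sizes copied straight from the listing above:]

```python
# Sum the per-snapshot USED values from the zfs listing above.
UNITS = {"M": 1 / 1024, "G": 1.0}  # express everything in GiB

used = ["62.7M", "53.1M", "34.1M", "277M", "2.21G", "175M",
        "46.1M", "47.6M", "43.0M", "64.1M", "563M", "76.6M"]

def to_gib(size: str) -> float:
    """Convert a zfs-style size such as '62.7M' or '2.21G' to GiB."""
    return float(size[:-1]) * UNITS[size[-1]]

total = sum(to_gib(s) for s in used)
print(f"{total:.2f}G")  # about 3.6G -- far short of the 86.7G USEDSNAP
```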
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-
> bounces@opensolaris.org] On Behalf Of Joost Mulders
>
> This tells me that *86.7G* is used by *snapshots* of this filesystem.
> However, when I look at the space allocation of the snapshots, I don't
> see the 86.7G back!
>
> joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
> NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> p0/home@s1   -      62.7M  -         -       -              -
>
> The sum of the USED column is only some 3.6G, so the question is: to
> what is the 86.7G of USEDSNAP allocated? Ghost snapshots?

Actually, maybe.

Did you look for clones too, or just snapshots? And temporary clones? You might have more luck with

zdb -d p0
Thanks for the pointer. AFAIK there are no clones involved. The output of "zdb -d p0" is below; I found no differences between it and the output of the "zfs list" command.

root@onix# zdb -d p0 | egrep 'p0\/home' | sort
Dataset p0/home [ZPL], ID 33, cr_txg 432, 69.7G, 192681 objects
Dataset p0/home@s1 [ZPL], ID 243, cr_txg 98656, 50.3G, 126890 objects
Dataset p0/home@s10 [ZPL], ID 379, cr_txg 953857, 81.5G, 181690 objects
Dataset p0/home@s11 [ZPL], ID 168, cr_txg 2387300, 65.5G, 240164 objects
Dataset p0/home@s12 [ZPL], ID 172, cr_txg 2439246, 69.1G, 192176 objects
Dataset p0/home@s2 [ZPL], ID 245, cr_txg 104759, 50.4G, 127646 objects
Dataset p0/home@s3 [ZPL], ID 266, cr_txg 192029, 104G, 171386 objects
Dataset p0/home@s4 [ZPL], ID 268, cr_txg 194369, 104G, 171395 objects
Dataset p0/home@s5 [ZPL], ID 394, cr_txg 721173, 73.4G, 159843 objects
Dataset p0/home@s6 [ZPL], ID 434, cr_txg 777147, 75.3G, 169093 objects
Dataset p0/home@s7 [ZPL], ID 281, cr_txg 830390, 75.5G, 171919 objects
Dataset p0/home@s8 [ZPL], ID 285, cr_txg 833206, 75.6G, 172117 objects
Dataset p0/home@s9 [ZPL], ID 373, cr_txg 948755, 81.5G, 181397 objects

I'm new to this, and I'm trying to prevent loss of personal data :-) Could it be that the 86.7G of snapshot data is in a dead branch? Have you ever seen similar reports?

For completeness, here's the story: p0 is a mirror of two 320G 2.5" HDs. I sent/received all filesystems from p0 to p1. When this finished, I noticed that p1 used 86.7G less space than p0. The origin of this allocation seems to be in the home filesystem, but I can't find any reference to it.

Any pointers appreciated!

Joost

On 6-12-2010 17:54, Edward Ned Harvey wrote:
>> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-
>> bounces@opensolaris.org] On Behalf Of Joost Mulders
>>
>> This tells me that *86.7G* is used by *snapshots* of this filesystem.
>> However, when I look at the space allocation of the snapshots, I don't
>> see the 86.7G back!
>>
>> joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
>> NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
>> p0/home@s1   -      62.7M  -         -       -              -
>>
>> The sum of the USED column is only some 3.6G, so the question is: to
>> what is the 86.7G of USEDSNAP allocated? Ghost snapshots?
>
> Actually, maybe.
>
> Did you look for clones too, or just snapshots? And temporary clones? You
> might have more luck with
> zdb -d p0
I was told that this could be caused by:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6792701
"Removing large holey file does not free space"

However, attempts to verify the cause were fruitless, as zdb dumps core:

root@onix# zdb -ddd p0
...
partial [2473161,2473169) length 8
outage [0,18446744073709551615) length 18446744073709551615
    /dev/dsk/c9t1d0s0 [DTL-required]
Dataset p0/tmp [ZPL], ID 112, cr_txg 6724, 40.6M, 1387 objects

    ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0
                replay_seq 0, flags 0x0

Memory fault(coredump)
root@onix#

I transferred the contents of p0 to p1 via send/receive, and I can tell that the p1 pool does *not* have the ghost allocation; "zdb -ddd p1" does its thing *without* dumping core.

To summarize: there are/were bugs that cause(d) "ghost allocations", and zfs send/receive is a way to remove them.

Best regards,

Joost

On 6-12-2010 13:09, Joost Mulders wrote:
> Hi,
>
> I have some space-allocation output which I can't explain. I hope someone
> can point me in the right direction.
>
> The allocation of my "home" filesystem looks like this:
>
> joost@onix$ zfs list -o space p0/home
> NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> p0/home  31.0G  156G     86.7G   69.7G              0          0
>
> This tells me that *86.7G* is used by *snapshots* of this filesystem.
> However, when I look at the space allocation of the snapshots, I don't
> see the 86.7G back!
>
> joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
> NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> p0/home@s1   -      62.7M  -         -       -              -
> p0/home@s2   -      53.1M  -         -       -              -
> p0/home@s3   -      34.1M  -         -       -              -
> p0/home@s4   -       277M  -         -       -              -
> p0/home@s5   -      2.21G  -         -       -              -
> p0/home@s6   -       175M  -         -       -              -
> p0/home@s7   -      46.1M  -         -       -              -
> p0/home@s8   -      47.6M  -         -       -              -
> p0/home@s9   -      43.0M  -         -       -              -
> p0/home@s10  -      64.1M  -         -       -              -
> p0/home@s11  -       563M  -         -       -              -
> p0/home@s12  -      76.6M  -         -       -              -
>
> The sum of the USED column is only some 3.6G, so the question is: to
> what is the 86.7G of USEDSNAP allocated? Ghost snapshots?
>
> This is with zpool version 22. This pool was used for a year or so on
> onnv-129. I recently upgraded the host to build 151a, but I didn't
> upgrade the pool yet.
>
> Any pointers are appreciated!
>
> Joost
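[The bug synopsis above mentions a "large holey file", i.e. a sparse file whose logical size is much larger than the space actually allocated to it. For readers unfamiliar with the term, a minimal illustration; this is plain Python against a temporary file, nothing ZFS-specific:]

```python
import os
import tempfile

# Create a sparse ("holey") file: seek far past EOF and write one byte,
# leaving a large hole that most filesystems never allocate blocks for.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(100 * 1024 * 1024)   # 100 MiB hole
    f.write(b"x")
    path = f.name

st = os.stat(path)
print(st.st_size)               # logical size: 100 MiB + 1 byte
print(st.st_blocks * 512)       # physical allocation: typically a few KiB
os.remove(path)
```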
"usedsnap" is the amount of space consumed by all snapshots, i.e. the amount of space that would be recovered if all snapshots were deleted.

The space "used" by any one snapshot is the space that would be recovered if that snapshot were deleted, i.e. the amount of space that is unique to that snapshot. Any space in "usedsnap" that is shared by multiple snapshots will not show up in any snapshot's "used". Therefore, deleting a snapshot can increase the adjacent snapshots' "used" space.

So in general, "usedsnap" >= sum("used" by each snapshot). You can read more about the "used" property in the zfs(1M) manpage.

The bug mentioned below (6792701) is not related to this phenomenon; it manifests as a discrepancy between the filesystem's (or a snapshot's) "referenced" space and the amount of space accessible through POSIX interfaces (e.g. du(1)).

--matt

On Mon, Dec 6, 2010 at 4:09 AM, Joost Mulders <joostmnl@gmail.com> wrote:
> Hi,
>
> I have some space-allocation output which I can't explain. I hope someone
> can point me in the right direction.
>
> The allocation of my "home" filesystem looks like this:
>
> joost@onix$ zfs list -o space p0/home
> NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> p0/home  31.0G  156G     86.7G   69.7G              0          0
>
> This tells me that *86.7G* is used by *snapshots* of this filesystem.
> However, when I look at the space allocation of the snapshots, I don't
> see the 86.7G back!
>
> joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
> NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> p0/home@s1   -      62.7M  -         -       -              -
> p0/home@s2   -      53.1M  -         -       -              -
> p0/home@s3   -      34.1M  -         -       -              -
> p0/home@s4   -       277M  -         -       -              -
> p0/home@s5   -      2.21G  -         -       -              -
> p0/home@s6   -       175M  -         -       -              -
> p0/home@s7   -      46.1M  -         -       -              -
> p0/home@s8   -      47.6M  -         -       -              -
> p0/home@s9   -      43.0M  -         -       -              -
> p0/home@s10  -      64.1M  -         -       -              -
> p0/home@s11  -       563M  -         -       -              -
> p0/home@s12  -      76.6M  -         -       -              -
>
> The sum of the USED column is only some 3.6G, so the question is: to
> what is the 86.7G of USEDSNAP allocated? Ghost snapshots?
>
> This is with zpool version 22. This pool was used for a year or so on
> onnv-129. I recently upgraded the host to build 151a, but I didn't
> upgrade the pool yet.
>
> Any pointers are appreciated!
>
> Joost
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
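[Matt's rules above can be illustrated with a toy model; this is a sketch of the accounting idea, not actual ZFS code: treat the live filesystem and each snapshot as sets of block references, where a snapshot's "used" counts only blocks unique to it, while "usedsnap" counts every block reachable only through snapshots:]

```python
# Toy model of ZFS snapshot space accounting (illustration only).
live = {"a", "b"}                   # blocks the live filesystem still uses
snaps = {
    "s1": {"a", "x", "y"},          # x, y were since deleted from the live fs
    "s2": {"a", "x", "y", "z"},     # shares x, y with s1; z is unique to s2
}

def used(name):
    """Blocks freed by destroying just this snapshot (unique to it)."""
    others = live | set().union(*(b for n, b in snaps.items() if n != name))
    return snaps[name] - others

def usedsnap():
    """Blocks freed by destroying all snapshots."""
    return set().union(*snaps.values()) - live

per_snap = sum(len(used(n)) for n in snaps)
# usedsnap is 3 blocks (x, y, z), yet the per-snapshot sum is only 1 (z):
# the shared blocks x and y are charged to no individual snapshot.
print(len(usedsnap()), per_snap)
```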
Hi Matt,

That is a very helpful explanation, and it points to an area of possible improvement: right now there is no way to tell which snapshots to delete to get the space back. The only way is to delete a snapshot and then see whether the USED of another snapshot increased.

Anyway, thanks for the explanation!

Best regards,

Joost

On Dec 9, 2010, at 6:28 AM, Matthew Ahrens wrote:
> "usedsnap" is the amount of space consumed by all snapshots, i.e. the
> amount of space that would be recovered if all snapshots were deleted.
>
> The space "used" by any one snapshot is the space that would be
> recovered if that snapshot were deleted, i.e. the amount of space that
> is unique to that snapshot. Any space in "usedsnap" that is shared by
> multiple snapshots will not show up in any snapshot's "used".
> Therefore, deleting a snapshot can increase the adjacent snapshots'
> "used" space.
>
> So in general, "usedsnap" >= sum("used" by each snapshot). You can
> read more about the "used" property in the zfs(1M) manpage.
>
> The bug mentioned below (6792701) is not related to this phenomenon;
> it manifests as a discrepancy between the filesystem's (or a
> snapshot's) "referenced" space and the amount of space accessible
> through POSIX interfaces (e.g. du(1)).
>
> --matt
>
> On Mon, Dec 6, 2010 at 4:09 AM, Joost Mulders <joostmnl@gmail.com> wrote:
>> Hi,
>>
>> I have some space-allocation output which I can't explain. I hope someone
>> can point me in the right direction.
>>
>> The allocation of my "home" filesystem looks like this:
>>
>> joost@onix$ zfs list -o space p0/home
>> NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
>> p0/home  31.0G  156G     86.7G   69.7G              0          0
>>
>> This tells me that *86.7G* is used by *snapshots* of this filesystem.
>> However, when I look at the space allocation of the snapshots, I don't
>> see the 86.7G back!
>>
>> joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
>> NAME         AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
>> p0/home@s1   -      62.7M  -         -       -              -
>> p0/home@s2   -      53.1M  -         -       -              -
>> p0/home@s3   -      34.1M  -         -       -              -
>> p0/home@s4   -       277M  -         -       -              -
>> p0/home@s5   -      2.21G  -         -       -              -
>> p0/home@s6   -       175M  -         -       -              -
>> p0/home@s7   -      46.1M  -         -       -              -
>> p0/home@s8   -      47.6M  -         -       -              -
>> p0/home@s9   -      43.0M  -         -       -              -
>> p0/home@s10  -      64.1M  -         -       -              -
>> p0/home@s11  -       563M  -         -       -              -
>> p0/home@s12  -      76.6M  -         -       -              -
>>
>> The sum of the USED column is only some 3.6G, so the question is: to
>> what is the 86.7G of USEDSNAP allocated? Ghost snapshots?
>>
>> This is with zpool version 22. This pool was used for a year or so on
>> onnv-129. I recently upgraded the host to build 151a, but I didn't
>> upgrade the pool yet.
>>
>> Any pointers are appreciated!
>>
>> Joost
On Fri, Dec 10, 2010 at 8:49 AM, Joost Mulders <joostmnl@gmail.com> wrote:
> Right now there is no way to tell which snapshots to delete to get the
> space back. The only way is to delete a snapshot and then see whether
> the USED of another snapshot increased.

AFAIK NetApp has a similar problem with their display too, in that it only shows how much space will be freed by deleting an individual snapshot. There's no way to group a set of snapshots and determine how much space would be freed by deleting all of them, short of actually doing it.

-B

--
Brandon High : bhigh@freaks.com