Henrik Johansson
2008-Nov-10 22:55 UTC
[zfs-discuss] Lost space in empty pool (no snapshots)
Hello,
I have a snv101 machine with a three-disk raidz pool that shows an
allocation of about 1TB for no obvious reason: no snapshots, no files,
nothing. I tried to run zdb on the pool to see if I got any useful
info, but it has been working for over two hours without any further
output.
I know when the allocation occurred: I issued a mkfile 1024G command
in the background, but changed my mind and killed the process. After
that, 912G was missing (I don't remember whether I actually removed the
test file or what happened). If I copy a file to the /tank filesystem,
it uses even more space, but that space is reclaimed after I remove
the file.
I could recreate the pool since it is empty, but I created it to test
the system in the first place, so I would like to know what's going on.
I have tried to export and import the pool, but it stays the same.
Any ideas?
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 912G 1.77T 912G /tank
# ls -alb /tank
total 7
drwxr-xr-x 2 root root 2 Nov 10 22:51 .
drwxr-xr-x 24 root root 26 Nov 10 08:23 ..
# du -hs /tank
2K /tank
# zfs list -t snapshot
no datasets available
# zpool status tank
pool: tank
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
c1t2d0 ONLINE 0 0 0
c1t4d0 ONLINE 0 0 0
errors: No known data errors
# zpool list tank
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 4.06T 1.34T 2.73T 32% ONLINE -
The output from zdb so far: after two hours there has been no further
output, though zdb is still consuming CPU time and the disks are being
accessed:
# zdb tank
version=13
name='tank'
state=0
txg=1703
pool_guid=15862877351892785549
hostid=13281026
hostname='tank'
vdev_tree
type='root'
id=0
guid=15862877351892785549
children[0]
type='raidz'
id=0
guid=11705146785403105303
nparity=1
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4500865941504
is_log=0
children[0]
type='disk'
id=0
guid=16850214711683290971
path='/dev/dsk/c1t1d0s0'
devid='id1,sd@f00caa702490af9ee0008c7080009/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@1,0:a'
whole_disk=1
DTL=42
children[1]
type='disk'
id=1
guid=8819352398702414737
path='/dev/dsk/c1t2d0s0'
devid='id1,sd@f00caa702490af9ee000a09c2000d/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@2,0:a'
whole_disk=1
DTL=41
children[2]
type='disk'
id=2
guid=17659718247984334809
path='/dev/dsk/c1t4d0s0'
devid='id1,sd@f00caa702490af9ee000c9e8e0011/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@4,0:a'
whole_disk=1
DTL=40
Uberblock
magic = 0000000000bab10c
version = 13
txg = 2138
guid_sum = 15557077274537276521
timestamp = 1226353932 UTC = Mon Nov 10 22:52:12 2008
Dataset mos [META], ID 0, cr_txg 4, 1.45M, 73 objects
Dataset tank [ZPL], ID 16, cr_txg 1, 912G, 5 objects
# zfs get all tank
NAME PROPERTY VALUE SOURCE
tank type filesystem -
tank creation Fri Nov 7 1:57 2008 -
tank used 912G -
tank available 1.77T -
tank referenced 912G -
tank compressratio 1.00x -
tank mounted yes -
tank quota none default
tank reservation none default
tank recordsize 128K default
tank mountpoint /tank default
tank sharenfs off default
tank checksum on default
tank compression off default
tank atime on default
tank devices on default
tank exec on default
tank setuid on default
tank readonly off default
tank zoned off default
tank snapdir hidden default
tank aclmode groupmask default
tank aclinherit restricted default
tank canmount on default
tank shareiscsi off default
tank xattr on default
tank copies 1 default
tank version 3 -
tank utf8only off -
tank normalization none -
tank casesensitivity sensitive -
tank vscan off default
tank nbmand off default
tank sharesmb off default
tank refquota none default
tank refreservation none default
tank primarycache all default
tank secondarycache all default
tank usedbysnapshots 0 -
tank usedbydataset 912G -
tank usedbychildren 1.45M -
tank usedbyrefreservation 0 -
# zpool get all tank
NAME PROPERTY VALUE SOURCE
tank size 4.06T -
tank used 1.34T -
tank available 2.73T -
tank capacity 32% -
tank altroot - default
tank health ONLINE -
tank guid 15862877351892785549 -
tank version 13 default
tank bootfs - default
tank delegation on default
tank autoreplace off default
tank cachefile - default
tank failmode wait default
tank listsnapshots off default
Regards
Henrik Johansson
http://sparcv9.blogspot.com
Victor Latushkin
2008-Nov-11 00:56 UTC
[zfs-discuss] Lost space in empty pool (no snapshots)
Henrik Johansson wrote:

> I have a snv101 machine with a three-disk raidz pool that shows an
> allocation of about 1TB for no obvious reason: no snapshots, no
> files, nothing. [...]
> Any ideas?

You can try to increase zdb verbosity by adding some -v switches.

Also try dumping all the objects with 'zdb -dddd tank' (add even more
'd' for extra verbosity).

cheers,
victor
Henrik Johansson
2008-Nov-11 01:22 UTC
[zfs-discuss] Lost space in empty pool (no snapshots)
On Nov 11, 2008, at 1:56 AM, Victor Latushkin wrote:

> You can try to increase zdb verbosity by adding some -v switches.
> Also try dumping all the objects with 'zdb -dddd tank' (add even
> more 'd' for extra verbosity).

Ah, that did provide some more output. I can see the reserved space is
indeed meant for the file I created earlier:

Dataset tank [ZPL], ID 16, cr_txg 1, 912G, 5 objects

    Object  lvl   iblk   dblk  lsize  asize  type
         0    7    16K    16K    16K  20.0K  DMU dnode
         1    1    16K    512    512  1.50K  ZFS master node
         2    1    16K    512    512  1.50K  ZFS delete queue
         3    1    16K    512    512  1.50K  ZFS directory
         6    5    16K   128K     1T   912G  ZFS plain file
<cut>
    Object  lvl   iblk   dblk  lsize  asize  type
         6    5    16K   128K     1T   912G  ZFS plain file
                                  264  bonus  ZFS znode
        path    ???<object#6>
        uid     0
        gid     0
        atime   Sun Nov  9 20:12:30 2008
        mtime   Sun Nov  9 21:50:10 2008
        ctime   Sun Nov  9 21:50:10 2008
        crtime  Sun Nov  9 20:12:30 2008
        gen     69
        mode    100600
        size    1099511627776
        parent  3
        links   0
        xattr   0
        rdev    0x0000000000000000

[deferred free] [L0 SPA space map] 1000L/200P DVA[0]=<0:70ce8a5400:400>
DVA[1]=<0:1f800421c00:400> DVA[2]=<0:36800067c00:400> fletcher4 lzjb LE
contiguous birth=2259 fill=0 cksum=0:0:0:0
[deferred free] [L0 SPA space map] 1000L/400P DVA[0]=<0:70ce8a6000:800>
DVA[1]=<0:1f800422800:800> DVA[2]=<0:36800061800:800> fletcher4 lzjb LE
contiguous birth=2259 fill=0 cksum=0:0:0:0
[deferred free] [L0 SPA space map] 1000L/200P DVA[0]=<0:70ce8a8c00:400>
DVA[1]=<0:1f800423c00:400> DVA[2]=<0:36800069c00:400> fletcher4 lzjb LE
contiguous birth=2259 fill=0 cksum=0:0:0:0
[deferred free] [L0 DMU dnode] 4000L/800P DVA[0]=<0:70ce8a8000:c00>
DVA[1]=<0:1f800423000:c00> DVA[2]=<0:36800069000:c00> fletcher4 lzjb LE
contiguous birth=2259 fill=0 cksum=0:0:0:0
[deferred free] [L0 DMU dnode] 4000L/a00P DVA[0]=<0:70ce8a7000:1000>
DVA[1]=<0:1f800295000:1000> DVA[2]=<0:36800068000:1000> fletcher4 lzjb
LE contiguous birth=2259 fill=0 cksum=0:0:0:0

objset 0 object 0 offset 0x0 [L0 DMU objset] 400L/200P
DVA[0]=<0:70ce8ad800:400> DVA[1]=<0:1f800429000:400>
DVA[2]=<0:3680006e800:400> fletcher4 lzjb LE contiguous birth=2260
fill=74 cksum=1309351a7b:687cd8ec06d:12b694ebbc4e8:253a3515eb9248
objset 0 object 0 offset 0x0 [L0 DMU dnode] 4000L/c00P
DVA[0]=<0:70ce8ac400:1400> DVA[1]=<0:1f800427c00:1400>
DVA[2]=<0:3680006d400:1400> fletcher4 lzjb LE contiguous birth=2260
fill=27 cksum=bbcf0aa9db:13ea5e4dc8e7d:1425e68263d46ff:f14c2dae18c61e93
<cut>
objset 16 object 6 offset 0x12f73c0000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c749c0000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
objset 16 object 6 offset 0x12f73e0000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c749f0000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
objset 16 object 6 offset 0x12f7400000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c74a20000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
objset 16 object 6 offset 0x12f7420000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c74a50000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
objset 16 object 6 offset 0x12f7440000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c74a80000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
objset 16 object 6 offset 0x12f7460000 [L0 ZFS plain file] 20000L/20000P
DVA[0]=<0:c74ab0000:30000> fletcher2 uncompressed LE contiguous
birth=164 fill=1 cksum=0:0:0:0
<continue for more than 100MB of output>

But why has this happened? Is it a known issue?

Regards
Henrik Johansson
http://sparcv9.blogspot.com
The "deferred free" indicates that these blocks are supposed to be
freed at a future time. A quick glance at the code would seem to
indicate that this is supposed to happen when the next transaction
group is pushed. Apparently it's not happening on your system ...
presumably a bug.
--
This message posted from opensolaris.org
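The accounting described above can be sketched as a toy model (a
conceptual illustration only, not ZFS code; `ToyPool` and all names in
it are invented for this sketch): a removed file's blocks first land on
a deferred-free list, and the pool's used figure only shrinks once a
later transaction-group sync drains that list. If that sync step never
processes the list, the space stays accounted as used, which matches
the symptom in this thread.

```python
# Toy model of deferred-free space accounting (hypothetical names,
# not ZFS code). Units are arbitrary "GB" for illustration.
class ToyPool:
    def __init__(self):
        self.used = 0        # space currently accounted as allocated
        self.deferred = []   # sizes of blocks waiting to be freed

    def allocate(self, n):
        self.used += n

    def free(self, n):
        # A free is not applied immediately; it is deferred until a
        # later transaction-group sync.
        self.deferred.append(n)

    def sync_txg(self):
        # On txg sync the deferred list is drained and space reclaimed.
        while self.deferred:
            self.used -= self.deferred.pop()

pool = ToyPool()
pool.allocate(912)   # interrupted mkfile left 912 "GB" allocated
pool.free(912)       # file removed: the free is deferred, not applied
print(pool.used)     # still 912 until a txg sync drains the list
pool.sync_txg()
print(pool.used)     # 0 once the deferred frees are processed
```

If the `sync_txg` step is skipped, `used` stays at 912 indefinitely,
mirroring the stuck 912G reported by zfs list.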
Henrik Johansson
2008-Nov-16 00:30 UTC
[zfs-discuss] Lost space in empty pool (no snapshots)
I have done some more tests. It seems that if I create a large file
with mkfile and interrupt the creation, the space that was allocated
is still occupied after I remove the file.
I'm going to file this as a bug if no one has anything to add.
First I create a new pool; on that pool I create a file and interrupt
its creation. After removing that file, the space is free again:
# uname -a
SunOS tank 5.11 snv_101 i86pc i386 i86pc
# zpool create tank raidz c1t1d0 c1t2d0 c1t4d0
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 85.9K 2.66T 24.0K /tank
# mkfile 10G /tank/testfile01
^C# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 4.73G 2.66T 4.73G /tank
# rm /tank/testfile01 && sync
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 85.9K 2.66T 24.0K /tank
Now, if I do the same again, but with a very large file:
# mkfile 750G /tank/testfile02
^C# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 11.3G 2.65T 11.3G /tank
# rm /tank/testfile02 && sync
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 12.2G 2.65T 12.2G /tank
# zpool export tank
# zpool import tank
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 12.2G 2.65T 12.2G /tank
# zpool scrub tank
# zpool status tank
pool: tank
state: ONLINE
scrub: scrub completed after 0h1m with 0 errors on Sun Nov 16 01:17:54 2008
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
c1t2d0 ONLINE 0 0 0
c1t4d0 ONLINE 0 0 0
errors: No known data errors
Some zdb output:
# zdb -dddd tank |more
Dataset mos [META], ID 0, cr_txg 4, 89.9K, 30 objects, rootbp [L0 DMU
objset] 400L/200P DVA[0]=<0:c800026800:400> DVA[1]=<0:19000026800:400>
DVA[2]=<0:26800:400> fletcher4 lzjb LE contiguous birth=43 fill=30
cksum=af477f73c:4926037df90:f80afd99a65f:2399d9c07818be
Object lvl iblk dblk lsize asize type
0 1 16K 16K 16K 8K DMU dnode
Object lvl iblk dblk lsize asize type
1 1 16K 16K 32K 12.0K object directory
Fat ZAP stats:
Pointer table:
1024 elements
zt_blk: 0
zt_numblks: 0
zt_shift: 10
zt_blks_copied: 0
zt_nextblk: 0
ZAP entries: 7
Leaf blocks: 1
Total blocks: 2
zap_block_type: 0x8000000000000001
zap_magic: 0x2f52ab2ab
zap_salt: 0x1d479cab3
Leafs with 2^n pointers:
9: 1 *
Blocks with n*5 entries:
1: 1 *
Blocks n/10 full:
1: 1 *
Entries with n chunks:
3: 7 *******
Buckets with n entries:
0: 505 ****************************************
1: 7 *
sync_bplist = 21
history = 22
root_dataset = 2
errlog_scrub = 0
errlog_last = 0
deflate = 1
config = 20
Object lvl iblk dblk lsize asize type
2 1 16K 512 512 0 DSL directory
256 bonus DSL directory
creation_time = Sun Nov 16 01:11:53 2008
head_dataset_obj = 16
parent_dir_obj = 0
origin_obj = 14
child_dir_zapobj = 4
used_bytes = 12.2G
compressed_bytes = 12.2G
uncompressed_bytes = 12.2G
quota = 0
reserved = 0
props_zapobj = 3
deleg_zapobj = 0
flags = 1
used_breakdown[HEAD] = 12.2G
used_breakdown[SNAP] = 0
used_breakdown[CHILD] = 89.9K
used_breakdown[CHILD_RSRV] = 0
used_breakdown[REFRSRV] = 0
Object lvl iblk dblk lsize asize type
3 1 16K 512 512 2K DSL props
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
4 1 16K 512 512 2K DSL directory child map
microzap: 512 bytes, 2 entries
$MOS = 5
$ORIGIN = 8
Object lvl iblk dblk lsize asize type
5 1 16K 512 512 0 DSL directory
256 bonus DSL directory
creation_time = Sun Nov 16 01:11:53 2008
head_dataset_obj = 0
parent_dir_obj = 2
origin_obj = 0
child_dir_zapobj = 7
used_bytes = 89.9K
compressed_bytes = 25.5K
uncompressed_bytes = 25.5K
quota = 0
reserved = 0
props_zapobj = 6
deleg_zapobj = 0
flags = 1
used_breakdown[HEAD] = 89.9K
used_breakdown[SNAP] = 0
used_breakdown[CHILD] = 0
used_breakdown[CHILD_RSRV] = 0
used_breakdown[REFRSRV] = 0
Object lvl iblk dblk lsize asize type
6 1 16K 512 512 2K DSL props
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
7 1 16K 512 512 2K DSL directory child map
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
8 1 16K 512 512 0 DSL directory
256 bonus DSL directory
creation_time = Sun Nov 16 01:11:53 2008
head_dataset_obj = 11
parent_dir_obj = 2
origin_obj = 0
child_dir_zapobj = 10
used_bytes = 0
compressed_bytes = 0
uncompressed_bytes = 0
quota = 0
reserved = 0
props_zapobj = 9
deleg_zapobj = 0
flags = 1
used_breakdown[HEAD] = 0
used_breakdown[SNAP] = 0
used_breakdown[CHILD] = 0
used_breakdown[CHILD_RSRV] = 0
used_breakdown[REFRSRV] = 0
Object lvl iblk dblk lsize asize type
9 1 16K 512 512 2K DSL props
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
10 1 16K 512 512 2K DSL directory child map
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
11 1 16K 512 512 0 DSL dataset
320 bonus DSL dataset
dir_obj = 8
prev_snap_obj = 14
prev_snap_txg = 1
next_snap_obj = 0
snapnames_zapobj = 12
num_children = 0
creation_time = Sun Nov 16 01:11:53 2008
creation_txg = 1
deadlist_obj = 15
used_bytes = 0
compressed_bytes = 0
uncompressed_bytes = 0
unique = 0
fsid_guid = 9675748958236906
guid = 12025087531231622166
flags = 4
next_clones_obj = 0
props_obj = 0
bp = <hole>
Object lvl iblk dblk lsize asize type
12 1 16K 512 512 2K DSL dataset snap map
microzap: 512 bytes, 1 entries
$ORIGIN = 14
Object lvl iblk dblk lsize asize type
13 1 16K 128K 128K 0 bplist
32 bonus bplist header
Object lvl iblk dblk lsize asize type
14 1 16K 512 512 0 DSL dataset
320 bonus DSL dataset
dir_obj = 8
prev_snap_obj = 0
prev_snap_txg = 0
next_snap_obj = 11
snapnames_zapobj = 0
num_children = 2
creation_time = Sun Nov 16 01:11:53 2008
creation_txg = 1
deadlist_obj = 13
used_bytes = 0
compressed_bytes = 0
uncompressed_bytes = 0
unique = 0
fsid_guid = 48874463137685700
guid = 16799716212201729014
flags = 4
next_clones_obj = 19
props_obj = 0
bp = <hole>
Object lvl iblk dblk lsize asize type
15 1 16K 128K 128K 0 bplist
32 bonus bplist header
Object lvl iblk dblk lsize asize type
16 1 16K 512 512 0 DSL dataset
320 bonus DSL dataset
dir_obj = 2
prev_snap_obj = 14
prev_snap_txg = 1
next_snap_obj = 0
snapnames_zapobj = 17
num_children = 0
creation_time = Sun Nov 16 01:11:53 2008
creation_txg = 1
deadlist_obj = 18
used_bytes = 12.2G
compressed_bytes = 12.2G
uncompressed_bytes = 12.2G
unique = 12.2G
fsid_guid = 69508264734546309
guid = 9993987807350083321
flags = 4
next_clones_obj = 0
props_obj = 0
bp = [L0 DMU objset] 400L/200P DVA[0]=<0:64652d400:400>
DVA[1]=<0:c80032dc00:400> fletcher4 lzjb LE contiguous birth=28 fill=5
cksum=ae2a07052:491babbaac1:f96aca21cd61:2405acfae00513
Object lvl iblk dblk lsize asize type
17 1 16K 512 512 2K DSL dataset snap map
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
18 1 16K 128K 128K 0 bplist
32 bonus bplist header
Object lvl iblk dblk lsize asize type
19 1 16K 512 512 2K DSL dataset next clones
microzap: 512 bytes, 1 entries
10 = 16
Object lvl iblk dblk lsize asize type
20 1 16K 16K 16K 6.00K packed nvlist
8 bonus packed nvlist size
version=13
name='tank'
state=0
txg=43
pool_guid=15209315969328691824
hostid=13281026
hostname='tank'
vdev_tree
type='root'
id=0
guid=15209315969328691824
children[0]
type='raidz'
id=0
guid=12939139411228926927
nparity=1
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4500865941504
is_log=0
children[0]
type='disk'
id=0
guid=4433333881730944857
path='/dev/dsk/c1t1d0s0'
devid='id1,sd@f00caa702490af9ee0008c7080009/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@1,0:a'
whole_disk=1
DTL=30
children[1]
type='disk'
id=1
guid=8145955088711690996
path='/dev/dsk/c1t2d0s0'
devid='id1,sd@f00caa702490af9ee000a09c2000d/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@2,0:a'
whole_disk=1
DTL=29
children[2]
type='disk'
id=2
guid=7829790635521907151
path='/dev/dsk/c1t4d0s0'
devid='id1,sd@f00caa702490af9ee000c9e8e0011/a'
phys_path='/pci@0,0/pci1043,82f2@9/disk@4,0:a'
whole_disk=1
DTL=28
Object lvl iblk dblk lsize asize type
21 1 16K 16K 16K 4K bplist (Z=uncompressed)
32 bonus bplist header
Object lvl iblk dblk lsize asize type
22 1 16K 128K 128K 16K SPA history
40 bonus SPA history offsets
Object lvl iblk dblk lsize asize type
23 1 16K 512 512 2K object array
Object lvl iblk dblk lsize asize type
24 1 16K 4K 4K 2K SPA space map
24 bonus SPA space map header
Object lvl iblk dblk lsize asize type
25 1 16K 4K 4K 4K SPA space map
24 bonus SPA space map header
Object lvl iblk dblk lsize asize type
26 1 16K 4K 12.0K 16K SPA space map
24 bonus SPA space map header
Object lvl iblk dblk lsize asize type
28 1 16K 4K 4K 0 SPA space map
24 bonus SPA space map header
Object lvl iblk dblk lsize asize type
29 1 16K 4K 4K 0 SPA space map
24 bonus SPA space map header
Object lvl iblk dblk lsize asize type
30 1 16K 4K 4K 0 SPA space map
24 bonus SPA space map header
Deferred frees: 21 entries, 95.9K
Dirty time logs:
tank
raidz
/dev/dsk/c1t1d0s0
/dev/dsk/c1t2d0s0
/dev/dsk/c1t4d0s0
Metaslabs:
vdev 0
offset spacemap free
------ -------- ----
0 26 13.6G
800000000 0 32G
1000000000 0 32G
1800000000 0 32G
2000000000 0 32G
2800000000 0 32G
3000000000 0 32G
3800000000 0 32G
4000000000 0 32G
4800000000 0 32G
5000000000 0 32G
5800000000 0 32G
6000000000 0 32G
6800000000 0 32G
7000000000 0 32G
7800000000 0 32G
8000000000 0 32G
8800000000 0 32G
9000000000 0 32G
9800000000 0 32G
a000000000 0 32G
a800000000 0 32G
b000000000 0 32G
b800000000 0 32G
c000000000 0 32G
c800000000 25 32.0G
d000000000 0 32G
d800000000 0 32G
e000000000 0 32G
e800000000 0 32G
f000000000 0 32G
f800000000 0 32G
10000000000 0 32G
10800000000 0 32G
11000000000 0 32G
11800000000 0 32G
12000000000 0 32G
12800000000 0 32G
13000000000 0 32G
13800000000 0 32G
14000000000 0 32G
14800000000 0 32G
15000000000 0 32G
15800000000 0 32G
16000000000 0 32G
16800000000 0 32G
17000000000 0 32G
17800000000 0 32G
18000000000 0 32G
18800000000 0 32G
19000000000 24 32.0G
19800000000 0 32G
1a000000000 0 32G
1a800000000 0 32G
1b000000000 0 32G
1b800000000 0 32G
1c000000000 0 32G
1c800000000 0 32G
1d000000000 0 32G
1d800000000 0 32G
1e000000000 0 32G
1e800000000 0 32G
1f000000000 0 32G
1f800000000 0 32G
20000000000 0 32G
20800000000 0 32G
21000000000 0 32G
21800000000 0 32G
22000000000 0 32G
22800000000 0 32G
23000000000 0 32G
23800000000 0 32G
24000000000 0 32G
24800000000 0 32G
25000000000 0 32G
25800000000 0 32G
26000000000 0 32G
26800000000 0 32G
27000000000 0 32G
27800000000 0 32G
28000000000 0 32G
28800000000 0 32G
29000000000 0 32G
29800000000 0 32G
2a000000000 0 32G
2a800000000 0 32G
2b000000000 0 32G
2b800000000 0 32G
2c000000000 0 32G
2c800000000 0 32G
2d000000000 0 32G
2d800000000 0 32G
2e000000000 0 32G
2e800000000 0 32G
2f000000000 0 32G
2f800000000 0 32G
30000000000 0 32G
30800000000 0 32G
31000000000 0 32G
31800000000 0 32G
32000000000 0 32G
32800000000 0 32G
33000000000 0 32G
33800000000 0 32G
34000000000 0 32G
34800000000 0 32G
35000000000 0 32G
35800000000 0 32G
36000000000 0 32G
36800000000 0 32G
37000000000 0 32G
37800000000 0 32G
38000000000 0 32G
38800000000 0 32G
39000000000 0 32G
39800000000 0 32G
3a000000000 0 32G
3a800000000 0 32G
3b000000000 0 32G
3b800000000 0 32G
3c000000000 0 32G
3c800000000 0 32G
3d000000000 0 32G
3d800000000 0 32G
3e000000000 0 32G
3e800000000 0 32G
3f000000000 0 32G
3f800000000 0 32G
40000000000 0 32G
40800000000 0 32G
Dataset tank [ZPL], ID 16, cr_txg 1, 12.2G, 5 objects, rootbp [L0 DMU
objset] 400L/200P DVA[0]=<0:64652d400:400> DVA[1]=<0:c80032dc00:400>
fletcher4 lzjb LE contiguous birth=28 fill=5
cksum=ae2a07052:491babbaac1:f96aca21cd61:2405acfae00513
Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 20.0K DMU dnode
Object lvl iblk dblk lsize asize type
1 1 16K 512 512 1.50K ZFS master node
microzap: 512 bytes, 3 entries
ROOT = 3
DELETE_QUEUE = 2
VERSION = 3
Object lvl iblk dblk lsize asize type
2 1 16K 512 512 1.50K ZFS delete queue
microzap: 512 bytes, 1 entries
5 = 5
Object lvl iblk dblk lsize asize type
3 1 16K 512 512 1.50K ZFS directory
264 bonus ZFS znode
path /
uid 0
gid 0
atime Sun Nov 16 01:11:53 2008
mtime Sun Nov 16 01:14:18 2008
ctime Sun Nov 16 01:14:18 2008
crtime Sun Nov 16 01:11:53 2008
gen 4
mode 40755
size 2
parent 3
links 2
xattr 0
rdev 0x0000000000000000
microzap: 512 bytes, 0 entries
Object lvl iblk dblk lsize asize type
5 5 16K 128K 750G 12.2G ZFS plain file
264 bonus ZFS znode
path ???<object#5>
uid 0
gid 0
atime Sun Nov 16 01:13:12 2008
mtime Sun Nov 16 01:14:07 2008
ctime Sun Nov 16 01:14:07 2008
crtime Sun Nov 16 01:13:12 2008
gen 16
mode 100600
size 805306368000
parent 3
links 0
xattr 0
rdev 0x0000000000000000
Henrik Johansson
http://sparcv9.blogspot.com