Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check how
much space the package cache for pkg(1) uses, it takes far longer on
this host than on a comparable machine to which I transferred all the
data.
user at host:/var/pkg$ time du -hs download
6.4G download
real 87m5.112s
user 0m6.820s
sys 1m46.111s
My guess would be that this is due to fragmentation from some period
when the filesystem was close to full, but these are still pretty
terrible numbers, even with 0.5M files in the structure. And while
this is very bad, I would at least expect the ARC to cache the data
and make a second run go faster:
user at host:/var/pkg$ time du -hs download
6.4G download
real 94m14.688s
user 0m6.708s
sys 1m27.105s
Two runs on the machine to which I transferred the directory
structure:
$ time du -hs download
6.4G download
real 2m59.60s
user 0m3.83s
sys 0m18.87s
This machine also goes faster after the initial run:
$ time du -hs download
6.4G download
real 0m15.40s
user 0m3.40s
sys 0m11.43s
The disks are of course very busy during the first runs on both
machines, but the slow machine has to do all the work again on the
second run, while the disk in the fast machine gets to rest.
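A minimal, generic sketch of the comparison above (not the exact commands used; the throwaway test directory is purely illustrative): walk a directory tree twice with du and report both wall-clock times. On the fast machine the second walk should be served from cache.

```shell
#!/bin/sh
# Walk a directory twice with du and print both durations. On a healthy
# system the second run is much faster once the cache is warm. The
# sample directory created here is a placeholder for /var/pkg/download.
walk_twice() {
  dir=$1
  for run in 1 2; do
    start=$(date +%s)
    du -s "$dir" > /dev/null
    end=$(date +%s)
    echo "run $run: $(( end - start ))s"
  done
}

d=$(mktemp -d) && touch "$d/a" "$d/b"
walk_twice "$d"
rm -rf "$d"
```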
Slow system (OSOL b124, T61 Intel c2d laptop root pool on 2.5" disk):
memstat pre first run:
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 162685 635 16%
ZFS File Data 81284 317 8%
Anon 57323 223 6%
Exec and libs 3248 12 0%
Page cache 14924 58 1%
Free (cachelist) 7881 30 1%
Free (freelist) 700315 2735 68%
Total 1027660 4014
Physical 1027659 4014
memstat post first run:
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 461153 1801 45%
ZFS File Data 83598 326 8%
Anon 58389 228 6%
Exec and libs 3215 12 0%
Page cache 14958 58 1%
Free (cachelist) 6849 26 1%
Free (freelist) 399498 1560 39%
Total 1027660 4014
Physical 1027659 4014
arcstat first run:
Time     read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
21:02:31 279  19   7     11   4   7    30  11   10  439M  3G
21:12:31 190  60   31    52   28  8    97  60   32  734M  3G
21:22:31 225  58   25    57   25  0    94  58   25  873M  3G
21:32:31 206  51   24    51   24  0    24  50   24  985M  3G
21:42:31 175  43   24    43   24  0    29  42   24  1G    3G
21:52:31 162  48   29    48   29  0    54  48   29  1G    3G
22:02:31 159  55   34    54   34  0    90  55   34  1G    3G
22:12:31 164  41   25    41   24  0    61  41   25  1G    3G
22:22:31 161  40   24    40   24  0    68  40   24  1G    3G
arcstat second run:
Time     read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
22:35:52 1K   447  24    429  23  17   47  436  26  1G    3G
22:45:52 163  40   24    40   24  0    75  40   24  1G    3G
22:55:52 161  40   25    40   24  0    86  40   25  1G    3G
23:05:52 159  40   25    39   25  0    71  40   25  1G    3G
23:15:52 158  40   25    40   25  0    86  40   25  1G    3G
23:25:52 158  40   25    40   25  0    100 40   25  1G    3G
23:35:52 157  40   25    40   25  0    100 40   25  1G    3G
23:45:52 158  40   25    40   25  0    100 40   25  1G    3G
23:55:52 160  40   25    40   25  0    100 40   25  1G    3G
00:05:52 156  40   25    40   25  0    100 40   25  1G    3G
Fast system (OSOL b124, AMD Athlon X2 server, tested on root pool on
2.5" SATA disk)
Memstat pre run:
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 160338 626 8%
ZFS File Data 44875 175 2%
Anon 24388 95 1%
Exec and libs 1295 5 0%
Page cache 6490 25 0%
Free (cachelist) 4786 18 0%
Free (freelist) 1753978 6851 88%
Balloon 0 0 0%
Total 1996150 7797
Memstat post run:
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 516130 2016 26%
ZFS File Data 44942 175 2%
Anon 24414 95 1%
Exec and libs 1293 5 0%
Page cache 6490 25 0%
Free (cachelist) 3557 13 0%
Free (freelist) 1399324 5466 70%
Balloon 0 0 0%
Total 1996150 7797
arcstat first run:
Time     read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
00:23:49 14K  821  5     550  4   270  21  556  8   268M  7G
00:23:59 3K   828  22    646  21  181  26  827  22  322M  7G
00:24:09 5K   1K   23    1K   23  98   21  1K   23  375M  7G
00:24:19 6K   1K   18    1K   20  97   7   1K   18  429M  7G
00:24:29 8K   1K   16    1K   19  80   4   1K   16  487M  7G
00:24:39 6K   1K   22    1K   23  36   5   1K   22  542M  7G
00:24:49 5K   1K   23    1K   23  102  20  1K   23  602M  7G
00:24:59 9K   1K   14    1K   17  112  4   1K   14  667M  7G
00:25:09 9K   1K   15    1K   18  69   3   1K   15  722M  7G
00:25:19 5K   1K   21    1K   22  58   9   1K   21  767M  7G
00:25:29 6K   1K   22    1K   23  48   11  1K   22  822M  7G
arcstat second run:
Time     read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
00:27:59 138K 24K  17    23K  19  1K   8   24K  18  1G    7G
00:28:09 47K  58   0     0    0   58   10  58   0   1G    7G
00:28:19 46K  46   0     0    0   46   2   46   0   1G    7G
Regards
Henrik
http://sparcv9.blogspot.com
I was a bit too fast; I have added some basic data on the pools and
the root filesystem on which the files are located.

Slow pool:

user at host:/$ zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      36.6G  2.80G  82.5K  /rpool
rpool/ROOT                 15.6G  2.80G    18K  legacy
rpool/ROOT/opensolaris-12  15.6G  2.80G  15.6G  /
rpool/comp_off             94.0M  2.80G  94.0M  /rpool/comp_off
rpool/comp_on              58.7M  2.80G  58.7M  /rpool/comp_on
rpool/dump                 1003M  2.80G  1003M  -
rpool/export               18.1G  2.80G  7.17G  /export
rpool/export/home          10.9G  2.80G  4.00G  /export/home
rpool/export/home/user     6.93G  2.80G  6.93G  /export/home/user
rpool/share                 733M  2.80G   733M  /rpool/share
rpool/swap                 1020M  2.80G  1020M  -
rpool/zdata01                19K   250M    19K  /zdata01
rpool/zdata02                19K   250M    19K  /rpool/zdata02

user at host:/$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c8d0s0  ONLINE       0     0     0

errors: No known data errors

zfs get all rpool/ROOT/opensolaris-12
NAME                       PROPERTY              VALUE                  SOURCE
rpool/ROOT/opensolaris-12  type                  filesystem             -
rpool/ROOT/opensolaris-12  creation              Sun Oct  4 14:01 2009  -
rpool/ROOT/opensolaris-12  used                  15.6G                  -
rpool/ROOT/opensolaris-12  available             2.80G                  -
rpool/ROOT/opensolaris-12  referenced            15.6G                  -
rpool/ROOT/opensolaris-12  compressratio         1.00x                  -
rpool/ROOT/opensolaris-12  mounted               yes                    -
rpool/ROOT/opensolaris-12  quota                 none                   default
rpool/ROOT/opensolaris-12  reservation           none                   default
rpool/ROOT/opensolaris-12  recordsize            128K                   default
rpool/ROOT/opensolaris-12  mountpoint            /                      local
rpool/ROOT/opensolaris-12  sharenfs              off                    default
rpool/ROOT/opensolaris-12  checksum              on                     default
rpool/ROOT/opensolaris-12  compression           off                    default
rpool/ROOT/opensolaris-12  atime                 on                     default
rpool/ROOT/opensolaris-12  devices               on                     default
rpool/ROOT/opensolaris-12  exec                  on                     default
rpool/ROOT/opensolaris-12  setuid                on                     default
rpool/ROOT/opensolaris-12  readonly              off                    default
rpool/ROOT/opensolaris-12  zoned                 off                    default
rpool/ROOT/opensolaris-12  snapdir               hidden                 default
rpool/ROOT/opensolaris-12  aclmode               groupmask              default
rpool/ROOT/opensolaris-12  aclinherit            restricted             default
rpool/ROOT/opensolaris-12  canmount              noauto                 local
rpool/ROOT/opensolaris-12  shareiscsi            off                    default
rpool/ROOT/opensolaris-12  xattr                 on                     default
rpool/ROOT/opensolaris-12  copies                1                      default
rpool/ROOT/opensolaris-12  version               3                      -
rpool/ROOT/opensolaris-12  utf8only              off                    -
rpool/ROOT/opensolaris-12  normalization         none                   -
rpool/ROOT/opensolaris-12  casesensitivity       sensitive              -
rpool/ROOT/opensolaris-12  vscan                 off                    default
rpool/ROOT/opensolaris-12  nbmand                off                    default
rpool/ROOT/opensolaris-12  sharesmb              off                    default
rpool/ROOT/opensolaris-12  refquota              none                   default
rpool/ROOT/opensolaris-12  refreservation        none                   default
rpool/ROOT/opensolaris-12  primarycache          all                    default
rpool/ROOT/opensolaris-12  secondarycache        all                    default
rpool/ROOT/opensolaris-12  usedbysnapshots       0                      -
rpool/ROOT/opensolaris-12  usedbydataset         15.6G                  -
rpool/ROOT/opensolaris-12  usedbychildren        0                      -
rpool/ROOT/opensolaris-12  usedbyrefreservation  0                      -
rpool/ROOT/opensolaris-12  logbias               latency                default
rpool/ROOT/opensolaris-12  org.opensolaris.libbe:policy    static  local
rpool/ROOT/opensolaris-12  org.opensolaris.libbe:uuid      4045fcfb-6c22-4ec9-9777-bd0c2659a52c  local
rpool/ROOT/opensolaris-12  org.opensolaris.caiman:install  ready   inherited from rpool
rpool/ROOT/opensolaris-12  com.sun:auto-snapshot           false   inherited from rpool

Fast pool:

user at host:/$ zfs list -r rpool
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   18.9G  98.2G  82.5K  /rpool
rpool/ROOT              11.0G  98.2G    21K  legacy
rpool/ROOT/opensolaris  11.0G  98.2G  10.9G  /
rpool/dump              3.94G  98.2G  3.94G  -
rpool/export             163K  98.2G    23K  /export
rpool/export/home        140K  98.2G    23K  /export/home
rpool/export/home/user   117K  98.2G   117K  /export/home/user
rpool/swap              3.94G   102G   106M  -

zfs get all rpool/ROOT/opensolaris
NAME                    PROPERTY              VALUE                  SOURCE
rpool/ROOT/opensolaris  type                  filesystem             -
rpool/ROOT/opensolaris  creation              Sun Oct  4 23:56 2009  -
rpool/ROOT/opensolaris  used                  11.0G                  -
rpool/ROOT/opensolaris  available             98.2G                  -
rpool/ROOT/opensolaris  referenced            10.9G                  -
rpool/ROOT/opensolaris  compressratio         1.00x                  -
rpool/ROOT/opensolaris  mounted               yes                    -
rpool/ROOT/opensolaris  quota                 none                   default
rpool/ROOT/opensolaris  reservation           none                   default
rpool/ROOT/opensolaris  recordsize            128K                   default
rpool/ROOT/opensolaris  mountpoint            /                      local
rpool/ROOT/opensolaris  sharenfs              off                    default
rpool/ROOT/opensolaris  checksum              on                     default
rpool/ROOT/opensolaris  compression           off                    default
rpool/ROOT/opensolaris  atime                 on                     default
rpool/ROOT/opensolaris  devices               on                     default
rpool/ROOT/opensolaris  exec                  on                     default
rpool/ROOT/opensolaris  setuid                on                     default
rpool/ROOT/opensolaris  readonly              off                    default
rpool/ROOT/opensolaris  zoned                 off                    default
rpool/ROOT/opensolaris  snapdir               hidden                 default
rpool/ROOT/opensolaris  aclmode               groupmask              default
rpool/ROOT/opensolaris  aclinherit            restricted             default
rpool/ROOT/opensolaris  canmount              noauto                 local
rpool/ROOT/opensolaris  shareiscsi            off                    default
rpool/ROOT/opensolaris  xattr                 on                     default
rpool/ROOT/opensolaris  copies                1                      default
rpool/ROOT/opensolaris  version               4                      -
rpool/ROOT/opensolaris  utf8only              off                    -
rpool/ROOT/opensolaris  normalization         none                   -
rpool/ROOT/opensolaris  casesensitivity       sensitive              -
rpool/ROOT/opensolaris  vscan                 off                    default
rpool/ROOT/opensolaris  nbmand                off                    default
rpool/ROOT/opensolaris  sharesmb              off                    default
rpool/ROOT/opensolaris  refquota              none                   default
rpool/ROOT/opensolaris  refreservation        none                   default
rpool/ROOT/opensolaris  primarycache          all                    default
rpool/ROOT/opensolaris  secondarycache        all                    default
rpool/ROOT/opensolaris  usedbysnapshots       164M                   -
rpool/ROOT/opensolaris  usedbydataset         10.9G                  -
rpool/ROOT/opensolaris  usedbychildren        0                      -
rpool/ROOT/opensolaris  usedbyrefreservation  0                      -
rpool/ROOT/opensolaris  logbias               latency                default
rpool/ROOT/opensolaris  org.opensolaris.libbe:uuid      e1c61522-2276-c1f0-d16b-d7314c2099d2  local
rpool/ROOT/opensolaris  org.opensolaris.caiman:install  ready   inherited from rpool

Regards
Henrik
http://sparcv9.blogspot.com
On Oct 16, 2009, at 1:01 AM, Henrik Johansson wrote:

>> My guess would be that this is due to fragmentation from some
>> period when the filesystem was close to full, but these are still
>> pretty terrible numbers even with 0.5M files in the structure. And
>> while this is very bad I would at least expect the ARC to cache
>> data and make a second run go faster:

I solved this. The second run was also slow because the metadata part
of the ARC was too small; raising arc_meta_limit helped, and turning
off atime also helped a lot, since this directory seems to be terribly
fragmented. With these changes the ARC helps, so the second run goes
as fast as it should. The fragmentation can be solved by a copy if I
want to keep the files.

I wrote up some more details about what I did if anyone is interested:
http://sparcv9.blogspot.com/2009/10/curious-case-of-strange-arc.html

I'll make sure to keep some more free space in my pools at all times now ;)

Regards
Henrik
http://sparcv9.blogspot.com
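The diagnosis above boils down to one comparison: if the ARC's metadata usage has reached arc_meta_limit, metadata is being evicted and a second walk cannot be served from cache. A small illustrative helper (the function name and the sample byte counts below are made up, not taken from either machine):

```shell
#!/bin/sh
# Given ARC metadata used and arc_meta_limit (both in bytes), report
# whether metadata caching is capped. In real use the numbers would
# come from `echo "::arc" | mdb -k`; the sample values are invented.
meta_capped() {
  used=$1; limit=$2
  if [ "$used" -ge "$limit" ]; then
    echo "capped"
  else
    echo "headroom: $(( (limit - used) / 1048576 )) MB"
  fi
}

meta_capped 268435456 268435456    # used == limit -> capped
meta_capped 134217728 1073741824   # 128 MB used of a 1 GB limit
```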
So the solution is to never get more than 90% full disk space, damn it?
--
This message posted from opensolaris.org
On Thu, 29 Oct 2009, Orvar Korvar wrote:

> So the solution is to never get more than 90% full disk space, damn it?

Right. While UFS created artificial limits to keep the filesystem from
getting so full that it became sluggish and "sick", ZFS does not seem
to include those protections. Don't ever run a ZFS pool very close to
full for a long duration of time, since it will become excessively
fragmented.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
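That rule of thumb is easy to automate. A hedged sketch of a capacity check (the threshold and the sample input line are illustrative; in real use the input would be piped from `zpool list -H -o name,capacity`):

```shell
#!/bin/sh
# Flag pools whose capacity exceeds a threshold percentage. A sample
# line stands in for real `zpool list -H -o name,capacity` output so
# the filter itself can be demonstrated.
check_capacity() {
  awk -v t="$1" '{ cap = $2; sub(/%/, "", cap)
    if (cap + 0 > t) print $1 " is " cap "% full" }'
}

printf 'rpool\t93%%\n' | check_capacity 90
```

Running the last line prints `rpool is 93% full`; a pool at or below the threshold produces no output, which makes the filter easy to drop into a cron job that mails its output.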
On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:

> Right. While UFS created artificial limits to keep the filesystem
> from getting so full that it became sluggish and "sick", ZFS does
> not seem to include those protections. Don't ever run a ZFS pool
> very close to full for a long duration of time, since it will
> become excessively fragmented.

Setting quotas for all datasets could perhaps be of use for some of
us. An "überquota" property for the whole pool would have been nice
until a real solution is available.

Henrik
http://sparcv9.blogspot.com
> So the solution is to never get more than 90% full disk space

While that's true, it's not Henrik's main discovery. Henrik points out
that 1/4 of the ARC is used for metadata, and sometimes that's not
enough. If

    echo "::arc" | mdb -k | egrep ^size

isn't reaching

    echo "::arc" | mdb -k | egrep "^c "

and you are maxing out your metadata space, check:

    echo "::arc" | mdb -k | grep meta_

One can set the metadata space (1 GB in this case) with:

    echo "arc_meta_limit/Z 0x40000000" | mdb -kw

So while Henrik's FS had some fragmentation, 1/4 of c_max wasn't
enough metadata ARC space for the number of files in /var/pkg/download.

Good find, Henrik!

Rob
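mdb's /Z write format takes the new value as a hex byte count, which is easy to get wrong by a digit (1 GiB is 0x40000000, while 0x4000000 is only 64 MiB). A tiny helper to compute it from MiB; the function name is invented for illustration:

```shell
#!/bin/sh
# Convert a desired arc_meta_limit in MiB into the hex byte count that
# mdb's /Z write format expects. The helper name is made up.
meta_limit_hex() {
  printf '0x%x\n' $(( $1 * 1024 * 1024 ))
}

meta_limit_hex 1024   # 1 GiB -> 0x40000000
meta_limit_hex 64     # 64 MiB -> 0x4000000
# It could then feed the mdb write, e.g.:
#   echo "arc_meta_limit/Z $(meta_limit_hex 1024)" | mdb -kw
```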
On Oct 29, 2009, at 15:08, Henrik Johansson wrote:

> Setting quotas for all datasets could perhaps be of use for some of
> us. An "überquota" property for the whole pool would have been nice
> until a real solution is available.

Or create "lost+found" with 'zfs create' and give it a reservation.
The 'directory name' won't look too much out of place, and there'll be
some space set aside.
>>>>> "hj" == Henrik Johansson <henrikj at henkis.net> writes:

    hj> An "überquota" property for the whole pool would have been nice
    hj> [to get out-of-space errors instead of fragmentation]

Just make an empty filesystem with a reservation. That's what I do.

NAME               USED  AVAIL  REFER  MOUNTPOINT
andaman           3.71T   382G    18K  none
andaman/arrchive  3.07T   382G  67.7G  /arrchive
andaman/balloon     18K  1010G    18K  none

terabithia:/export/home/guest/Azureus Downloads# zfs get reservation andaman/balloon
NAME             PROPERTY     VALUE  SOURCE
andaman/balloon  reservation  628G   local