Thanks for the responses. I read the docs that Cindy suggested and they
were educational, but I still don't understand where the missing disk
space is. I used the zfs list command and added up all the space used.
If I'm reading it right, I have <250GB of snapshots. Zpool list shows
that the pool (localpool) is 1.81TB in size, of which 1.68TB is
allocated. The filesystem that I am concerned about is localhome, and a
du -sk shows that it is ~650GB in size. This corresponds to the output
from df -lk, and is also in the neighborhood of what I see in the REFER
column of zfs list. So my question remains: I have 1.68TB of space
allocated. Of that, there is ~650GB of actual filesystem data and
<250GB of snapshots. That leaves almost 800GB of space unaccounted for.
I would like to understand whether my logic, or method, is flawed. If
not, how can I go about determining what happened to the 800GB?
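In case it is useful, this is the kind of breakdown I have been trying
to read (a sketch; I believe these space properties exist on my
release, but I may be misreading the output):

  # per-dataset breakdown of what the USED total is charged to
  zfs list -o space -r localpool

  # total space charged to all snapshots of localhome taken together
  zfs get usedbysnapshots localpool/localhome

One thing I am not sure about: if the USED value of an individual
snapshot counts only the blocks unique to that snapshot, then blocks
shared by two or more snapshots (but no longer in the live filesystem)
would not show up on any single line, and my <250GB sum could be a
large undercount; the usedbysnapshots property should show the true
total.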
I am including the output from the zpool list and zfs list commands
below.
Zpool list:
NAME        SIZE   ALLOC  FREE  CAP  HEALTH  ALTROOT
localpool   1.81T  1.68T  137G  92%  ONLINE  -
Zfs list:
NAME                                                    USED   AVAIL  REFER  MOUNTPOINT
localpool                                               1.68T   109G    24K  /localpool
localpool/backup                                          21K   109G    21K  /localpool/backup
localpool/localhome                                     1.68T   109G   624G  /localpool/localhome
localpool/localhome@weekly00310-date2011-11-06-hour00   10.1G      -   754G  -
localpool/localhome@weekly00338-date2011-12-04-hour00   4.63G      -   847G  -
localpool/localhome@weekly001-date2012-01-01-hour00     5.13G      -   938G  -
localpool/localhome@weekly0036-date2012-02-05-hour00    10.3G      -  1.06T  -
localpool/localhome@weekly0064-date2012-03-04-hour00    84.1G      -  1.22T  -
localpool/localhome@weekly0092-date2012-04-01-hour00    8.43G      -   709G  -
localpool/localhome@weekly00127-date2012-05-06-hour00   11.1G      -   722G  -
localpool/localhome@weekly00155-date2012-06-03-hour00   20.5G      -   737G  -
localpool/localhome@weekly00183-date2012-07-01-hour00   10.9G      -   672G  -
localpool/localhome@weekly00190-date2012-07-08-hour00   11.0G      -   696G  -
localpool/localhome@weekly00197-date2012-07-15-hour00   7.92G      -   662G  -
localpool/localhome@weekly00204-date2012-07-22-hour00   13.5G      -   691G  -
localpool/localhome@weekly00211-date2012-07-29-hour00   7.88G      -   697G  -
localpool/localhome@12217-date2012-08-04-hour12          248M      -   620G  -
localpool/localhome@13217-date2012-08-04-hour13          201M      -   620G  -
localpool/localhome@14217-date2012-08-04-hour14          151M      -   620G  -
localpool/localhome@15217-date2012-08-04-hour15          143M      -   620G  -
localpool/localhome@16217-date2012-08-04-hour16          166M      -   621G  -
localpool/localhome@17217-date2012-08-04-hour17          157M      -   620G  -
localpool/localhome@18217-date2012-08-04-hour18          136M      -   620G  -
localpool/localhome@19217-date2012-08-04-hour19          178M      -   620G  -
localpool/localhome@20217-date2012-08-04-hour20          152M      -   620G  -
localpool/localhome@21217-date2012-08-04-hour21          117M      -   620G  -
localpool/localhome@22217-date2012-08-04-hour22          108M      -   620G  -
localpool/localhome@23217-date2012-08-04-hour23          156M      -   620G  -
localpool/localhome@weekly00218-date2012-08-05-hour00   34.7M      -   620G  -
localpool/localhome@00218-date2012-08-05-hour00         35.3M      -   620G  -
localpool/localhome@01218-date2012-08-05-hour01          153M      -   620G  -
localpool/localhome@02218-date2012-08-05-hour02          126M      -   620G  -
localpool/localhome@03218-date2012-08-05-hour03         98.0M      -   620G  -
localpool/localhome@04218-date2012-08-05-hour04          318M      -   620G  -
localpool/localhome@05218-date2012-08-05-hour05         4.31G      -   624G  -
localpool/localhome@06218-date2012-08-05-hour06          587M      -   621G  -
localpool/localhome@07218-date2012-08-05-hour07          200M      -   621G  -
localpool/localhome@08218-date2012-08-05-hour08          119M      -   621G  -
localpool/localhome@09218-date2012-08-05-hour09          141M      -   621G  -
localpool/localhome@10218-date2012-08-05-hour10          189M      -   621G  -
localpool/localhome@11218-date2012-08-05-hour11          243M      -   621G  -
localpool/localhome@12218-date2012-08-05-hour12          256M      -   621G  -
localpool/localhome@13218-date2012-08-05-hour13          221M      -   621G  -
localpool/localhome@14218-date2012-08-05-hour14          168M      -   621G  -
localpool/localhome@15218-date2012-08-05-hour15          156M      -   621G  -
localpool/localhome@16218-date2012-08-05-hour16          147M      -   621G  -
localpool/localhome@17218-date2012-08-05-hour17          118M      -   621G  -
localpool/localhome@18218-date2012-08-05-hour18          151M      -   621G  -
localpool/localhome@19218-date2012-08-05-hour19          252M      -   621G  -
localpool/localhome@20218-date2012-08-05-hour20          244M      -   621G  -
localpool/localhome@21218-date2012-08-05-hour21          201M      -   621G  -
localpool/localhome@22218-date2012-08-05-hour22          198M      -   621G  -
localpool/localhome@23218-date2012-08-05-hour23          164M      -   621G  -
localpool/localhome@00219-date2012-08-06-hour00          116M      -   621G  -
localpool/localhome@01219-date2012-08-06-hour01          113M      -   621G  -
localpool/localhome@02219-date2012-08-06-hour02          127M      -   621G  -
localpool/localhome@03219-date2012-08-06-hour03          130M      -   621G  -
localpool/localhome@04219-date2012-08-06-hour04          212M      -   621G  -
localpool/localhome@05219-date2012-08-06-hour05         4.29G      -   628G  -
localpool/localhome@06219-date2012-08-06-hour06          282M      -   624G  -
localpool/localhome@07219-date2012-08-06-hour07          220M      -   624G  -
localpool/localhome@08219-date2012-08-06-hour08          186M      -   624G  -
localpool/localhome@09219-date2012-08-06-hour09          265M      -   624G  -
localpool/localhome@10219-date2012-08-06-hour10          233M      -   624G  -
localpool/localhome@11219-date2012-08-06-hour11          218M      -   624G  -
localpool/tmp                                             28K   109G    28K  /localpool/tmp
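For reference, this is roughly how I totaled the snapshot USED column
(a quick shell/awk sketch with the unit suffixes converted to GB; the
zfs flags are standard, but I have only eyeballed the arithmetic):

  # sum the USED column of every snapshot under localhome, in GB
  zfs list -H -t snapshot -o used -r localpool/localhome | \
    awk '{ v=$1; u=substr(v,length(v),1); n=substr(v,1,length(v)-1);
           if (u=="K") n/=1048576;      # KB -> GB
           else if (u=="M") n/=1024;    # MB -> GB
           else if (u=="T") n*=1024;    # TB -> GB; G passes through
           sum+=n } END { printf "%.1f GB in snapshots\n", sum }'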
thanks,
Burt Hailey
-----Original Message-----
From: zfs-discuss-bounces@opensolaris.org
[mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of
zfs-discuss-request@opensolaris.org
Sent: Saturday, August 04, 2012 6:34 AM
To: zfs-discuss@opensolaris.org
Subject: zfs-discuss Digest, Vol 82, Issue 11
Today's Topics:

   1. Re: Missing disk space (Cindy Swearingen)
   2. Re: what have you been buying for slog and l2arc? (Bob Friesenhahn)
   3. Re: what have you been buying for slog and l2arc? (Hung-Sheng Tsao (LaoTsao) Ph.D)
   4. Re: what have you been buying for slog and l2arc? (Neil Perrin)
   5. Re: what have you been buying for slog and l2arc? (Eugen Leitl)
   6. Re: what have you been buying for slog and l2arc? (Hung-Sheng Tsao (LaoTsao) Ph.D)

----------------------------------------------------------------------
Message: 1
Date: Fri, 03 Aug 2012 17:03:51 -0600
From: Cindy Swearingen <cindy.swearingen@oracle.com>
To: Burt Hailey <bhailey@triunesystems.com>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Missing disk space
Message-ID: <501C58D7.6050404@oracle.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
You said you're new to ZFS, so you might consider using zpool list and
zfs list rather than df -k to reconcile your disk space.

In addition, your pool type (mirrored or RAIDZ) provides a different
space perspective in zpool list that is not always easy to understand.

http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-6.html#scrolltoc

See these sections:

Displaying ZFS File System Information
Resolving ZFS File System Space Reporting Issues

Let us know if this doesn't help.

Thanks,

Cindy
On 08/03/12 16:00, Burt Hailey wrote:
> I seem to be missing a large amount of disk space and am not sure how
> to locate it. My pool has a total of 1.9TB of disk space. When I run
> df -k I see that the pool is using ~650GB of space and has only ~120GB
> available. Running zfs list shows that my pool (localpool) is using
> 1.67T. When I total up the amount of snapshots I see that they are
> using <250GB. Unless I'm missing something it appears that there is
> ~750GB of disk space that is unaccounted for. We do hourly snapshots.
> Two days ago I deleted 100GB of data and did not see a corresponding
> increase in snapshot sizes. I'm new to zfs and am reading the zfs
> admin handbook but I wanted to post this to get some suggestions on
> what to look at.
>
> Burt Hailey
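P.S. A minimal version of the reconciliation those doc sections walk
through might look like this (a sketch from memory; exact column names
can differ slightly between releases):

  zpool list localpool            # pool-wide view; for RAIDZ pools the
                                  # ALLOC figure includes parity overhead
  zfs list -r -o space localpool  # usable-space view, with USED split
                                  # into snapshot/dataset/children parts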
------------------------------
Message: 2
Date: Fri, 3 Aug 2012 20:39:55 -0500 (CDT)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: Karl Rossing <karl.rossing@barobinson.ca>
Cc: ZFS filesystem discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
	l2arc?
Message-ID: <alpine.GSO.2.01.1208032035270.27589@freddy.simplesystems.org>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Fri, 3 Aug 2012, Karl Rossing wrote:
> I'm looking at
> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
> wondering what I should get.
>
> Are people getting intel 330's for l2arc and 520's for slog?

For the slog, you should look for a SLC technology SSD which saves
unwritten data on power failure. In Intel-speak, this is called
"Enhanced Power Loss Data Protection". I am not running across any
Intel SSDs which claim to match these requirements.

Extreme write IOPS claims in consumer SSDs are normally based on large
write caches which can lose even more data if there is a power failure.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
------------------------------
Message: 3
Date: Fri, 3 Aug 2012 22:05:03 -0400
From: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laotsao at gmail.com>
To: Bob Friesenhahn <bfriesen at simple.dallas.tx.us>
Cc: Karl Rossing <karl.rossing at barobinson.ca>, ZFS filesystem
discussion list <zfs-discuss at opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
l2arc?
Message-ID: <650F7BF0-FC24-4619-A6D6-1D40855C946B at gmail.com>
Content-Type: text/plain; charset="us-ascii"
Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise
Solid State Disk SSDSA2VP020G201
Sent from my iPad
On Aug 3, 2012, at 21:39, Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
wrote:
> On Fri, 3 Aug 2012, Karl Rossing wrote:
>
>> I'm looking at
>> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
>> wondering what I should get.
>>
>> Are people getting intel 330's for l2arc and 520's for slog?
>
> For the slog, you should look for a SLC technology SSD which saves
> unwritten data on power failure. In Intel-speak, this is called
> "Enhanced Power Loss Data Protection". I am not running across any
> Intel SSDs which claim to match these requirements.
>
> Extreme write IOPS claims in consumer SSDs are normally based on
> large write caches which can lose even more data if there is a power
> failure.
>
> Bob
> --
> Bob Friesenhahn
> bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
------------------------------
Message: 4
Date: Fri, 03 Aug 2012 23:29:43 -0600
From: Neil Perrin <neil.perrin@oracle.com>
To: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
Cc: Karl Rossing <karl.rossing@barobinson.ca>, ZFS filesystem
	discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
	l2arc?
Message-ID: <501CB347.50506@oracle.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 08/03/12 19:39, Bob Friesenhahn wrote:
> On Fri, 3 Aug 2012, Karl Rossing wrote:
>
>> I'm looking at
>> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
>> wondering what I should get.
>>
>> Are people getting intel 330's for l2arc and 520's for slog?
>
> For the slog, you should look for a SLC technology SSD which saves
> unwritten data on power failure. In Intel-speak, this is called
> "Enhanced Power Loss Data Protection". I am not running across any
> Intel SSDs which claim to match these requirements.

- That shouldn't be necessary. ZFS flushes the write cache for any
device written before returning from the synchronous request to ensure
data stability.

>
> Extreme write IOPS claims in consumer SSDs are normally based on
> large write caches which can lose even more data if there is a power
> failure.
>
> Bob
------------------------------
Message: 5
Date: Sat, 4 Aug 2012 11:50:18 +0200
From: Eugen Leitl <eugen@leitl.org>
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] what have you been buying for slog and
	l2arc?
Message-ID: <20120804095018.GO12615@leitl.org>
Content-Type: text/plain; charset=us-ascii
On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:
> For the slog, you should look for a SLC technology SSD which saves
> unwritten data on power failure. In Intel-speak, this is called
> "Enhanced Power Loss Data Protection". I am not running across any
> Intel SSDs which claim to match these requirements.

The
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-710-series.html
seems to qualify:

"Enhanced power-loss data protection. Saves all cached data in the
process of being written before the Intel SSD 710 Series shuts down,
which helps minimize potential data loss in the event of an unexpected
system power loss."

> Extreme write IOPS claims in consumer SSDs are normally based on large
> write caches which can lose even more data if there is a power failure.

Intel 311 with a good UPS would seem to be a reasonable tradeoff.
------------------------------
Message: 6
Date: Sat, 4 Aug 2012 07:32:37 -0400
From: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laotsao at gmail.com>
To: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laotsao at gmail.com>
Cc: Karl Rossing <karl.rossing at barobinson.ca>, ZFS filesystem
discussion list <zfs-discuss at opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
l2arc?
Message-ID: <5BA00FD9-6AB7-4CB6-910D-154C674F67A3 at gmail.com>
Content-Type: text/plain; charset="us-ascii"
hi
maybe check out the STEC SSDs, or check out the service manual of the
Sun ZFS storage appliance to see the read and write SSDs in the system
regards

Sent from my iPad

On Aug 3, 2012, at 22:05, "Hung-Sheng Tsao (LaoTsao) Ph.D"
<laotsao@gmail.com> wrote:
> Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise
> Solid State Disk SSDSA2VP020G201
>
> Sent from my iPad
>
> On Aug 3, 2012, at 21:39, Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
> wrote:
>
>> On Fri, 3 Aug 2012, Karl Rossing wrote:
>>
>>> I'm looking at
>>> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
>>> wondering what I should get.
>>>
>>> Are people getting intel 330's for l2arc and 520's for slog?
>>
>> For the slog, you should look for a SLC technology SSD which saves
>> unwritten data on power failure. In Intel-speak, this is called
>> "Enhanced Power Loss Data Protection". I am not running across any
>> Intel SSDs which claim to match these requirements.
>>
>> Extreme write IOPS claims in consumer SSDs are normally based on
>> large write caches which can lose even more data if there is a power
>> failure.
>>
>> Bob
>> --
>> Bob Friesenhahn
>> bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
------------------------------
End of zfs-discuss Digest, Vol 82, Issue 11
*******************************************