Hi,

Created a zpool with a 64k recordsize and enabled dedup on it:

  zpool create -O recordsize=64k TestPool device1
  zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client.

Here is the output of zpool list:

  Prompt:~# zpool list
  NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
  TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -

When I ran a "dir /s" command on the share from a Windows client cmd, I see the file size as 51,193,782,290 bytes. The ALLOC size reported by zpool, together with the DEDUP ratio of 1.13x, does not add up to 51,193,782,290 bytes. According to the DEDUP (dedup ratio), the amount of data copied is 21.58G (19.1G * 1.13).

Here is the output from zdb -DD:

  Prompt:~# zdb -DD TestPool
  DDT-sha256-zap-duplicate: 33536 entries, size 272 on disk, 140 in core
  DDT-sha256-zap-unique: 278241 entries, size 274 on disk, 142 in core

  DDT histogram (aggregated over all DDTs):

  bucket              allocated                       referenced
  ______   ______________________________   ______________________________
  refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
  ------   ------   -----   -----   -----   ------   -----   -----   -----
       1     272K   17.0G   17.0G   17.0G     272K   17.0G   17.0G   17.0G
       2    32.7K   2.05G   2.05G   2.05G    65.6K   4.10G   4.10G   4.10G
       4       15    960K    960K    960K       71   4.44M   4.44M   4.44M
       8        4    256K    256K    256K       53   3.31M   3.31M   3.31M
      16        1     64K     64K     64K       16      1M      1M      1M
     512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
      1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
      4K        1     64K     64K     64K    5.33K    341M    341M    341M
   Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G

  dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13

Am I missing something?

Your inputs are much appreciated.

Thanks,
Giri
--
This message posted from opensolaris.org
Henrik Johansson
2009-Dec-15 20:15 UTC
[zfs-discuss] ZFS Dedupe reporting incorrect savings
Hello,

On Dec 15, 2009, at 8:02 AM, Giridhar K R wrote:

> Hi,
> Created a zpool with 64k recordsize and enabled dedup on it:
>   zpool create -O recordsize=64k TestPool device1
>   zfs set dedup=on TestPool
>
> I copied files onto this pool over NFS from a Windows client.
>
> Here is the output of zpool list:
>   Prompt:~# zpool list
>   NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
>   TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -
>
> When I ran a "dir /s" command on the share from a Windows client cmd, I see the file size as 51,193,782,290 bytes. The ALLOC size reported by zpool along with the DEDUP of 1.13x does not add up to 51,193,782,290 bytes.
>
> According to the DEDUP (dedup ratio), the amount of data copied is 21.58G (19.1G * 1.13)

Are you sure this problem is related to ZFS, and not a Windows, link, or CIFS issue? Have you looked at the filesystem from the OpenSolaris host locally? Are you sure there are no links in the filesystem that the Windows client also counts?

Henrik
http://sparcv9.blogspot.com
As I noted above after editing the initial post, it's the same locally too:

> I found that "ls -l" on the zpool also reports 51,193,782,290 bytes
Hi,

Reposting as I have not gotten any response.

Here is the issue. I created a zpool with a 64k recordsize and enabled dedup on it:

  --> zpool create -O recordsize=64k TestPool device1
  --> zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client.

Here is the output of zpool list:

  --> zpool list
  NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
  TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -

I ran "ls -l /TestPool" and saw the total size reported as 51,193,782,290 bytes. The ALLOC size reported by zpool, together with the DEDUP ratio of 1.13x, does not add up to 51,193,782,290 bytes. According to the DEDUP (dedup ratio), the amount of data copied is 21.58G (19.1G * 1.13).

Here is the output from zdb -DD:

  --> zdb -DD TestPool
  DDT-sha256-zap-duplicate: 33536 entries, size 272 on disk, 140 in core
  DDT-sha256-zap-unique: 278241 entries, size 274 on disk, 142 in core

  DDT histogram (aggregated over all DDTs):

  bucket              allocated                       referenced
  ______   ______________________________   ______________________________
  refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
  ------   ------   -----   -----   -----   ------   -----   -----   -----
       1     272K   17.0G   17.0G   17.0G     272K   17.0G   17.0G   17.0G
       2    32.7K   2.05G   2.05G   2.05G    65.6K   4.10G   4.10G   4.10G
       4       15    960K    960K    960K       71   4.44M   4.44M   4.44M
       8        4    256K    256K    256K       53   3.31M   3.31M   3.31M
      16        1     64K     64K     64K       16      1M      1M      1M
     512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
      1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
      4K        1     64K     64K     64K    5.33K    341M    341M    341M
   Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G

  dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13

Am I missing something?

Your inputs are much appreciated.

Thanks,
Giri
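[For anyone following along: the 1.13x ratio can be reproduced directly from the DDT totals above. A quick sketch, plain arithmetic only, not tied to any ZFS API:]

```python
# DDT totals from the "zdb -DD TestPool" output above, in GiB.
allocated_dsize = 19.0   # unique blocks actually stored on disk
referenced_dsize = 21.5  # blocks as the filesystem sees them (duplicates counted)

# The dedup ratio zpool reports is referenced / allocated.
ratio = referenced_dsize / allocated_dsize
print(f"dedup ratio = {ratio:.2f}x")      # -> 1.13x, matching zpool list

# The 51,193,782,290 bytes from "ls -l" is a logical size; in GiB:
ls_gib = 51_193_782_290 / 2**30
print(f"ls -l total = {ls_gib:.1f} GiB")  # -> 47.7 GiB
```

[So the open question in the thread is the gap between the ~47.7 GiB logical total and the 21.5 GiB of referenced data that ZFS actually wrote.]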
Hi Giridhar,

The size reported by ls can include things like holes in the file. What space usage does the zfs(1M) command report for the filesystem?

Adam

On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:

> Hi,
>
> Reposting as I have not gotten any response.
>
> Here is the issue. I created a zpool with 64k recordsize and enabled dedup on it:
> --> zpool create -O recordsize=64k TestPool device1
> --> zfs set dedup=on TestPool
>
> I copied files onto this pool over NFS from a Windows client.
>
> Here is the output of zpool list:
> --> zpool list
> NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -
>
> I ran "ls -l /TestPool" and saw the total size reported as 51,193,782,290 bytes.
> The ALLOC size reported by zpool along with the DEDUP of 1.13x does not add up to 51,193,782,290 bytes.
>
> According to the DEDUP (dedup ratio), the amount of data copied is 21.58G (19.1G * 1.13)
>
> [zdb -DD output quoted in full above, trimmed here]
>
> Am I missing something?
>
> Thanks,
> Giri

--
Adam Leventhal, Fishworks
http://blogs.sun.com/ahl
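[Adam's point about holes can be demonstrated on most filesystems, not just ZFS. A minimal sketch; the /tmp path and sizes are arbitrary:]

```shell
# Write a single byte at offset 1 MiB - 1; everything before it is a hole
# that occupies no disk blocks.
dd if=/dev/zero of=/tmp/holey bs=1 count=1 seek=1048575 2>/dev/null

ls -l /tmp/holey   # reports the logical size: 1048576 bytes
du -k /tmp/holey   # reports the actual allocation: a few KiB at most

rm -f /tmp/holey
```

[ls -l shows st_size (logical length), while du shows st_blocks (physical allocation). On a ZFS pool the same distinction applies, which is why an ls total can vastly exceed zpool's ALLOC.]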
Adam Leventhal wrote:

> Hi Giridhar,
>
> The size reported by ls can include things like holes in the file. What space usage does the zfs(1M) command report for the filesystem?
>
> Adam

Thanks for the response, Adam.

Are you talking about zfs list? It displays 19.6 as the allocated space.

What does ZFS treat as a hole, and how does it identify one?

Thanks,
Giri
Giridhar K R wrote:

> Thanks for the response, Adam.
>
> Are you talking about zfs list?
>
> It displays 19.6 as the allocated space.
>
> What does ZFS treat as a hole and how does it identify one?

ZFS will compress blocks of zeros down to nothing and treat them like sparse files. 19.6 is pretty close to your computed value. Does your pool happen to be 10+1 RAID-Z?

Adam

--
Adam Leventhal, Fishworks
http://blogs.sun.com/ahl
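[If zero-filled regions really are being stored as holes, the numbers in this thread are consistent. A back-of-the-envelope check, GiB conversions only, leaving the RAID-Z question aside:]

```python
# Logical size reported by "ls -l" on the pool, in bytes.
ls_bytes = 51_193_782_290                    # about 47.7 GiB

# "referenced" LSIZE from the DDT histogram: data ZFS actually wrote, pre-dedup.
referenced_bytes = 21.5 * 2**30

# Bytes that belong to the files but were never written: candidate holes/zeros.
gap_gib = (ls_bytes - referenced_bytes) / 2**30
print(f"unaccounted for by written data: {gap_gib:.1f} GiB")  # -> 26.2 GiB
```

[In other words, if roughly 26 GiB of the copied files were runs of zeros (common in disk images and preallocated database files, for example), ZFS would store them as holes, and the ls total, the DDT totals, and the 1.13x dedup ratio would all agree.]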