Hello, I'm having a weird issue with my incremental setup.

Here is the filesystem as it shows up with zfs list:

NAME           USED  AVAIL  REFER  MOUNTPOINT
Data/FS1       771M  16.1T   116M  /Data/FS1
Data/FS1@05   10.3G      -  1.93T  -
Data/FS1@06   14.7G      -  1.93T  -
Data/FS1@07       0      -  1.93T  -

Every day, I sync this filesystem remotely with:

    zfs send -I X Y | ssh blab@blah zfs receive Z

Now I'm having a hard time transferring @06 to @07, so I tried writing the stream to a file on the local filesystem, only to find that the stream was more than 50G!

Does anyone know why my stream is so much bigger than the actual snapshot size (14.7G)? I don't have this problem on my other filesystems.

Thanks

-- This message posted from opensolaris.org
On 01/11/11 11:40 AM, fred wrote:
> Anyone know why my stream is way bigger than the actual snapshot size (14.7G)? I don't have this problem on my other filesystems.

Compression?

-- Ian.
No compression, no dedup. I also forgot to mention it's on snv_134.
On Mon, Jan 10, 2011 at 2:40 PM, fred <fred at mautadine.com> wrote:
> Anyone know why my stream is way bigger than the actual snapshot size (14.7G)? I don't have this problem on my other filesystems.

I think you are confused because the idea of "actual snapshot size" is not well defined. The stream size for a given snapshot is approximately the space that's "new to that snapshot", which is not readily available. However, it is somewhere between the snapshot's Unique space and its Referenced space. (The Used space for a snapshot is the same as its Unique space.)

In your case, the stream size for @07 is 50GB, which is between the Unique space (0) and the Referenced space (1.9TB).

We can actually put some more constraints on the stream size:

1. It must be more than the Unique space.
2. It must be less than the Referenced space.
3. It must be more than (the Referenced space) - (the previous snapshot's Referenced space).
4. It must be less than the filesystem's usedbysnapshots property.

For more information, see the zfs(1m) manpage:

    Native Properties
    ...
    used
        When snapshots (see the "Snapshots" section) are created,
        their space is initially shared between the snapshot and
        the file system, and possibly with previous snapshots. As
        the file system changes, space that was previously shared
        becomes unique to the snapshot, and counted in the
        snapshot's space used. Additionally, deleting snapshots
        can increase the amount of space unique to (and used by)
        other snapshots.

--matt

ps. I assume that your filesystem (Data/FS1) is a clone, since its Used is less than the snapshots' Referenced (and even their Unique/Used!). The above information applies to clones the same as to any other filesystem.
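[Editorial note: the four constraints above boil down to simple arithmetic. A hedged sketch using round figures from the thread's zfs list output; the usedbysnapshots value is made up, since the thread never shows it.]

```shell
#!/bin/sh
# Illustrative bounds on the @06 -> @07 incremental stream size.
# Values in GiB; 1.93T rounded to 1976G. usedbysnapshots is hypothetical.
unique=0                # @07 USED (its unique space)      -- constraint 1
referenced=1976         # @07 REFER                        -- constraint 2
prev_referenced=1976    # @06 REFER                        -- constraint 3
usedbysnapshots=25600   # made-up filesystem usedbysnapshots -- constraint 4

# Lower bound: the larger of Unique and (REFER - previous REFER).
lower=$unique
delta=$((referenced - prev_referenced))
if [ "$delta" -gt "$lower" ]; then lower=$delta; fi

# Upper bound: the smaller of Referenced and usedbysnapshots.
upper=$referenced
if [ "$usedbysnapshots" -lt "$upper" ]; then upper=$usedbysnapshots; fi

echo "stream size bounds: ${lower}G .. ${upper}G"
```

Since @06 and @07 refer to the same 1.93T, constraint 3 adds nothing here, so the observed 50G stream sits comfortably inside the resulting 0G..1976G bounds: surprising, but not inconsistent.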
Thanks for this explanation.

So there is no real way to estimate the size of the increment?

Anyway, for this particular filesystem, I'll stick with rsync. And yes, the difference was 50G!

Thanks
On Thu, Jan 13, 2011 at 4:36 AM, fred <fred at mautadine.com> wrote:
> So there is no real way to estimate the size of the increment?

Unfortunately, not for now.

> Anyway, for this particular filesystem, I'll stick with rsync. And yes, the difference was 50G!

Why? I would expect rsync to be slower and to send more data, and it also cannot estimate how large the transfer will be.

--matt
Well, in this case, the data rsync sent is about the size of the USED column in "zfs list -t snapshot", while the zfs stream is four times bigger. Also, with rsync, if it fails in the middle, I don't have to start over.
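[Editorial note: both pain points in this thread, no up-front size estimate and no restart after a failure, were addressed in later OpenZFS releases; neither feature exists in snv_134, where this discussion takes place. A command sketch only, not runnable without a live pool, reusing the thread's placeholder dataset and host names.]

```shell
# Dry run: print an estimate of the incremental stream size without
# sending any data (zfs send -n -v in later OpenZFS releases):
zfs send -nv -I Data/FS1@06 Data/FS1@07

# Resumable replication: -s on the receiving side saves receive state
# if the stream is cut partway through:
zfs send -I Data/FS1@06 Data/FS1@07 | ssh blab@blah zfs receive -s Data/FS1

# After an interruption, read the resume token from the receiver...
token=$(ssh blab@blah zfs get -H -o value receive_resume_token Data/FS1)

# ...and resume the send from where it stopped:
zfs send -t "$token" | ssh blab@blah zfs receive -s Data/FS1
```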