I have been playing with snapshots and backups after bfu'ing to 20060313, and noticed this oddity:

# zfs create snapshot mailtank/stuff@20060315
# zfs backup mailtank/stuff@20060315 > /mailtank/test/stuff.backup
cannot write backup stream: File too large

# ls -l /mailtank/test/stuff.backup
-rw-r--r--   1 root     root     2147483647 Mar 15 17:05 /mailtank/test/stuff.backup

truss says:

100760: open("/dev/zfs", O_RDWR)                      = 4
100760: fstat64(4, 0x08046BF0)                        = 0
100760: stat64("/dev/pts/0", 0x08046D00)              = 0
100760: ioctl(4, ZFS_IOC_OBJSET_STATS, 0x08046E1C)    = 0
100760: ioctl(4, ZFS_IOC_SENDBACKUP, 0x08046E5C)      Err#27 EFBIG

this happens writing to both ZFS and UFS file systems (UFS is mounted with 'largefile', of course). writing to a pipe works fine.

is anyone else seeing this?

grant.
Hmm, it looks like this occurs only on 32-bit kernels, but there it happens every time a file larger than 2GB is written. We aren't generating this errno within ZFS, so it probably has to do with how we're interacting with the rest of the system. I'll look into it and report back here.

I've filed the following bug to track this issue:
6398622 'zfs backup > file' can get 'file too large' error on 32-bit systems

Thanks for running (and breaking) ZFS!
--matt

On Wed, Mar 15, 2006 at 05:28:12PM +1100, grant beattie wrote:
> # zfs create snapshot mailtank/stuff@20060315
> # zfs backup mailtank/stuff@20060315 > /mailtank/test/stuff.backup
> cannot write backup stream: File too large
>
> this happens writing to both ZFS and UFS file systems (UFS is mounted
> with 'largefile', of course). writing to a pipe works fine.
>
> is anyone else seeing this?
grant beattie wrote:
> I have been playing with snapshots and backups after bfu'ing to
> 20060313, and noticed this oddity:
>
> # zfs create snapshot mailtank/stuff@20060315
> # zfs backup mailtank/stuff@20060315 > /mailtank/test/stuff.backup
> cannot write backup stream: File too large

This looks like the result of a non-large-file-aware shell. What shell do you use, and have you tried it with other Solaris shells?

Rob T
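For context, here is a minimal C sketch of the failure mode Rob is suggesting; the file name is hypothetical and nothing here is taken from the thread, and as the next message shows the shells grant tested turn out not to be the culprit. In a plain 32-bit compilation environment, a descriptor opened without O_LARGEFILE has an offset maximum of 2^31 - 1 bytes (2147483647 -- exactly the size the backup file stopped at), and a write at or past that offset fails with EFBIG, "File too large". A shell that is not large-file aware opens its "> file" redirection target exactly this way.

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int
main(void)
{
    /* what a non-large-file-aware shell effectively does for "> file"
       (hypothetical file name; add O_LARGEFILE and the write succeeds) */
    int fd = open("stuff.backup", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* seek to the 2^31 - 1 offset maximum of a non-largefile descriptor */
    if (lseek(fd, 2147483647L, SEEK_SET) < 0) {
        perror("lseek");
        return 1;
    }

    /* any write at or past the offset maximum fails */
    if (write(fd, "x", 1) < 0)
        perror("write");                /* EFBIG: "File too large" */
    else
        printf("write succeeded (large-file aware environment)\n");

    return 0;
}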
On Wed, Mar 15, 2006 at 06:17:21AM -0700, Robert Thurlow wrote:
> > # zfs create snapshot mailtank/stuff@20060315
> > # zfs backup mailtank/stuff@20060315 > /mailtank/test/stuff.backup
> > cannot write backup stream: File too large
>
> This looks like the result of a non-large-file-aware shell. What
> shell do you use, and have you tried it with other Solaris shells?

Robert,

I initially suspected this too, so I tried it with the Sun-supplied SUNWtcsh, /bin/csh and /bin/ksh, and all fail in the same way. however, all of these shells can write large files using "cat > file", so it is specific to redirecting "zfs backup" output.

the issue can be worked around by running "zfs backup | cat > filename", but I figured this was still a bug that should be filed and fixed.

grant.
On Wed, Mar 15, 2006 at 01:07:16AM -0800, Matthew Ahrens wrote:
> I've filed the following bug to track this issue:
> 6398622 'zfs backup > file' can get 'file too large' error on 32-bit systems
>
> Thanks for running (and breaking) ZFS!

thanks, Matt.

I also noticed this just this morning:

# zfs list mailtank/stuff@20060315
NAME                      USED  AVAIL  REFER  MOUNTPOINT
mailtank/stuff@20060315    18K      -  2.75G  -

however, the file written by "zfs backup" is 3.7G in size.

the filesystem has compression turned on, but there is only 3.3G (uncompressed) on it, so this alone doesn't explain the backup size discrepancy.

any ideas where the extra bits might be coming from? are there any tools apart from "zfs restore" to inspect backups?

grant.
On Thu, Mar 16, 2006 at 09:45:50AM +1100, grant beattie wrote:
> the filesystem has compression turned on, but there is only 3.3G
> (uncompressed) on it, so this alone doesn't explain the backup size
> discrepancy.
>
> any ideas where the extra bits might be coming from? are there any
> tools apart from "zfs restore" to inspect backups?

The extra 12% is probably due to the fact that the backup stream has a bunch of padding in each record. The data is not stored compressed, because you probably want to apply a more expensive compression algorithm to the whole stream (e.g. 'zfs backup | gzip > file.zbak.gz').

There are not currently any tools for examining a zfs backup other than 'zfs restore -nv'. But that reminds me, I have one in my home directory that I should get around to integrating (bugid 6399128).

--matt

ps. FYI, I expect to integrate a fix for 6398622 ('zfs backup > file' can get 'file too large' error on 32-bit systems) later today.
Thanks to Jeff Bonwick's quick observation that our infinity wasn't quite big enough (it was using RLIM_INFINITY rather than RLIM64_INFINITY), a fix for this bug has been putback today and will be part of build 37.

Happy 'zfs backup'-ing!

--matt

On Wed, Mar 15, 2006 at 01:07:16AM -0800, Matthew Ahrens wrote:
> I've filed the following bug to track this issue:
> 6398622 'zfs backup > file' can get 'file too large' error on 32-bit systems
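For anyone curious about the mechanics, the following is an illustrative sketch only, not the actual putback, and offset_check() is a made-up helper. The point is the size of the two "infinity" constants: in a 32-bit environment RLIM_INFINITY is a 32-bit value, 0x7fffffff = 2147483647 bytes (exactly where the backup file stopped), so using it as a write limit caps the output at 2 GiB - 1, while RLIM64_INFINITY is a 64-bit "unlimited" value. On a 64-bit kernel the two limits are effectively the same, which is why only 32-bit systems hit the bug.

#define _LARGEFILE64_SOURCE    /* exposes the 64-bit rlimit interfaces */
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>

/* fail with EFBIG if a write ending at offset 'end' would exceed 'limit' */
static int
offset_check(unsigned long long end, unsigned long long limit)
{
    return end > limit ? EFBIG : 0;
}

int
main(void)
{
    unsigned long long end = 3ULL * 1024 * 1024 * 1024;   /* a ~3GB backup */

    /* broken: in a 32-bit environment RLIM_INFINITY is only 0x7fffffff,
       so this prints 27 (EFBIG) -- the file can never pass 2 GiB - 1 */
    printf("limit RLIM_INFINITY:   %d\n",
        offset_check(end, (unsigned long long)RLIM_INFINITY));

    /* fixed: RLIM64_INFINITY is a genuinely unlimited 64-bit value,
       so this prints 0 and the backup can grow as large as it needs */
    printf("limit RLIM64_INFINITY: %d\n",
        offset_check(end, (unsigned long long)RLIM64_INFINITY));

    return 0;
}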
On Wed, Mar 15, 2006 at 06:01:07PM -0800, Matthew Ahrens wrote:
> Thanks to Jeff Bonwick's quick observation that our infinity wasn't
> quite big enough (it was using RLIM_INFINITY rather than
> RLIM64_INFINITY), a fix for this bug has been putback today and will be
> part of build 37.

awesome.. thanks Matt and Jeff :-)

grant.