Reid Spencer
2007-Dec-03 19:28 UTC
[zfs-discuss] What does "dataset is busy" actually mean?
Hello,

I'm trying to track down a problem with taking ZFS snapshots. On occasion the zfs command will report:

cannot snapshot '<dataset name>': dataset is busy

The problem is, I don't know what causes zfs to think the dataset is "busy". Does anyone out there know what constitutes a busy dataset?

I did some testing, and mounting, NFS sharing, and writing to the dataset don't seem to affect its "busy" status. That is, I can take a snapshot of a dataset that is mounted, in use over NFS, and being written to continuously. This leaves me a little puzzled about the definition of "busy".

Any help would be greatly appreciated.

Thanks,

Reid Spencer
illumita.com
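P.S. The test amounted to roughly the following sketch (the pool and dataset names tank/data are hypothetical):

# create a test dataset and share it over NFS
zfs create tank/data
zfs set sharenfs=on tank/data

# keep a continuous writer going in the background
dd if=/dev/zero of=/tank/data/fill bs=128k count=100000 &

# the snapshot still succeeds while the dataset is mounted,
# shared, and being written to
zfs snapshot tank/data@test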
I've hit the problem myself recently, and mounting the filesystem cleared something in the brains of ZFS and allowed me to snapshot.

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg00812.html

PS: I'll use Google before asking some questions, à la (C) Bart Simpson. That's how I found your question ;)
Reid Spencer
2008-Jan-12 10:21 UTC
[zfs-discuss] What does "dataset is busy" actually mean?
Yes, it seems that mounting and then unmounting it with the zfs command clears the condition and allows the dataset to be destroyed. This seems to be a bug in ZFS, or at least an annoyance. I verified with fuser that no processes were using the filesystem. Now, what I'd really like to know is: what causes a dataset to get into this state?
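In shell terms, the sequence that worked for me was roughly this (the dataset name tank/data is hypothetical):

# confirm nothing has files open in the filesystem
fuser -c /tank/data

# cycle the mount to clear whatever is pinning the dataset
zfs mount tank/data
zfs unmount tank/data

# and now the destroy goes through
zfs destroy tank/data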
Reid Spencer
2008-Jan-12 10:22 UTC
[zfs-discuss] What does "dataset is busy" actually mean?
Hmm, actually, no. I just ran into a dataset where the mount/unmount doesn't clear the condition. I still get "dataset is busy" when attempting to destroy it.
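For what it's worth, here is roughly what I'm checking, in case it helps someone spot what I'm missing (the names tank and tank/data are hypothetical, and this surely isn't an exhaustive list of causes):

# is it still mounted somewhere, possibly under an unexpected mountpoint?
zfs get mounted,mountpoint tank/data
mount | grep tank/data

# any leftover snapshots, or clones elsewhere in the pool that originate from it?
zfs list -r -t snapshot tank/data
zfs list -o name,origin -r tank

# any processes with open files in it?
fuser -c /tank/data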
Rob Logan
2008-Jan-12 15:55 UTC
[zfs-discuss] What does "dataset is busy" actually mean? [creating snap]
> what causes a dataset to get into this state?

While I'm not exactly sure, I do have the steps leading up to when I saw it while trying to create a snapshot, i.e.:

10 % zfs snapshot z/b80nd/var@0107nd
cannot create snapshot 'z/b80nd/var@0107nd': dataset is busy
13 % mount -F zfs z/b80nd/var /z/b80nd/var
mount: Mount point /z/b80nd/var does not exist.
14 % mount -F zfs z/b80nd/var /mnt
15 % zfs snapshot -r z/b80nd@0107nd
16 % zfs list | grep 0107
root/0107nd          455M   107G  6.03G  legacy
root/b80nd@0107nd   50.5M      -  6.02G  -
z/b80nd@0107nd          0      -   243M  -
z/b80nd/opt@0107nd      0      -  1.18G  -
z/b80nd/usr@0107nd      0      -  2.25G  -
z/b80nd/var@0107nd      0      -  56.3M  -

Running 64-bit opensol-20080107 on Intel. To get there, I was walking through this cookbook:

zfs snapshot root/b80nd@0107nd
zfs clone root/b80nd@0107nd root/0107nd
cat /etc/vfstab | sed s/^root/#root/ | sed s/^z/#z/ > /root/0107nd/etc/vfstab
echo "root/0107nd - / zfs - no -" >> /root/0107nd/etc/vfstab
cat /root/0107nd/etc/vfstab
zfs snapshot -r z/b80nd@dump
rsync -a --del --verbose /usr/.zfs/snapshot/dump/ /root/0107nd/usr
rsync -a --del --verbose /opt/.zfs/snapshot/dump/ /root/0107nd/opt
rsync -a --del --verbose /var/.zfs/snapshot/dump/ /root/0107nd/var
zfs set mountpoint=legacy root/0107nd
zpool set bootfs=root/0107nd root
reboot

mkdir -p /z/tmp/bfu ; cd /z/tmp/bfu
wget http://dlc.sun.com/osol/on/downloads/20080107/SUNWonbld.i386.tar.bz2
bzip2 -d -c SUNWonbld.i386.tar.bz2 | tar -xvf -
pkgadd -d onbld
wget http://dlc.sun.com/osol/on/downloads/20080107/on-bfu-nightly-osol-nd.i386.tar.bz2
bzip2 -d -c on-bfu-nightly-osol-nd.i386.tar.bz2 | tar -xvf -
setenv FASTFS /opt/onbld/bin/i386/fastfs
setenv BFULD /opt/onbld/bin/i386/bfuld
setenv GZIPBIN /usr/bin/gzip
/opt/onbld/bin/bfu /z/tmp/bfu/archives-nightly-osol-nd/i386
/opt/onbld/bin/acr
echo etc/zfs/zpool.cache >> /boot/solaris/filelist.ramdisk ; echo bug in bfu
reboot
rm -rf /bfu* /.make* /.bfu*

zfs snapshot root/0107nd@dump
mount -F zfs z/b80nd/var /mnt ; echo bug in zfs
zfs snapshot -r z/b80nd@0107nd
zfs clone z/b80nd@0107nd z/0107nd
zfs set compression=lzjb z/0107nd
zfs clone z/b80nd/usr@0107nd z/0107nd/usr
zfs clone z/b80nd/var@0107nd z/0107nd/var
zfs clone z/b80nd/opt@0107nd z/0107nd/opt
rsync -a --del --verbose /.zfs/snapshot/dump/ /z/0107nd
zfs set mountpoint=legacy z/0107nd/usr
zfs set mountpoint=legacy z/0107nd/opt
zfs set mountpoint=legacy z/0107nd/var
echo "z/0107nd/usr - /usr zfs - yes -" >> /etc/vfstab
echo "z/0107nd/var - /var zfs - yes -" >> /etc/vfstab
echo "z/0107nd/opt - /opt zfs - yes -" >> /etc/vfstab
reboot

Heh heh, booting from a clone of a clone... Wasted space under root/`uname -v`/usr for a few libs needed at boot, but having /usr, /var, and /opt on the compressed pool with two raidz vdevs boots to login in 45 seconds rather than 52 seconds on the single-vdev root pool.
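If you try this at home, a quick sanity check that the right dataset will boot and that the clones picked up the compression setting might look something like this (using the pool and dataset names from the transcript above):

# which dataset will the loader boot?
zpool get bootfs root

# did the clone tree inherit lzjb compression?
zfs get -r compression z/0107nd

# where does everything mount, and what is each clone's origin?
zfs list -o name,origin,mountpoint -r z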