Trying to destroy a pool that has 10 file systems and over 200,000 snapshots created (about 20,000 per file system). Do I now have to destroy each snapshot individually in order to destroy the pool?

zpool destroy -f ev
Assertion failed: errno == ENOMEM, file ../common/libzfs_util.c, line 114, function no_memory
Abort (core dumped)
-bash-3.00# zpool list
NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
ev     204G   88.5G  115G    43%   ONLINE   -

This message posted from opensolaris.org
Chuck -

What bits are you running? Can you send me the resulting corefile? This is definitely a bug, but I'd need to see the corefile to determine exactly what's going on.

- Eric

On Mon, Apr 24, 2006 at 08:19:21AM -0700, Chuck Gehr wrote:
> Trying to destroy a pool that has 10 file systems and over 200,000 snapshots created (about 20,000 per file system). Do I now have to destroy each snapshot individually in order to destroy the pool?
>
> zpool destroy -f ev
> Assertion failed: errno == ENOMEM, file ../common/libzfs_util.c, line 114, function no_memory
> Abort (core dumped)
> -bash-3.00# zpool list
> NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> ev     204G   88.5G  115G    43%   ONLINE   -

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock
On Mon, Apr 24, 2006 at 09:11:27AM -0700, Eric Schrock wrote:
> Chuck -
>
> What bits are you running? Can you send me the resulting corefile?
> This is definitely a bug, but I'd need to see the corefile to determine
> exactly what's going on.

This turns out to be a known problem with the way we iterate over datasets when unmounting them for a 'zpool destroy' or 'zpool export'. We rely on zfs_iter_dependents() to iterate over things in the correct order, when in fact we should be parsing /etc/mnttab and iterating over ZFS datasets within the pool in reverse hierarchical order. zfs_iter_dependents() is very expensive, as it does a full topological sort of the ZFS dataset hierarchy in order to find dependencies between clones.

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock
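[For readers following along: the "reverse hierarchical order" fix Eric describes can be sketched roughly as below. This is an illustrative standalone sketch, not the actual libzfs code; the mountpoint list is hypothetical sample data standing in for entries read from /etc/mnttab, and depth_cmp is a made-up helper name. The idea is simply to unmount the deepest mountpoints first, so every child dataset is unmounted before its parent, without any topological sort of clone dependencies.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sort callback: mountpoints with more '/' components sort first,
 * so children always precede their parents in the unmount order.
 */
static int
depth_cmp(const void *a, const void *b)
{
	const char *pa = *(const char *const *)a;
	const char *pb = *(const char *const *)b;
	int da = 0, db = 0;
	const char *p;

	for (p = pa; *p != '\0'; p++)
		if (*p == '/')
			da++;
	for (p = pb; *p != '\0'; p++)
		if (*p == '/')
			db++;
	if (da != db)
		return (db - da);	/* deeper mountpoint first */
	return (strcmp(pb, pa));	/* deterministic tie-break */
}

int
main(void)
{
	/* Hypothetical pool mountpoints, as read from /etc/mnttab. */
	const char *mounts[] = {
		"/ev",
		"/ev/fs1",
		"/ev/fs1/child",
		"/ev/fs2",
	};
	size_t n = sizeof (mounts) / sizeof (mounts[0]);
	size_t i;

	qsort(mounts, n, sizeof (mounts[0]), depth_cmp);

	/* Unmount deepest-first; no dataset is left mounted under a parent. */
	for (i = 0; i < n; i++)
		printf("umount %s\n", mounts[i]);
	return (0);
}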