Lieven De Geyndt
2006-Sep-06 11:00 UTC
[zfs-discuss] How to destroy a pool which you can't import because it is in faulted state
When a pool is in a faulted state, you can't import it. Even -f fails. When you then decide to recreate the pool, you cannot execute zpool destroy, because the pool is not imported. -f does not work there either. Any idea how to get out of this situation?
Robert Milkowski
2006-Sep-06 11:47 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Hi. Just re-create it, or create a new pool from the old one's disks using the -f flag.
James C. McPherson
2006-Sep-06 11:53 UTC
[zfs-discuss] How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> When a pool is in a faulted state, you can't import it. Even -f fails.
> When you then decide to recreate the pool, you cannot execute zpool
> destroy, because the pool is not imported. -f does not work there either.
> Any idea how to get out of this situation?

Try something like

# zpool create -R /alternate_root newpoolname vdevlist

You might need to add a "-f", but try it without "-f" first.

cheers,
James C. McPherson
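A minimal sketch of this workaround, assuming the faulted pool was named "tank" and lived on hypothetical devices c1t0d0 and c1t1d0 (substitute the faulted pool's actual vdev list): creating a new pool over the same vdevs overwrites the old labels, after which the new pool can be destroyed normally.

# zpool create -R /mnt tank c1t0d0 c1t1d0
# zpool destroy tank

As James says, add -f only if the create is refused because the devices still appear to belong to the old pool.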
Lieven De Geyndt
2006-Sep-06 13:36 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
zpool create -R did the job. Thanks for the tip.

Is there a way to disable the auto-mount when you boot a system? The customer has some kind of poor man's cluster: two systems have access to an SE3510 with ZFS. System A was powered off as a test, and system B imported the pools. When system A rebooted, it tried to import its pools, so two systems were accessing the same pool. This probably caused the corruption in the pool. So how do I disable automount of ZFS pools?
James C. McPherson
2006-Sep-06 13:55 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> zpool create -R did the job. Thanks for the tip.
>
> Is there a way to disable the auto-mount when you boot a system? The
> customer has some kind of poor man's cluster: two systems have access
> to an SE3510 with ZFS. System A was powered off as a test, and system B
> imported the pools. When system A rebooted, it tried to import its
> pools, so two systems were accessing the same pool. This probably
> caused the corruption in the pool. So how do I disable automount of
> ZFS pools?

Oh heck .... PMC 0.0.0alpha again :(

How about

# zfs set mountpoint=none fsname

James C. McPherson
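Since the mountpoint property is inherited, setting it once on the pool's top-level dataset should cover every descendant. A minimal sketch, assuming a pool named "tank":

# zfs set mountpoint=none tank
# zfs get -r mountpoint tank

Note, as Frank points out below, that this only stops the filesystems from mounting; the pool itself is still imported at boot.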
Robert Milkowski
2006-Sep-06 14:17 UTC
[zfs-discuss] Re: Re: How to destroy a pool which you can't import
This could still corrupt the pool. The customer would probably have to write their own tool that imports a pool using libzfs without creating zpool.cache. Alternatively, remove zpool.cache just after the pool is imported; I'm not sure, but it should work.
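A minimal sketch of the second suggestion, assuming a pool named "tank" and the default cache location /etc/zfs/zpool.cache:

# zpool import -f tank
# rm /etc/zfs/zpool.cache

With the cache file removed, the host should not reopen the pool automatically at next boot; the pool stays imported for the current session.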
Lieven De Geyndt
2006-Sep-06 14:19 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
sorry guys ... RTFM did the job

[b]Legacy Mount Points[/b]

You can manage ZFS file systems with legacy tools by setting the mountpoint property to legacy. Legacy file systems must be managed through the mount and umount commands and the /etc/vfstab file. ZFS does not automatically mount legacy file systems on boot, and the ZFS mount and umount commands do not operate on datasets of this type. The following examples show how to set up and manage a ZFS dataset in legacy mode:

# zfs set mountpoint=legacy tank/home/eschrock
# mount -F zfs tank/home/eschrock /mnt

In particular, if you have set up separate ZFS /usr or /var file systems, you must indicate that they are legacy file systems. In addition, you must mount them by creating entries in the /etc/vfstab file. Otherwise, the system/filesystem/local service enters maintenance mode when the system boots.

To automatically mount a legacy file system on boot, you must add an entry to the /etc/vfstab file. The following example shows what the entry in the /etc/vfstab file might look like:

#device              device    mount   FS     fsck   mount     mount
#to mount            to fsck   point   type   pass   at boot   options
#
tank/home/eschrock   -         /mnt    zfs    -      yes       -

Note that the device-to-fsck and fsck-pass entries are set to -. This is because the fsck command is not applicable to ZFS file systems. For more information regarding data integrity and the lack of need for fsck in ZFS [...]
Frank Cusack
2006-Sep-06 16:55 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
On September 6, 2006 7:19:32 AM -0700 Lieven De Geyndt <lieven at sun.com> wrote:
> sorry guys ... RTFM did the job
>
> [b]Legacy Mount Points[/b]

That just means filesystems in the pool won't get mounted, not that the pool won't be imported.

-frank
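Frank's point is easy to verify, assuming a pool named "tank" whose datasets are all set to legacy or none:

# zpool list        (tank still shows up: the pool is imported)
# zfs mount         (prints nothing: no ZFS datasets are mounted)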
Lieven De Geyndt
2006-Sep-07 08:06 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I could delete the zpool.cache file during boot.
James C. McPherson
2006-Sep-07 08:55 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> So I can manage the file system mounts/automounts using the legacy
> option, but I can't manage the auto-import of the pools. Or I could
> delete the zpool.cache file during boot.

Doesn't this come back to a self-induced problem, namely that they are trying a "poor man's cluster"?

If you want cluster functionality then pay for a proper solution. If you can't afford a proper solution then you will *always* get hurt when you come up against a problem of your own making.

I saw this scenario *many* times while working in Sun's CPRE and PTS organisations.

Save yourself the hassle and do things right from the start.

James C. McPherson
Frank Cusack
2006-Sep-07 09:52 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
On September 7, 2006 6:55:48 PM +1000 "James C. McPherson" <James.C.McPherson at gmail.com> wrote:
> Doesn't this come back to a self-induced problem, namely that they are
> trying a "poor man's cluster"?
>
> If you want cluster functionality then pay for a proper solution. If
> you can't afford a proper solution then you will *always* get hurt when
> you come up against a problem of your own making.
>
> I saw this scenario *many* times while working in Sun's CPRE and PTS
> organisations.
>
> Save yourself the hassle and do things right from the start.

AIUI, there is no zfs cluster option today. SC3.2 (with HA-ZFS) is only in beta. So, it can't be done "right" from the start with zfs. [I'm not disagreeing with you, though.]

-frank
Lieven De Geyndt
2006-Sep-07 10:09 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
I know this is not supported. But we are trying to build a safe configuration until ZFS is supported in Sun Cluster. The customer did order Sun Cluster, but needs a workaround until the release date. And I think it must be possible to set up.
James C. McPherson
2006-Sep-07 11:44 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> I know this is not supported. But we are trying to build a safe
> configuration until ZFS is supported in Sun Cluster. The customer did
> order Sun Cluster, but needs a workaround until the release date. And
> I think it must be possible to set up.

So build them a configuration which works and is supported today, and design it so the migration plan which you also provide them makes it reasonably pain-free to move to HA-ZFS when SC3.2 is released.

James
Robert Milkowski
2006-Sep-07 12:46 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Hello James,

Thursday, September 7, 2006, 1:44:48 PM, you wrote:

JCM> Lieven De Geyndt wrote:
>> I know this is not supported. But we are trying to build a safe
>> configuration until ZFS is supported in Sun Cluster. The customer did
>> order Sun Cluster, but needs a workaround until the release date. And
>> I think it must be possible to set up.

JCM> So build them a configuration which works and is supported today, and
JCM> design it so the migration plan which you also provide them makes it
JCM> reasonably pain-free to move to HA-ZFS when SC3.2 is released.

Yep.

A few days ago I migrated two servers with 40TB of ZFS production data from a non-cluster config to SC with HA-ZFS with minimal downtime (one-node cluster, zpool import, add second node to the cluster, put the pools under SC). Basically it worked, though with some problems. Right now it works perfectly :)

--
Best regards,
Robert                  mailto:rmilkowski at task.gda.pl
                        http://milek.blogspot.com
James C. McPherson
2006-Sep-07 12:56 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Robert Milkowski wrote:
> Hello James,
>
> Thursday, September 7, 2006, 1:44:48 PM, you wrote:
>
> JCM> Lieven De Geyndt wrote:
>>> I know this is not supported. But we are trying to build a safe
>>> configuration until ZFS is supported in Sun Cluster. The customer did
>>> order Sun Cluster, but needs a workaround until the release date. And
>>> I think it must be possible to set up.
>
> JCM> So build them a configuration which works and is supported today, and
> JCM> design it so the migration plan which you also provide them makes it
> JCM> reasonably pain-free to move to HA-ZFS when SC3.2 is released.
>
> Yep.
>
> A few days ago I migrated two servers with 40TB of ZFS production data
> from a non-cluster config to SC with HA-ZFS with minimal downtime
> (one-node cluster, zpool import, add second node to the cluster, put
> the pools under SC). Basically it worked, though with some problems.
> Right now it works perfectly :)

Excellent!

James
Darren Dunham
2006-Sep-07 18:32 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
> Lieven De Geyndt wrote:
>> So I can manage the file system mounts/automounts using the legacy
>> option, but I can't manage the auto-import of the pools. Or I could
>> delete the zpool.cache file during boot.
>
> Doesn't this come back to a self-induced problem, namely that they are
> trying a "poor man's cluster"?

Hmm. I worry that something similar could occur in certain failure situations.

I know that VxVM stores the "autoimport" information on the disk itself. It sounds like ZFS doesn't, and it's only in the cache (is this correct?)

Let's imagine that I lose a motherboard on a SAN host and it crashes. To get things going, I import the pool on another host and run the apps while I repair the first one. The hardware guy comes in and swaps the motherboard, then lets the machine boot. While it boots, will it try to re-import the pool it had before it crashed? Will it succeed?

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
Eric Schrock
2006-Sep-07 18:50 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
On Thu, Sep 07, 2006 at 11:32:18AM -0700, Darren Dunham wrote:
>
> I know that VxVM stores the "autoimport" information on the disk
> itself. It sounds like ZFS doesn't, and it's only in the cache (is this
> correct?)

I'm not sure what 'autoimport' is, but ZFS always stores enough information on the disks to open the pool, provided all the devices (or at least one device from each toplevel vdev) can be scanned. The cache simply provides a list of known pools and their approximate configuration, so that we don't have to scan every device (and every file) on boot to know where pools are located.

It's important to distinguish between 'opening' a pool and 'importing' a pool. Opening a pool involves reading the data off disk and constructing the in-core representation of the pool. It doesn't matter if this data comes from the cache, from on-disk, or out of thin air.

Importing a pool is an intentional action which reconstructs the pool configuration from on-disk data, which it then uses to open the pool.

> Let's imagine that I lose a motherboard on a SAN host and it crashes.
> To get things going, I import the pool on another host and run the apps
> while I repair the first one. The hardware guy comes in and swaps the
> motherboard, then lets the machine boot. While it boots, will it try to
> re-import the pool it had before it crashed? Will it succeed?

Yes, it will open every pool that it has in the cache. Fundamentally, this is operator error. We have talked about storing the hostid of the last machine to open the pool to detect this case, but then we've also talked about ways of sharing snapshots from the same pool read-only to multiple hosts. So it's not clear that "hostid != self" is a valid check when opening a pool, and it would also make failback somewhat more complicated.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
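Eric's opening/importing distinction is visible from the command line. Assuming a pool named "tank" that has been exported:

# zpool import          (scans attached devices and lists pools found on-disk)
# zpool import tank     (explicit import: reconstructs the config from disk
                         and records it in zpool.cache)

On subsequent boots the pool is simply opened from the cache, with no device scan and no import step.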
Frank Cusack
2006-Sep-07 20:10 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
On September 7, 2006 11:50:43 AM -0700 Eric Schrock <eric.schrock at sun.com> wrote:
> On Thu, Sep 07, 2006 at 11:32:18AM -0700, Darren Dunham wrote:
>>
>> Let's imagine that I lose a motherboard on a SAN host and it crashes.
>> To get things going, I import the pool on another host and run the apps
>> while I repair the first one. The hardware guy comes in and swaps the
>> motherboard, then lets the machine boot. While it boots, will it try to
>> re-import the pool it had before it crashed? Will it succeed?
>
> Yes, it will open every pool that it has in the cache. Fundamentally,
> this is operator error.

That is something zfs needs to address.

What if I simply lose power to one of the hosts, and then power is restored?

-frank
Eric Schrock
2006-Sep-07 20:22 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
On Thu, Sep 07, 2006 at 01:09:47PM -0700, Frank Cusack wrote:
>
> That is something zfs needs to address.
>
> What if I simply lose power to one of the hosts, and then power is
> restored?

Then use a layered clustering product - that's what this is for. For example, SunCluster doesn't use the cache file in the traditional way, and will make sure the host coming back up doesn't access the pool before it is able to do so.

If you are going to 'roll your own' clustering, then you will need to come up with some appropriate conversation between the two hosts to know when it is OK to come completely up. You can use alternate root pools (with '/' they become effectively temporary) and allow the host to come all the way up, have the failback conversation with the other host, and only then explicitly import the pool.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
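A minimal sketch of the alternate-root trick Eric describes, assuming a pool named "tank": importing with -R keeps the pool out of zpool.cache, so an unexpected reboot of this host will not reopen it automatically.

# zpool import -R / tank

The pool behaves normally for the current session, but each boot must then explicitly re-import it after the failback conversation with the other host.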
Darren Dunham
2006-Sep-07 20:52 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
>> I know that VxVM stores the "autoimport" information on the disk
>> itself. It sounds like ZFS doesn't, and it's only in the cache (is this
>> correct?)
>
> I'm not sure what 'autoimport' is, but ZFS always stores enough
> information on the disks to open the pool, provided all the devices (or
> at least one device from each toplevel vdev) can be scanned. The cache
> simply provides a list of known pools and their approximate
> configuration, so that we don't have to scan every device (and every
> file) on boot to know where pools are located.

Autoimport is the mechanism that VxVM currently uses for deciding to import a diskgroup at boot time.

When the host launches, it reads a 'hostid' from a config file (usually, but not always, the same as the hostname). When scanning groups on visible disks, two of the visible items are a 'hostid' and an 'autoimport' flag. If the autoimport flag is set, the hostid in the group matches the hostid in the config file, and the group is otherwise healthy, the group is imported.

Any cluster solution would leave the autoimport flag off, so that any use of the group was only through the action of the cluster. If the machine were to boot unexpectedly, then with or without the cluster software it would not try to import the group. (There is also a further mechanism of an 'imported' flag. You must force the import if the group appears to already be imported elsewhere.)

> It's important to distinguish between 'opening' a pool and 'importing'
> a pool. Opening a pool involves reading the data off disk and
> constructing the in-core representation of the pool. It doesn't matter
> if this data comes from the cache, from on-disk, or out of thin air.
>
> Importing a pool is an intentional action which reconstructs the pool
> configuration from on-disk data, which it then uses to open the pool.
>
> Yes, it will open every pool that it has in the cache. Fundamentally,
> this is operator error. We have talked about storing the hostid of the
> last machine to open the pool to detect this case, but then we've also
> talked about ways of sharing snapshots from the same pool read-only to
> multiple hosts. So it's not clear that "hostid != self" is a valid
> check when opening a pool, and it would also make failback somewhat
> more complicated.

What are the problems that you see with that check? It appears similar to what VxVM has been using (although they do not use the `hostid` as the field), and that appears to have worked well in most cases.

I don't know what issues appear with multiple hosts. My worry is that an accidental import would allow two machines to update the uberblock and other metadata to the point that you get corruption. If the "sharing" hosts get read-only access and never touch the metadata, then (for me) the hostname check becomes much less relevant. If they want to import it, fine... but don't corrupt anything.

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
Eric Schrock
2006-Sep-07 21:09 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
On Thu, Sep 07, 2006 at 01:52:33PM -0700, Darren Dunham wrote:
>
> What are the problems that you see with that check? It appears similar
> to what VxVM has been using (although they do not use the `hostid` as
> the field), and that appears to have worked well in most cases.
>
> I don't know what issues appear with multiple hosts. My worry is that
> an accidental import would allow two machines to update the uberblock
> and other metadata to the point that you get corruption. If the
> "sharing" hosts get read-only access and never touch the metadata, then
> (for me) the hostname check becomes much less relevant. If they want to
> import it, fine... but don't corrupt anything.

I agree that it's a useful check against accidental mistakes, as long as we're not talking about some built-in clustering behavior. We just haven't thought through what the experience should be. In particular, there are some larger issues in relation to FMA that need to be addressed. For example, we would want the pool to show up as faulted, but there needs to be a consistent way to 'repair' such a pool. Should it be an extension of 'zpool clear', or should it be done through 'fmadm repair'? Or even 'zpool export' followed by 'zpool import'? We're going to have to answer these questions for the next phase of ZFS/FMA interaction, so maybe it would be a good time to think about this problem as well.

And of course, you'll always be able to shoot yourself in the foot if you try, either by 'repairing' a pool that's actively shared, or by force-importing a pool that's actively in use somewhere else.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
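For reference, the existing interfaces Eric is weighing look like this (the pool name "tank" and the fault UUID are placeholders):

# zpool clear tank        (clears device error counts within a pool)
# fmadm repair <uuid>     (marks a previously diagnosed FMA fault as repaired)

Which of these, or a plain export/import cycle, should clear a "pool opened elsewhere" fault was the open question at the time.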
Sanjay Nadkarni
2006-Sep-07 22:04 UTC
[zfs-discuss] Re: How to destroy a pool which you can't import
Darren Dunham wrote:
> Autoimport is the mechanism that VxVM currently uses for deciding to
> import a diskgroup at boot time.
>
> When the host launches, it reads a 'hostid' from a config file (usually,
> but not always, the same as the hostname). When scanning groups on
> visible disks, two of the visible items are a 'hostid' and an
> 'autoimport' flag. If the autoimport flag is set, the hostid in the
> group matches the hostid in the config file, and the group is otherwise
> healthy, the group is imported.
>
> Any cluster solution would leave the autoimport flag off, so that any
> use of the group was only through the action of the cluster. If the
> machine were to boot unexpectedly, then with or without the cluster
> software it would not try to import the group. (There is also a further
> mechanism of an 'imported' flag. You must force the import if the group
> appears to already be imported elsewhere.)

SVM puts a SCSI-2 reservation on the drives that are part of a diskset. However, in a clustered mode this is not asserted. In any case, putting a reservation is not the best option: it does not work on SATA drives :-)

-Sanjay
Anton B. Rang
2006-Sep-08 01:07 UTC
[zfs-discuss] Re: Re: How to destroy a pool which you can't import
A determined administrator can always get around any checks and cause problems. We should do our very best to prevent data loss, though! This case is particularly bad, since simply booting a machine can permanently damage the pool.

And why would we want a pool imported on another host, or not marked as belonging to this host, to show up as faulted? That seems an odd use of the word. Unavailable, perhaps, but not faulted.
Darren Dunham
2006-Sep-08 01:31 UTC
[zfs-discuss] Re: Re: How to destroy a pool which you can't import
> And why would we want a pool imported on another host, or not marked
> as belonging to this host, to show up as faulted? That seems an odd
> use of the word. Unavailable, perhaps, but not faulted.

It certainly changes some semantics...

In a UFS/VxVM world, I still have filesystems referenced in /etc/vfstab. I might expect (although I have seen counterexamples) that if my VxVM group doesn't autoimport, then obviously my filesystems don't mount, and that will halt startup until I deal with the problem. This is often a good thing.

With ZFS and non-legacy mounts, I don't really have a statement that the ZFS filesystem /path/to/critical/resource must be mounted at boot time other than the configuration of the pool. I guess I need to make some more explicit dependencies for services if I want some of them to notice. (Unfortunately, creating/removing dependencies takes a bit more work than maintaining a vfstab today.)

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
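A minimal sketch of such an explicit SMF dependency, assuming a hypothetical service svc:/site/myapp that should wait for local filesystems. Note that this only expresses a dependency on the filesystem/local service, not on one particular dataset, which is exactly the granularity Darren finds missing:

# svccfg -s svc:/site/myapp
svc:/site/myapp> addpg fs-local dependency
svc:/site/myapp> setprop fs-local/entities = fmri: svc:/system/filesystem/local
svc:/site/myapp> setprop fs-local/grouping = astring: require_all
svc:/site/myapp> setprop fs-local/restart_on = astring: none
svc:/site/myapp> setprop fs-local/type = astring: service
svc:/site/myapp> end
# svcadm refresh svc:/site/myapp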
Eric Schrock
2006-Sep-08 01:34 UTC
[zfs-discuss] Re: Re: How to destroy a pool which you can't import
On Thu, Sep 07, 2006 at 06:07:40PM -0700, Anton B. Rang wrote:
>
> And why would we want a pool imported on another host, or not marked
> as belonging to this host, to show up as faulted? That seems an odd
> use of the word. Unavailable, perhaps, but not faulted.

That's FMA terminology, and besides wanting to stay within the same framework, I believe it is correct. If you have booted a machine which claims to be the owner of a pool, only to find that it has since been actively opened on another host, this is administrator misconfiguration. As such, the pool is faulted, and an FMA message explaining what has happened, along with a link to a more detailed knowledge article explaining how to fix it, will be generated. The term 'faulted' is specific FMA terminology, and carries with it many desired semantics (such as showing up in the FMA resource cache).

Silently ignoring failure in this case is not an option. If you want this silent behavior, you should be using some combination of clustering software to provide higher level abstractions of ownership besides 'zpool.cache'.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Eric Schrock
2006-Sep-08 02:21 UTC
[zfs-discuss] Re: Re: How to destroy a pool which you can't import
On Thu, Sep 07, 2006 at 06:31:30PM -0700, Darren Dunham wrote:
>
> It certainly changes some semantics...
>
> In a UFS/VxVM world, I still have filesystems referenced in /etc/vfstab.
> I might expect (although I have seen counterexamples) that if my VxVM
> group doesn't autoimport, then obviously my filesystems don't mount, and
> that will halt startup until I deal with the problem. This is often a
> good thing.
>
> With ZFS and non-legacy mounts, I don't really have a statement that the
> ZFS filesystem /path/to/critical/resource must be mounted at boot time
> other than the configuration of the pool. I guess I need to make some
> more explicit dependencies for services if I want some of them to
> notice. (Unfortunately, creating/removing dependencies takes a bit more
> work than maintaining a vfstab today.)

Currently, a faulted pool will not prevent you from coming up (that is, filesystem/* services will continue to come up). There are already some folks thinking about how failed /etc/vfstab mounts should affect boot (not everyone wants it to fail coming up). Similar thought should probably be given to what it means for a faulted pool and/or dataset.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock