I have a raidz1 tank of 5x 640 GB hard drives on my newly installed
OpenSolaris 2009.06 system. I did a zpool export tank and the process
has been running for 3 hours now taking up 100% CPU.

When I do a zfs list tank it's still shown as mounted. What's going on
here? Should it really be taking this long?

$ zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  1.10T  1.19T  36.7K  /tank

$ zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0

errors: No known data errors
fyleow wrote:
> I have a raidz1 tank of 5x 640 GB hard drives on my newly installed
> OpenSolaris 2009.06 system. I did a zpool export tank and the process
> has been running for 3 hours now taking up 100% CPU.
>
> When I do a zfs list tank it's still shown as mounted. What's going on
> here? Should it really be taking this long?
>
> [zfs list and zpool status output trimmed]

Can you run the following command and post the output:

# echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k

Thanks,
George
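For readers unfamiliar with mdb, here is roughly what that pipeline
does; the dcmds are standard mdb debugger commands, the annotations are
mine, and the command needs to be run as root:

  # mdb -k attaches the modular debugger to the running kernel,
  # then the dcmd pipeline does the following:
  #   ::pgrep zpool    find the proc_t of any process named "zpool"
  #   ::walk thread    walk the kernel threads of that process
  #   ::findstack -v   print each thread's stack, with function arguments
  echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k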
> fyleow wrote:
> > I have a raidz1 tank of 5x 640 GB hard drives on my newly installed
> > OpenSolaris 2009.06 system. I did a zpool export tank and the process
> > has been running for 3 hours now taking up 100% CPU.
> > [rest of quote trimmed]
>
> Can you run the following command and post the output:
>
> # echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
>
> Thanks,
> George

Here's what I get:

# echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
stack pointer for thread ffffff00f717b020: ffffff0003684cf0
  ffffff0003684d60 restore_mstate+0x129(fb8568ee)
fyleow wrote:
> [quote trimmed]
>
> Here's what I get:
>
> # echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
> stack pointer for thread ffffff00f717b020: ffffff0003684cf0
>   ffffff0003684d60 restore_mstate+0x129(fb8568ee)

It might be best to generate a live crash dump so we can see what might
be hanging up. You can also try running the command above multiple times,
and even run 'pstack <pid of zpool>' to see if we get additional
information.

Thanks,
George
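A minimal sketch of those suggestions, assuming a dump device is already
configured (dumpadm will show the current setup) and everything is run
as root; the three samples and the 30-second interval are arbitrary
choices of mine:

  savecore -L                # write a live crash dump of the running kernel
  pstack `pgrep -x zpool`    # user-level stack of the stuck zpool process

  # sample the kernel stacks a few times to see whether they change
  for i in 1 2 3; do
      echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
      sleep 30
  done

The live dump lands in the savecore directory (typically
/var/crash/<hostname>), which is what you would send along for analysis.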