Hi, I built a kernel from source yesterday and tried to run the new kernel, and now it's giving me a "no dataset available" error. My non-root user's home directory is mounted on a ZFS filesystem and now I can't get to it. 'zpool status' showed that my pool needed to be upgraded, so I did that, but after the upgrade I still get "no dataset available". I was running Solaris Express B62 before and pulled the source from Mercurial yesterday. The B62 install is a live-upgraded partition and it's the 2nd (and only active) slice. Please HELP!
'zpool import pool' (my pool is named 'pool') returns:

    cannot import 'pool': no such pool available

'zpool list' shows that the pool named 'pool' is there, but with UNKNOWN health. All I did was run 'zpool upgrade' because that's what it asked me to do... nothing more. Anyone have any idea?
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:

> zpool import pool (my pool is named 'pool') returns
>
>     cannot import 'pool': no such pool available

What does 'zpool import' by itself show you? It should give you a list of available pools to import.

Regards,
markm
'zpool import' by itself says:

    no pools available to import

yet when I do 'zpool list' it shows my pool with health UNKNOWN.
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:

> zpool list
>
> it shows my pool with health UNKNOWN

That means it's already imported. What's the output of 'zpool status'?

Regards,
markm
  pool: pool
 state: UNKNOWN
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        UNKNOWN      0     0     0
          c0d0s5    UNKNOWN      0     0     0
          c0d0s6    UNKNOWN      0     0     0
          c0d0s4    UNKNOWN      0     0     0
          c0d0s3    UNKNOWN      0     0     0

errors: No known data errors
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:

>         NAME        STATE     READ WRITE CKSUM
>         pool        UNKNOWN      0     0     0
>           c0d0s5    UNKNOWN      0     0     0
>           c0d0s6    UNKNOWN      0     0     0
>           c0d0s4    UNKNOWN      0     0     0
>           c0d0s3    UNKNOWN      0     0     0

OK, so you've striped across slices of a single disk, but the disk appears unavailable. Does 'format' see it? Are there any error messages on the console or in the system log? Does 'svcs' show anything offline?

The reason you're getting "no datasets available" is that the disk cannot be accessed. Once you figure out how to make the disk accessible again, your pool will come back, and your datasets will be restored.

Regards,
markm
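For reference, the kinds of checks markm is suggesting might look roughly like this on a Solaris Express / OpenSolaris box. This is only an illustrative sketch; the device name c0d0 comes from the thread, and exact output and log locations can vary by build.

    # Does the disk show up at all? (list disks and exit)
    echo | format

    # Anything suspicious in recent kernel messages or the system log?
    dmesg | tail -50
    tail -50 /var/adm/messages

    # Any recorded FMA error events?
    fmdump -eV | tail -50

    # Any services offline or in maintenance?
    svcs -xv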
AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 6334 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
Specify disk (enter its number): 0
selecting c0d0
Controller working list found
[disk formatted, defect list found]
Warning: Current Disk has mounted partitions.
/dev/dsk/c0d0s0 is in use for live upgrade /. Please see ludelete(1M).
/dev/dsk/c0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c0d0s3 is part of active ZFS pool pool. Please see zpool(1M).
/dev/dsk/c0d0s4 is part of active ZFS pool pool. Please see zpool(1M).
/dev/dsk/c0d0s5 is part of active ZFS pool pool. Please see zpool(1M).
/dev/dsk/c0d0s6 is part of active ZFS pool pool. Please see zpool(1M).
/dev/dsk/c0d0s7 is currently mounted on /. Please see umount(1M).

The disk is available... I'm using it.
Sean McGrath - Sun Microsystems Ireland
2007-Jul-13 15:38 UTC
[zfs-discuss] zfs "no dataset available"
Kwang-Hyun Baek stated:

< AVAILABLE DISK SELECTIONS:
<        0. c0d0 <DEFAULT cyl 6334 alt 2 hd 255 sec 63>
<           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
< Specify disk (enter its number): 0
< selecting c0d0
< Controller working list found
< [disk formatted, defect list found]
< Warning: Current Disk has mounted partitions.
< /dev/dsk/c0d0s0 is in use for live upgrade /. Please see ludelete(1M).
< /dev/dsk/c0d0s1 is currently used by swap. Please see swap(1M).
< /dev/dsk/c0d0s3 is part of active ZFS pool pool. Please see zpool(1M).
< /dev/dsk/c0d0s4 is part of active ZFS pool pool. Please see zpool(1M).
< /dev/dsk/c0d0s5 is part of active ZFS pool pool. Please see zpool(1M).
< /dev/dsk/c0d0s6 is part of active ZFS pool pool. Please see zpool(1M).
< /dev/dsk/c0d0s7 is currently mounted on /. Please see umount(1M).
<
< The disk is available... I'm using it.

What does:

    prtvtoc /dev/rdsk/c0d0s0

say? Just making sure you don't have any overlapping partitions.

And possibly (Mark would probably ask this next :) export and import the pool again:

    zpool export <pool>
    zpool import <pool>

Regards,
--
Sean.
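As an aside, one quick way to eyeball prtvtoc output for overlapping slices is to print each slice's first and last sector sorted by starting sector. The pipeline below is just an illustrative sketch, not something from the thread; note that slice 2 is the whole-disk "backup" slice and is expected to cover everything.

    # slice number, first sector, last sector, sorted by first sector
    prtvtoc /dev/rdsk/c0d0s0 | awk '$1 !~ /^\*/ {print $1, $4, $6}' | sort -n -k2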
root@solaris-devx:/# prtvtoc /dev/rdsk/c0d0s0
* /dev/rdsk/c0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*    6336 cylinders
*    6334 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*      4257225     16065   4273289
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      0    00    43471890  24370605  67842494
       1      3    01       48195   4209030   4257224
       2      5    00           0 101755710 101755709
       3      0    00     4273290   8434125  12707414
       4      0    00    22956885  10249470  33206354
       5      0    00    33206355  10265535  43471889
       6      0    00    12707415  10249470  22956884
       7      2    00    67842495  33913215 101755709   /
       8      1    01           0     16065     16064
       9      9    01       16065     32130     48194

I will try that. (I need to reboot.)
Okay. Now it says the pool cannot be imported. :*( Is there anything I can do to fix it?

# zpool import
  pool: pool
    id: 3508905099046791975
 state: UNKNOWN
action: The pool cannot be imported due to damaged devices or data.
config:

        pool        UNKNOWN
          c0d0s5    UNKNOWN
          c0d0s6    UNKNOWN
          c0d0s4    UNKNOWN
          c0d0s3    UNKNOWN
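For anyone hitting a similar wall: two standard zpool options that are sometimes worth trying before giving up on an un-importable pool are a forced import and an import by the numeric pool id shown above. This is only a suggestion, not a guaranteed fix; whether either helps depends on the underlying damage.

    # force the import even if the pool was not cleanly exported
    zpool import -f pool

    # or import by the numeric id printed by 'zpool import'
    zpool import -f 3508905099046791975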
Is there any way to fix this? I actually tried to destroy the pool and create a new one, but it doesn't let me. Whenever I try, I get the following error:

root@solaris-devx:/var/crash# zpool create -f pool c0d0s5
internal error: No such process
Abort (core dumped)

After that, zpool list still shows the pool... and if I try to remove it, I get a similar core dump...

root@solaris-devx:/var/crash# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH   ALTROOT
pool   4.88G  74.5K  4.87G   0%  UNKNOWN  -
root@solaris-devx:/var/crash# zpool destroy pool
internal error: No such process
Abort (core dumped)

----PLEASE HELP!!!!
On Mon, 16 Jul 2007, Kwang-Hyun Baek wrote:

> Is there any way to fix this? I actually tried to destroy the pool and
> create a new one, but it doesn't let me. Whenever I try, I get the
> following error:
>
> root@solaris-devx:/var/crash# zpool create -f pool c0d0s5
> internal error: No such process
> Abort (core dumped)

What does 'uname -a' show?

Regards,
markm
# uname -a
SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc

======================================================

What's more interesting is that the ZFS version shows as 8... does that even exist?

root@solaris-devx:/# zpool upgrade
This system is currently running ZFS version 6.

The following pools are formatted using a newer software version and
cannot be accessed on the current system.

VER  POOL
---  ------------
  8  pool

Use 'zpool upgrade -v' for a list of available versions and their
associated features.
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote:

> # uname -a
> SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
>
> What's more interesting is that the ZFS version shows as 8... does that
> even exist?

Yes, version 8 was created to support delegated administration, and is in OpenSolaris. The full list is available in the source:

http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_main.c#3365

> root@solaris-devx:/# zpool upgrade
> This system is currently running ZFS version 6.
>
> The following pools are formatted using a newer software version and
> cannot be accessed on the current system.

So what happened is that you created a version 8 pool and then did something (reinstalled? fell back to a previous LU root?) that got you back to version 6 software. Once you upgrade to version 8 software, you will regain full access to your pool.

Regards,
markm
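For what it's worth, a minimal sketch of the path markm describes, assuming you can boot an environment whose ZFS kernel modules and userland both support pool version 8 (for example, a consistently built copy of the current OpenSolaris bits):

    # What does the running software support, and what version are the pools at?
    zpool upgrade -v        # versions this software understands
    zpool upgrade           # versions of the pools it can see

    # Once the running bits support version 8, the pool should import normally:
    zpool import pool
    zfs list                # the datasets should be visible again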