Hi experts,

I installed Solaris 10 6/06 x86 on VMware 5.5 and administer ZFS both from the command line and from the web console; everything worked fine. The web admin is more convenient, since I don't need to type commands. But after my computer lost power and restarted, I have a problem with the ZFS web admin (https://hostname:6789/zfs).

The problem: when I try to create a new storage pool from the web console, it always shows "No items found", even though there are in fact 10 hard disks available.

I can still use the zpool/zfs command line to create new pools, file systems, and volumes; the command-line way works quickly and correctly.

I have tried restarting the service (smcwebserver), to no effect.

Has anyone seen this before? Is it a bug?

Regards,
Bill
Bill wrote:
> The problem is, when I try to create a new storage pool from the web
> console, it always shows "No items found", even though there are in
> fact 10 hard disks available.

ZFS may believe the disks are in use. What is the output of (as root) "/usr/lib/zfs/availdevs -d"?

Thanks,
Steve
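(For reference, the in-use check alluded to here goes through libdiskmgt. Below is a minimal sketch of querying it directly for one device, assuming the OpenSolaris libdiskmgt interface -- dm_inuse() and DM_WHO_ZPOOL as I recall them from the headers, built with -ldiskmgt. Treat the exact signature as an assumption, not the actual availdevs code.

#include <stdio.h>
#include <stdlib.h>
#include <libdiskmgt.h>

int main(int argc, char **argv) {
    char *msg = NULL;   /* libdiskmgt fills in a reason when in use */
    int err = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/dsk/cXtYdZsN\n", argv[0]);
        return 1;
    }
    /* Ask whether swap, dump, a mounted fs, an existing zpool, etc.
     * already claims this device -- the kind of test that can make a
     * disk invisible to the pool-creation screen. */
    if (dm_inuse(argv[1], &msg, DM_WHO_ZPOOL, &err)) {
        printf("in use: %s\n", msg ? msg : "(no detail)");
        free(msg);
    } else if (err != 0) {
        fprintf(stderr, "dm_inuse failed: error %d\n", err);
    } else {
        printf("not in use\n");
    }
    return 0;
}

If every disk reported "not in use" yet the console still listed nothing, that would point at the device lister itself rather than the in-use logic -- which is where the rest of the thread ends up.)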
When I run the command, it dumps core:

# /usr/lib/zfs/availdevs -d
Segmentation Fault - core dumped
# /usr/lib/zfs/availdevs -d
Segmentation Fault - core dumped
# pstack core
core 'core' of 2350: ./availdevs -d
----------------- lwp# 1 / thread# 1 --------------------
 d2d64b3c strlen   (0) + c
 d2fa2f82 get_device_name (8063400, 0, 804751c, 1c) + 3e
 d2fa3015 get_disk (8063400, 0, 804751c, 8067430) + 4d
 d2fa3bbf dmgt_avail_disk_iter (8050ddb, 8047554) + a1
 08051305 main     (2, 8047584, 8047590) + 110
 08050ce6 ???????? (2, 80476b0, 80476bc, 0, 80476bf, 80476f9)
----------------- lwp# 2 / thread# 2 --------------------
 d2de1a81 _door_return (0, 0, 0, 0) + 31
 d29f0d3d door_create_func (0) + 29
 d2ddf93e _thr_setup (d2992400) + 4e
 d2ddfc20 _lwp_start (d2992400, 0, 0, d2969ff8, d2ddfc20, d2992400)
----------------- lwp# 3 / thread# 3 --------------------
 d2ddfc99 __lwp_park (809afc0, 809afd0, 0) + 19
 d2dda501 cond_wait_queue (809afc0, 809afd0, 0, 0) + 3b
 d2dda9fa _cond_wait (809afc0, 809afd0) + 66
 d2ddaa3c cond_wait (809afc0, 809afd0) + 21
 d2a92bc8 subscriber_event_handler (80630c0) + 3f
 d2ddf93e _thr_setup (d2750000) + 4e
 d2ddfc20 _lwp_start (d2750000, 0, 0, d2865ff8, d2ddfc20, d2750000)
----------------- lwp# 4 / thread# 4 --------------------
 d2de0cd5 __pollsys (d274df78, 1, 0, 0) + 15
 d2d8a6d2 poll     (d274df78, 1, ffffffff) + 52
 d2d0ee1e watch_mnttab (0) + af
 d2ddf93e _thr_setup (d2750400) + 4e
 d2ddfc20 _lwp_start (d2750400, 0, 0, d274dff8, d2ddfc20, d2750400)
----------------- lwp# 5 / thread# 5 --------------------
 d2ddfc99 __lwp_park (8064ef0, 8064f00, 0) + 19
 d2dda501 cond_wait_queue (8064ef0, 8064f00, 0, 0) + 3b
 d2dda9fa _cond_wait (8064ef0, 8064f00) + 66
 d2ddaa3c cond_wait (8064ef0, 8064f00) + 21
 d2a92bc8 subscriber_event_handler (8064be0) + 3f
 d2ddf93e _thr_setup (d2750800) + 4e
 d2ddfc20 _lwp_start (d2750800, 0, 0, d24edff8, d2ddfc20, d2750800)
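Reading the first thread's stack: strlen() was called with argument 0, i.e. a NULL pointer, from get_device_name() while dmgt_avail_disk_iter() was walking the available disks. So some disk entry -- plausibly one left stale by the unclean shutdown -- has no name string, and availdevs dereferences it without a check, which would explain both the core dump and the empty "No items found" list in the web console. Here is a minimal sketch of that failure mode and the obvious defensive fix; the lookup function and both wrappers are hypothetical stand-ins, not the actual availdevs source.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the attribute lookup that can apparently
 * return NULL for a stale device entry. */
static const char *lookup_disk_name(int stale) {
    return stale ? NULL : "c0t1d0";
}

/* Mirrors the crash in the pstack: strdup() calls strlen(NULL). */
static char *get_device_name_unsafe(int stale) {
    return strdup(lookup_disk_name(stale));
}

/* Defensive variant: a nameless entry is skipped, not dereferenced. */
static char *get_device_name_safe(int stale) {
    const char *name = lookup_disk_name(stale);
    return (name == NULL) ? NULL : strdup(name);
}

int main(void) {
    /* A good entry works either way. */
    char *n = get_device_name_unsafe(0);
    printf("good entry  -> %s\n", n);
    free(n);

    /* get_device_name_unsafe(1) would die with "Segmentation Fault -
     * core dumped" and strlen(0) on top of the stack, exactly as in
     * the pstack above.  The safe variant just skips the entry. */
    n = get_device_name_safe(1);
    printf("stale entry -> %s\n", n ? n : "(skipped)");
    free(n);
    return 0;
}

Until the binary handles that case, it may be worth cleaning up stale device links (for example with devfsadm -C) so the iterator no longer encounters a nameless entry; whether that clears this particular core dump is untested here.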