On an up-to-date Solaris 10 11/06 with Sun Cluster 3.2 and iSCSI-backed
DID devices, zpool dumps core on creation if I try to use a DID device.

Using the underlying device works, and this might not be supported
(though I don't know), but I thought you would probably prefer to see
the error than not (this is just a test setup, and therefore we don't
have support for it).
bash-3.00# scdidadm -l
1 peon:/dev/rdsk/c0t1d0 /dev/did/rdsk/d1
2 peon:/dev/rdsk/c0t0d0 /dev/did/rdsk/d2
3 peon:/dev/rdsk/c0t2d0 /dev/did/rdsk/d3
6 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B69d0 /dev/did/rdsk/d6
7 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B6Ed0 /dev/did/rdsk/d7
8 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B88d0 /dev/did/rdsk/d8
9 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B85d0 /dev/did/rdsk/d9
10 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B83d0 /dev/did/rdsk/d10
11 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B86d0 /dev/did/rdsk/d11
12 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B87d0 /dev/did/rdsk/d12
13 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B84d0 /dev/did/rdsk/d13
bash-3.00# zpool create wibble /dev/did/dsk/d12
free(fe726420): invalid or corrupted buffer
stack trace:
libumem.so.1'?? (0xff24b460)
libCrun.so.1'__1c2k6Fpv_v_+0x4
libCstd_isa.so.1'__1cDstdMbasic_string4Ccn0ALchar_traits4Cc__n0AJallocator4Cc___2G6Mrk1_r1_+0xb8
libCstd.so.1'__1cH__rwstdNlocale_vector4nDstdMbasic_string4Ccn0BLchar_traits4Cc__n0BJallocator4Cc_____Gresize6MIn0E__p3_+0xc4
libCstd.so.1'__1cH__rwstdKlocale_imp2t5B6MII_v_+0xc4
libCstd.so.1'__1cDstdGlocaleEinit6F_v_+0x44
libCstd.so.1'__1cDstdNbasic_istream4Cwn0ALchar_traits4Cw___2t6Mn0AIios_baseJEmptyCtor__v_+0x84
libCstd.so.1'?? (0xfe57b2b8)
libCstd.so.1'?? (0xfe57b994)
libCstd.so.1'_init+0x1e0
ld.so.1'?? (0xff3bfea8)
ld.so.1'?? (0xff3cca04)
ld.so.1'_elf_rtbndr+0x10
libCrun.so.1'?? (0xfe46a93c)
libCrun.so.1'__1cH__CimplKcplus_init6F_v_+0x48
libCstd_isa.so.1'_init+0xc8
ld.so.1'?? (0xff3bfea8)
ld.so.1'?? (0xff3c5318)
ld.so.1'?? (0xff3c5474)
ld.so.1'dlopen+0x64
libmeta.so.1'sdssc_bind_library+0x88
libdiskmgt.so.1'?? (0xff2b092c)
libdiskmgt.so.1'?? (0xff2aa6b4)
libdiskmgt.so.1'?? (0xff2aa42c)
libdiskmgt.so.1'dm_get_stats+0x12c
libdiskmgt.so.1'dm_get_slice_stats+0x44
libdiskmgt.so.1'dm_inuse+0x74
zpool'check_slice+0x20
zpool'check_disk+0x144
zpool'check_device+0x4c
zpool'check_in_use+0x108
zpool'check_in_use+0x174
zpool'make_root_vdev+0x3c
zpool'?? (0x1321c)
zpool'main+0x130
zpool'_start+0x108
Abort (core dumped)
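For comparison, a sketch of the fallback mentioned above: creating the
pool directly on the underlying device that d12 maps to (per the
scdidadm output) works and does not trigger the crash.

bash-3.00# zpool create wibble c1t0100000CF1F459EE00002A0045AF6B87d0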
Ceri
--
That must be wonderful! I don't understand it at all.
-- Moliere
Hi Ceri,

I just saw your mail today. I'm replying in case you haven't found a
solution. This is:

6475304 zfs core dumps when trying to create new spool using "did" device

The workaround suggests: set the environment variable NOINUSE_CHECK=1,
and the problem does not occur.

Thanks,
Zoram

Ceri Davies wrote:
> On an up to date Solaris 10 11/06 with Sun Cluster 3.2 and iSCSI backed
> did devices, zpool dumps core on creation if I try to use a did device.
> [...]

--
Zoram Thanga::Sun Cluster Development::http://blogs.sun.com/zoram
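For reference, a minimal sketch of the workaround as applied to the
command from the original report; the variable tells libdiskmgt to skip
its in-use checking, which is the code path that crashes in the stack
trace above (check_in_use -> dm_inuse -> dm_get_stats).

bash-3.00# NOINUSE_CHECK=1 zpool create wibble /dev/did/dsk/d12

Setting the variable only for the one command, rather than exporting it,
keeps the in-use safety checks enabled for everything else in the shell.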
Hello Zoram,
Tuesday, January 23, 2007, 11:27:48 AM, you wrote:
ZT> Hi Ceri,
ZT>
ZT> I just saw your mail today. I'm replying in case you haven't found a
ZT> solution. This is:
ZT>
ZT> 6475304 zfs core dumps when trying to create new spool using "did" device
ZT>
ZT> The workaround suggests: set the environment variable NOINUSE_CHECK=1,
ZT> and the problem does not occur.
Of course the question is why use ZFS over DID?
However it should not have core dumped.
ps. Zoram - nice to see you here :)
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
On Tue, Jan 23, 2007 at 03:57:48PM +0530, Zoram Thanga wrote:
> Hi Ceri,
>
> I just saw your mail today. I'm replying in case you haven't found a
> solution. This is:
>
> 6475304 zfs core dumps when trying to create new spool using "did" device
>
> The workaround suggests: set the environment variable NOINUSE_CHECK=1,
> and the problem does not occur.

Hi Zoram, that's great, thanks.

Ceri

--
That must be wonderful! I don't understand it at all.
-- Moliere
On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote:
> Of course the question is why use ZFS over DID?

Actually the question is probably: why shouldn't I? I can fall back to
the real device name, but d8 is a lot easier to remember than
c1t0100000CF1F459EE00002A0045AF6B6Ed0, and it has the advantage of being
guaranteed to be the same across all nodes.

What's the disadvantage?

> However it should not have core dumped.

Yep, that's why I posted :)

Ceri

--
That must be wonderful! I don't understand it at all.
-- Moliere
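As an illustration of the cross-node guarantee, scdidadm -L lists the
DID mappings for every cluster node rather than just the local one. A
sketch, assuming a hypothetical second node named peon2 with its own
controller path to the same LUN:

bash-3.00# scdidadm -L | grep '/d8$'
8 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B88d0 /dev/did/rdsk/d8
8 peon2:/dev/rdsk/c2t0100000CF1F459EE00002A0045AF6B88d0 /dev/did/rdsk/d8

Both nodes see the same /dev/did/rdsk/d8 even though the underlying
c#t#d# paths differ.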
Hello Ceri,

Tuesday, January 23, 2007, 1:48:50 PM, you wrote:

CD> Actually the question is probably: why shouldn't I? I can fall back to
CD> the real device name, but d8 is a lot easier to remember than
CD> c1t0100000CF1F459EE00002A0045AF6B6Ed0 and has the advantage of being
CD> guaranteed to be the same across all nodes.
CD>
CD> What's the disadvantage?

Another layer? Less performance? OK, I'm only guessing.

--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Hi Robert,

On Tue, Jan 23, 2007 at 02:42:33PM +0100, Robert Milkowski wrote:
> CD> What's the disadvantage?
>
> Another layer? Less performance? OK, I'm only guessing.

OK, as long as I didn't miss anything, thanks!

Ceri

--
That must be wonderful! I don't understand it at all.
-- Moliere