My system, a dual Opteron running Solaris snv_27, panicked this afternoon with what seems to be a ZFS problem. Here's the text from the log:

Jan 7 15:00:46 hermione unix: [ID 836849 kern.notice]
Jan 7 15:00:46 hermione ^Mpanic[cpu1]/thread=fffffe800079bc80:
Jan 7 15:00:46 hermione genunix: [ID 809409 kern.notice] ZFS: I/O failure (read on raidz off 36050a3800: zio ffffffff83f69380 [L1 ZFS plain file] vdev=0 offset=36050a3800 size=4000L/1400P/1c00A fletcher4 lzjb LE contiguous birth=385 fill=95 cksum=1fd430d19

Was this indeed a ZFS-related panic, or something else? Is this a known problem?
Jerry Gardner wrote:
> My system, a dual Opteron running Solaris snv_27, panicked this afternoon with what seems to be a ZFS problem.
>
> Here's the text from the log:
>
> Jan 7 15:00:46 hermione unix: [ID 836849 kern.notice]
> Jan 7 15:00:46 hermione ^Mpanic[cpu1]/thread=fffffe800079bc80:
> Jan 7 15:00:46 hermione genunix: [ID 809409 kern.notice] ZFS: I/O failure (read on raidz off 36050a3800: zio ffffffff83f69380 [L1 ZFS plain file] vdev=0 offset=36050a3800 size=4000L/1400P/1c00A fletcher4 lzjb LE contiguous birth=385 fill=95 cksum=1fd430d19
>
> Was this indeed a ZFS-related panic, or something else? Is this a known problem?

Hi Jerry,
a quick search of internal sunsolve doesn't reveal much that could be similar. Could you please make the compressed crash dump available somewhere and let me know where it is -- I'll upload it and pass the initial analysis on.

While we're waiting for that, could you please fire up 'mdb -k' on that dump and run the following commands:

::status
zpool_version/S
*panic_thread::findstack -v

thanks and regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
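A quick sketch of how those commands get run, for anyone who hasn't pointed mdb at a saved crash dump before. The /var/crash/hermione directory and the dump number 0 are assumptions -- use whatever directory and number savecore actually reported on your system:

# cd /var/crash/hermione
# mdb -k 0
> ::status
> zpool_version/S
> *panic_thread::findstack -v
> $q

Given a numeric argument, mdb picks up the matching unix.0/vmcore.0 pair from the current directory; the same style of invocation shows up later in this thread as "mdb -k 4".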
I've got a repeatable ZFS panic too, but the steps to reproduce it are pretty ridiculous, and I'm on an old build (holding out for the s10u1 beta packages), so I haven't submitted it yet - I figured I may as well piggyback here, in case you're interested anyway.

Running build 28 on a home-built amd64 (MSI Neo4). ~1TB RAID-Z array on 4 7200.8 disks (see http://opensolaris.org/jive/thread.jspa?threadID=4808&tstart=15 for the full configuration details). I've got about 500GB of data in a few million files on the array.

I have a Windows XP PC that has the ZFS filesystem mounted via NFS (using SFU on the Windows client). I used TweakGDS to allow Google Desktop Search to crawl the drive, and the result was repeated ZFS panics. Copernic crawled the entire thing without a hitch, by the way.

I'll put a core on supportfiles or help out any other way I can, if you'd like.

# mdb -k 4
Loading modules: [ unix krtld genunix specfs dtrace pcplusmp ufs ip sctp usba fctl nca lofs zfs random nfs audiosup sppp crypto ptm ]
> ::status
debugging crash dump vmcore.4 (64-bit) from polaris
operating system: 5.11 snv_28 (i86pc)
panic message: BAD TRAP: type=e (#pf Page fault) rp=fffffe80002de170 addr=8 occurred in module "zfs" due to a NULL pointer dereference
dump content: kernel pages only
> *panic_thread::findstack -v
stack pointer for thread ffffffff85a60400: fffffe80002ddee0
  fffffe80002ddfd0 panic+0x9e()
  fffffe80002de060 die+0xeb(e, fffffe80002de170, 8, 0)
  fffffe80002de160 trap+0x1458(fffffe80002de170, 8, 0)
  fffffe80002de170 _cmntrap+0x140()
  fffffe80002de2b0 zap_get_leaf_byblk+0x3f(ffffffff97bfca10, 100000010000000, 0, 1)
  fffffe80002de300 zap_deref_leaf+0x60(ffffffff97bfca10, 3e948c1000000000, 0, 1)
  fffffe80002de3a0 fzap_lookup+0x6d(ffffffff97bfca10, ffffffff83d89ae0, 8, 1, fffffe80002de468)
  fffffe80002de420 zap_lookup+0xbe(ffffffff8433b610, 27c9c, ffffffff83d89ae0, 8, 1, fffffe80002de468)
  fffffe80002de4c0 zfs_dirent_lock+0x20d(fffffe80002de508, ffffffffb305d4a0, ffffffff83d89ae0, fffffe80002de500, 6)
  fffffe80002de530 zfs_dirlook+0xb3(ffffffffb305d4a0, ffffffff83d89ae0, fffffe80002de678)
  fffffe80002de5b0 zfs_lookup+0x138(ffffffffb305c3c0, ffffffff83d89ae0, fffffe80002de678, 0, 0, 0, ffffffff855b7608)
  fffffe80002de620 fop_lookup+0x50(ffffffffb305c3c0, ffffffff83d89ae0, fffffe80002de678, 0, 0, 0, ffffffff855b7608)
  fffffe80002de7d0 rfs3_lookup+0x171(fffffe80002de820, fffffe80002de8e0, ffffffff83826340, fffffe80002deb38, ffffffff855b7608)
  fffffe80002dea90 common_dispatch+0x332(fffffe80002deb38, ffffffff85494700, 2, 4, ffffffffc01a6f90, ffffffffc01a53d0)
  fffffe80002deab0 rfs_dispatch+0x2f(fffffe80002deb38, ffffffff85494700)
  fffffe80002deb90 svc_getreq+0x155(ffffffff85494700, ffffffff8b454280)
  fffffe80002debf0 svc_run+0x183(ffffffff84492e00)
  fffffe80002dec30 svc_do_run+0x95(1)
  fffffe80002deec0 nfssys+0x6f6(e, fec70fc8)
  fffffe80002def10 sys_syscall32+0x101()
Hi Ben. This looks rather intriguing. Could you send me the output of "zdb -bbc"? Also, what's the easiest way for me to get ahold of your core file?

Thanks for taking the time to let us know about this.

--Bill

On Mon, Jan 09, 2006 at 09:58:19PM -0800, Ben Lazarus wrote:
> I've got a repeatable ZFS panic too, but the steps to reproduce it are
> pretty ridiculous, and I'm on an old build (holding out for the s10u1
> beta packages), so I haven't submitted it yet - I figured I may as
> well piggyback here, in case you're interested anyway.
>
> Running build 28 on a home-built amd64 (MSI Neo4). ~1TB RAID-Z array
> on 4 7200.8 disks (see
> http://opensolaris.org/jive/thread.jspa?threadID=4808&tstart=15 for
> the full configuration details). I've got about 500GB of data in a
> few million files on the array.
>
> I have a Windows XP PC that has the ZFS filesystem mounted via NFS
> (using SFU on the Windows client). I used TweakGDS to allow Google
> Desktop Search to crawl the drive, and the result was repeated ZFS
> panics. Copernic crawled the entire thing without a hitch, by the
> way.
>
> I'll put a core on supportfiles or help out any other way I can, if you'd like.
>
> # mdb -k 4
> Loading modules: [ unix krtld genunix specfs dtrace pcplusmp ufs ip sctp usba fctl nca lofs zfs random nfs audiosup sppp crypto ptm ]
> > ::status
> debugging crash dump vmcore.4 (64-bit) from polaris
> operating system: 5.11 snv_28 (i86pc)
> panic message: BAD TRAP: type=e (#pf Page fault) rp=fffffe80002de170 addr=8 occurred in module "zfs" due to a NULL pointer dereference
> dump content: kernel pages only
> > *panic_thread::findstack -v
> stack pointer for thread ffffffff85a60400: fffffe80002ddee0
> fffffe80002ddfd0 panic+0x9e()
> fffffe80002de060 die+0xeb(e, fffffe80002de170, 8, 0)
> fffffe80002de160 trap+0x1458(fffffe80002de170, 8, 0)
> fffffe80002de170 _cmntrap+0x140()
> fffffe80002de2b0 zap_get_leaf_byblk+0x3f(ffffffff97bfca10, 100000010000000, 0, 1)
> fffffe80002de300 zap_deref_leaf+0x60(ffffffff97bfca10, 3e948c1000000000, 0, 1)
> fffffe80002de3a0 fzap_lookup+0x6d(ffffffff97bfca10, ffffffff83d89ae0, 8, 1, fffffe80002de468)
> fffffe80002de420 zap_lookup+0xbe(ffffffff8433b610, 27c9c, ffffffff83d89ae0, 8, 1, fffffe80002de468)
> fffffe80002de4c0 zfs_dirent_lock+0x20d(fffffe80002de508, ffffffffb305d4a0, ffffffff83d89ae0, fffffe80002de500, 6)
> fffffe80002de530 zfs_dirlook+0xb3(ffffffffb305d4a0, ffffffff83d89ae0, fffffe80002de678)
> fffffe80002de5b0 zfs_lookup+0x138(ffffffffb305c3c0, ffffffff83d89ae0, fffffe80002de678, 0, 0, 0, ffffffff855b7608)
> fffffe80002de620 fop_lookup+0x50(ffffffffb305c3c0, ffffffff83d89ae0, fffffe80002de678, 0, 0, 0, ffffffff855b7608)
> fffffe80002de7d0 rfs3_lookup+0x171(fffffe80002de820, fffffe80002de8e0, ffffffff83826340, fffffe80002deb38, ffffffff855b7608)
> fffffe80002dea90 common_dispatch+0x332(fffffe80002deb38, ffffffff85494700, 2, 4, ffffffffc01a6f90, ffffffffc01a53d0)
> fffffe80002deab0 rfs_dispatch+0x2f(fffffe80002deb38, ffffffff85494700)
> fffffe80002deb90 svc_getreq+0x155(ffffffff85494700, ffffffff8b454280)
> fffffe80002debf0 svc_run+0x183(ffffffff84492e00)
> fffffe80002dec30 svc_do_run+0x95(1)
> fffffe80002deec0 nfssys+0x6f6(e, fec70fc8)
> fffffe80002def10 sys_syscall32+0x101()
Hi Bill,

Well, I'd love to, but zdb itself cored :)

# zdb -bbc huge
Traversing all blocks to verify checksums and verify nothing leaked ...
error: ZFS: bad checksum (read on raidz off 9a38c36800: zio 7a30c0 [L0 DMU dnode] vdev=0 offset=9a38c36800 size=4000L/e00P/1400A fletcher4 lzjb LE contiguous birth=203412 fill=32 cksum=f9f44f373b:1b3c7e967cf34:1f62090ed26acf0:b334790340e66b95): error 50
zsh: IOT instruction (core dumped)  zdb -bbc huge

# mdb core
Loading modules: [ libumem.so.1 libzpool.so.1 libavl.so.1 libnvpair.so.1 libc.so.1 ld.so.1 ]
> ::status
debugging core file of zdb (64-bit) from polaris
file: /usr/sbin/amd64/zdb
initial argv: zdb -bbc huge
threading model: native threads
status: process terminated by SIGABRT (Abort)
> ::stack
libc.so.1`_lwp_kill+0xa()
libc.so.1`raise+0x20()
libc.so.1`abort+0xed()
libzpool.so.1`vpanic+0x5b()
libzpool.so.1`panic+0x9a()
libzpool.so.1`zio_done+0x458()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`zio_wait_for_children+0x97()
libzpool.so.1`zio_wait_children_done+0x1a()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`zio_vdev_io_assess+0x217()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`vdev_raidz_io_done+0x534()
libzpool.so.1`vdev_io_done+0x18()
libzpool.so.1`zio_vdev_io_done+0xb()
libzpool.so.1`taskq_thread+0x99()
libc.so.1`_thr_setup+0x70()
libc.so.1`_lwp_start()

I'm sending you one of the crash dumps, and the zdb core - they're en route to supportfiles, in cores/lazarus.tar.bz2. I estimate the upload will take ~5 hours.

By the way, is there a SUNWscat package I can run on these Express builds?

On 1/10/06, Bill Moore <Bill.Moore at sun.com> wrote:
> Hi Ben. This looks rather intriguing. Could you send me the output of
> "zdb -bbc"? Also, what's the easiest way for me to get ahold of your
> core file?
>
> Thanks for taking the time to let us know about this.
>
> --Bill
>
> On Mon, Jan 09, 2006 at 09:58:19PM -0800, Ben Lazarus wrote:
> > I've got a repeatable ZFS panic too, but the steps to reproduce it are
> > pretty ridiculous, and I'm on an old build (holding out for the s10u1
> > beta packages), so I haven't submitted it yet - I figured I may as
> > well piggyback here, in case you're interested anyway.
> >
> > Running build 28 on a home-built amd64 (MSI Neo4). ~1TB RAID-Z array
> > on 4 7200.8 disks (see
> > http://opensolaris.org/jive/thread.jspa?threadID=4808&tstart=15 for
> > the full configuration details). I've got about 500GB of data in a
> > few million files on the array.
> >
> > I have a Windows XP PC that has the ZFS filesystem mounted via NFS
> > (using SFU on the Windows client). I used TweakGDS to allow Google
> > Desktop Search to crawl the drive, and the result was repeated ZFS
> > panics. Copernic crawled the entire thing without a hitch, by the
> > way.
> >
> > I'll put a core on supportfiles or help out any other way I can, if
> > you'd like.
Hi Ben,

Ben Lazarus wrote:
> I've got a repeatable ZFS panic too, but the steps to reproduce it
> are pretty ridiculous, and I'm on an old build (holding out for the
> s10u1 beta packages), so I haven't submitted it yet - I figured I may
> as well piggyback here, in case you're interested anyway.

definitely! in fact, the more the merrier, since that helps us (ie, team zfs + hangers-on like myself) to make zfs better.

> Running build 28 on a home-built amd64 (MSI Neo4). ~1TB RAID-Z array
> on 4 7200.8 disks (see
> http://opensolaris.org/jive/thread.jspa?threadID=4808&tstart=15 for
> the full configuration details). I've got about 500GB of data in a
> few million files on the array.

hmmm, I'm jealous of your config.... must.... top.... it!

> I have a Windows XP PC that has the ZFS filesystem mounted via NFS
> (using SFU on the Windows client). I used TweakGDS to allow Google
> Desktop Search to crawl the drive, and the result was repeated ZFS
> panics. Copernic crawled the entire thing without a hitch, by the
> way.

This is interesting -- there are quite clearly 'is' and 'is not' use cases here. Can you reproduce the panic if you don't use TweakGDS? (Not knowing anything about TweakGDS, it would help to be able to rule it out.)

> I'll put a core on supportfiles or help out any other way I can, if
> you'd like.
>
> # mdb -k 4
> Loading modules: [ unix krtld genunix specfs dtrace pcplusmp ufs ip sctp usba fctl nca lofs zfs random nfs audiosup sppp crypto ptm ]
>> ::status
> debugging crash dump vmcore.4 (64-bit) from polaris
> operating system: 5.11 snv_28 (i86pc)
> panic message: BAD TRAP: type=e (#pf Page fault) rp=fffffe80002de170 addr=8 occurred in module "zfs" due to a NULL pointer dereference
> dump content: kernel pages only
>> *panic_thread::findstack -v
> stack pointer for thread ffffffff85a60400: fffffe80002ddee0
> fffffe80002ddfd0 panic+0x9e()
> fffffe80002de060 die+0xeb(e, fffffe80002de170, 8, 0)
> fffffe80002de160 trap+0x1458(fffffe80002de170, 8, 0)
> fffffe80002de170 _cmntrap+0x140()
> fffffe80002de2b0 zap_get_leaf_byblk+0x3f(ffffffff97bfca10, 100000010000000, 0, 1)
> fffffe80002de300 zap_deref_leaf+0x60(ffffffff97bfca10, 3e948c1000000000, 0, 1)
> fffffe80002de3a0 fzap_lookup+0x6d(ffffffff97bfca10, ffffffff83d89ae0, 8, 1, fffffe80002de468)

This looks quite interesting. If you could make the compressed dump available then we could have a better idea of what's going on.

thanks and regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
Looks like the email I sent this morning never made it to the list. It's attached below.

Since then, I came home from work and found that the FTP to supportfiles had died (for unrelated reasons). I've restarted it (cores/lazarus-RETRY.tar.bz2), and it'll be done in 4-5 hours or so.

To answer James' question publicly (I already answered in private): yes, you can technically repeat the panic without using TweakGDS, but in another sense you can't. All TweakGDS does is modify a Windows registry setting to 'uncripple' Google Desktop Search into being able to crawl network drives. Without changing that registry value, I couldn't run GDS against the NFS-mounted ZFS filesystem to begin with, so obviously I couldn't then reproduce the panic - but I could theoretically change the registry values manually using regedit instead of TweakGDS.

Email I sent earlier follows:

Hi Bill,

Well, I'd love to, but zdb itself cored :)

# zdb -bbc huge
Traversing all blocks to verify checksums and verify nothing leaked ...
error: ZFS: bad checksum (read on raidz off 9a38c36800: zio 7a30c0 [L0 DMU dnode] vdev=0 offset=9a38c36800 size=4000L/e00P/1400A fletcher4 lzjb LE contiguous birth=203412 fill=32 cksum=f9f44f373b:1b3c7e967cf34:1f62090ed26acf0:b334790340e66b95): error 50
zsh: IOT instruction (core dumped)  zdb -bbc huge

# mdb core
Loading modules: [ libumem.so.1 libzpool.so.1 libavl.so.1 libnvpair.so.1 libc.so.1 ld.so.1 ]
> ::status
debugging core file of zdb (64-bit) from polaris
file: /usr/sbin/amd64/zdb
initial argv: zdb -bbc huge
threading model: native threads
status: process terminated by SIGABRT (Abort)
> ::stack
libc.so.1`_lwp_kill+0xa()
libc.so.1`raise+0x20()
libc.so.1`abort+0xed()
libzpool.so.1`vpanic+0x5b()
libzpool.so.1`panic+0x9a()
libzpool.so.1`zio_done+0x458()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`zio_wait_for_children+0x97()
libzpool.so.1`zio_wait_children_done+0x1a()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`zio_vdev_io_assess+0x217()
libzpool.so.1`zio_next_stage+0x180()
libzpool.so.1`vdev_raidz_io_done+0x534()
libzpool.so.1`vdev_io_done+0x18()
libzpool.so.1`zio_vdev_io_done+0xb()
libzpool.so.1`taskq_thread+0x99()
libc.so.1`_thr_setup+0x70()
libc.so.1`_lwp_start()

I'm sending you one of the crash dumps, and the zdb core - they're en route to supportfiles, in cores/lazarus.tar.bz2. I estimate the upload will take ~5 hours.

By the way, is there a SUNWscat package I can run on these Express builds?
For Sun-internal folks, Ben's core archive is now on zion, in /cores/jmcp/ben_lazarus_zfs

We appear to be dying at usr/src/uts/common/fs/zfs/zap.c line 521:

513 static zap_leaf_t *
514 zap_get_leaf_byblk(zap_t *zap, uint64_t blkid, dmu_tx_t *tx, krw_t lt)
515 {
516         zap_leaf_t *l, *nl;
517
518         l = zap_get_leaf_byblk_impl(zap, blkid, tx, lt);
519
520         nl = l;
521         while (nl->lh_next != 0) {
                   ^^^^^^^^^^^^^^^^^^
522                 zap_leaf_t *nnl;
523                 nnl = zap_get_leaf_byblk_impl(zap, nl->lh_next, tx, lt);
524                 nl->l_next = nnl;
525                 nl = nnl;
526         }
527
528         return (l);
529 }

cheers,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
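To make the connection to Ben's BAD TRAP message ("addr=8 ... NULL pointer dereference") explicit, here is a purely illustrative sketch of the same function -- not the actual fix. The NULL check and the comments are additions, and they exist only to mark where the crash would land if zap_get_leaf_byblk_impl() handed back a NULL leaf pointer for a bogus block id like the 100000010000000 seen in the stack trace:

static zap_leaf_t *
zap_get_leaf_byblk(zap_t *zap, uint64_t blkid, dmu_tx_t *tx, krw_t lt)
{
	zap_leaf_t *l, *nl;

	l = zap_get_leaf_byblk_impl(zap, blkid, tx, lt);

	/*
	 * Hypothetical guard, for illustration only: if the impl routine
	 * came back with NULL, the nl->lh_next load below would fault at a
	 * small offset from address 0 -- consistent with the addr=8 page
	 * fault in the panic message above.
	 */
	if (l == NULL)
		return (NULL);

	nl = l;
	while (nl->lh_next != 0) {
		zap_leaf_t *nnl;

		/* the same concern applies to each nnl fetched here */
		nnl = zap_get_leaf_byblk_impl(zap, nl->lh_next, tx, lt);
		nl->l_next = nnl;
		nl = nnl;
	}

	return (l);
}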
I tried another zdb run - this time it completed:

# zdb -bbc huge
Traversing all blocks to verify checksums and verify nothing leaked ...

	No leaks (block sum matches space maps exactly)

	bp count:           4711311
	bp logical:    493140981248    avg: 104671
	bp physical:   491000182784    avg: 104217    compression: 1.00
	bp allocated:  656302158848    avg: 139303    compression: 0.75
	SPA allocated: 656302158848    used: 57.02%

Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
     3  36.0K   3.50K   6.00K      2K   10.29     0.00  deferred free
     1    512     512      1K      1K    1.00     0.00  object directory
     2     1K      1K      2K      1K    1.00     0.00  object array
     1    16K   1.50K      2K      2K   10.67     0.00  packed nvlist
     -      -       -       -       -       -        -  packed nvlist size
     1    16K     16K   22.0K   22.0K    1.00     0.00  bplist
     -      -       -       -       -       -        -  bplist header
     -      -       -       -       -       -        -  SPA space map header
 4.72K  20.1M   13.3M   18.8M   3.98K    1.52     0.00  SPA space map
     -      -       -       -       -       -        -  ZIL intent log
 32.9K   526M    134M    190M   5.79K    3.91     0.03  DMU dnode
     2     2K      1K      2K      1K    2.00     0.00  DMU objset
     -      -       -       -       -       -        -  DSL directory
     2     1K      1K      2K      1K    1.00     0.00  DSL directory child map
     1    512     512      1K      1K    1.00     0.00  DSL dataset snap map
     3   257K   23.0K     32K   10.7K   11.15     0.00  DSL props
     -      -       -       -       -       -        -  DSL dataset
     -      -       -       -       -       -        -  ZFS znode
     -      -       -       -       -       -        -  ZFS ACL
 4.40M   458G    457G    611G    139K    1.00    99.95  ZFS plain file
 59.0K   361M   61.7M    104M   1.75K    5.86     0.02  ZFS directory
     1    512     512      1K      1K    1.00     0.00  ZFS master node
     1    512     512      1K      1K    1.00     0.00  ZFS delete queue
     -      -       -       -       -       -        -  zvol object
     -      -       -       -       -       -        -  zvol prop
     -      -       -       -       -       -        -  other uint8[]
     -      -       -       -       -       -        -  other uint64[]
     -      -       -       -       -       -        -  other ZAP
 4.49M   459G    457G    611G    136K    1.00   100.00  Total

P.S. the forums desperately need a fixed-width font option.
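One way to read the summary numbers, assuming the 4-disk raidz (3 data + 1 parity) described earlier in the thread: the "compression: 0.75" on the allocated line is mostly RAID-Z parity overhead rather than data compression (the physical line already shows compression 1.00):

  bp logical:    493140981248   (what the filesystem sees)
  bp allocated:  656302158848   (what the pool actually wrote)

  656302158848 / 493140981248 ~= 1.33 ~= 4/3   (roughly one parity column per three data columns)
  493140981248 / 656302158848 ~= 0.75          (hence the reported ratio)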
From: Nicolas Williams
Date: 2006-Jan-11 15:23 UTC
Subject: Archive/font Re: [zfs-discuss] Re: Re: Re: ZFS Panic?
On Wed, Jan 11, 2006 at 06:29:57AM -0800, Ben Lazarus wrote:
> P.S. the forums desperately need a fixed-width font option.

The mailman archive is much nicer, IMO, than the one off the discussions page. Try it; follow the link in item (3) of the how-to-subscribe instructions: http://mail.opensolaris.org/mailman/listinfo
> I've got a repeatable ZFS panic too, but the steps to
> reproduce it are pretty ridiculous, and I'm on an old
> build (holding out for the s10u1 beta packages), so I
> haven't submitted it yet - I figured I may as well
> piggyback here, in case you're interested anyway.

I've filed a bug to track this issue, and established a hypothesis as to the root cause:

6371285 panic when nfs lookup operation attempted on plain file

Thanks for the bug report and dump!

--matt