Good morning,

I am testing how many files/sec I can create on ZFS with a script.
I carelessly let it run until ... it made my system crash.
Is that the expected behaviour? Is it then best practice to always use a quota?

I am using Solaris 10 6/06 s10s_u2wos_09 SPARC (build 9 of U2).

panic[cpu28]/thread=2a101f1dcc0: really out of space

000002a101f1cf70 zfs:zio_write_allocate_gang_members+33c (3004fa90d40, 200, 200, 2, 7bb6b380, 7bb89400)
  %l0-3: 000003004fa90d58 000000000000ffff 0000000000000002 0000000000000200
  %l4-7: 000003004fa9a880 000003004fa9aa00 0000000000000000 0000000000000002
000002a101f1d070 zfs:zio_write_allocate_gang_members+2dc (3004fa90fc0, a00, 200, 400, 7bb6b380, 7bb89400)
  %l0-3: 000003004fa90fd8 000000000000ffff 0000000000000003 0000000000000400
  %l4-7: 000003004fa9aa00 000003004fa9ac00 0000000000000000 0000000000000002
000002a101f1d170 zfs:zio_write_allocate_gang_members+2dc (3004fa91240, 1c00, 800, a00, 7bb6b380, 7bb89400)
  %l0-3: 000003004fa91258 000000000000ffff 0000000000000003 0000000000000a00
  %l4-7: 000003004fa9ac00 000006002fdecf80 0000000000000000 0000000000000002
000002a101f1d270 zfs:zio_write_compress+1ec (3004fa91240, 23e20b, 23e000, 1f001f, 3, 6002fdecf80)
  %l0-3: 000000000000ffff 000000000000000d 000000000000000e 0000000000001c00
  %l4-7: 0000000000000000 00000000001f0000 000000000000fc00 000000000000001f
000002a101f1d340 zfs:arc_write+e4 (3004fa91240, 60004166000, 7, 3, 2, 780)
  %l0-3: ffffffffffffffff 000000007bb41eec 00000300302a6440 0000030030072c08
  %l4-7: 000002a101f1d548 0000000000000004 0000000000000004 00000300302b1180
000002a101f1d450 zfs:zfsctl_ops_root+b7d1eac (300302a6440, 60005a18bc0, 3, 3, 7, 780)
  %l0-3: 0000060005a5bd00 0000000000000000 0000060005b191c0 00000300302a6548
  %l4-7: 000006002fdecf80 0000000000000014 000000000003255f 0000000000000000
000002a101f1d570 zfs:dnode_sync+35c (0, 0, 60005a18bc0, 3003bb5a2f8, 0, 4)
  %l0-3: 00000300302a6440 0000060005b19218 0000060005b19278 0000060005b19278
  %l4-7: 0000000000000000 0000060005b19218 0000000000000000 0000030030287120
000002a101f1d630 zfs:dmu_objset_sync_dnodes+6c (60005a5bd00, 60005a5bde0, 3003bb5a2f8, 60005b191c0, 300064047a8, 0)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000060004166108 000003003bb5a2f8
  %l4-7: 0000060005a5bde0 0000000000000000 0000000000000000 0000060005a18bc0
000002a101f1d6e0 zfs:dmu_objset_sync+54 (60005a5bd00, 3003bb5a2f8, 0, 0, 3002af06938, 780)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000060004166108 000003003bb5a2f8
  %l4-7: 0000060005a5bde0 0000000000000000 0000060005a5bde0 0000060005a5be60
000002a101f1d7f0 zfs:dsl_dataset_sync+c (6000339a000, 3003bb5a2f8, 6000339a090, 600058e3b38, 600058e3b38, 6000339a000)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000060004166108 000003003bb5a2f8
  %l4-7: 00000600058e3be8 00000600058e3bb8 00000600058e3b28 0000000000000000
000002a101f1d8a0 zfs:dsl_pool_sync+104 (600058e3a80, 780, 6000339a000, 3000cf2e648, 60004280b00, 60004280b28)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000060004166108 000003003bb5a2f8
  %l4-7: 00000600058e3be8 00000600058e3bb8 00000600058e3b28 0000000000000000
000002a101f1d950 zfs:spa_sync+e4 (60004166000, 780, 60004280b28, 3003bb5a2f8, 60004166178, 2a101f1dcbc)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000060004166108 0000000000000000
  %l4-7: 0000060005834580 00000600058e3a80 00000600058e3b48 0000000000000000
000002a101f1da00 zfs:txg_sync_thread+134 (600058e3a80, 780, 0, 2a101f1dab0, 600058e3b90, 600058e3b92)
  %l0-3: 00000600058e3ba0 00000600058e3b50 0000000000000000 00000600058e3b58
  %l4-7: 00000600058e3b96 00000600058e3b94 00000600058e3b48 0000000000000606

syncing file systems... done
dumping to /dev/dsk/c1t0d0s1, offset 108396544, content: kernel
 35% done: 120532 pages dumped, compression ratio 2.37, dump failed: error 28
rebooting...

-- 
Erik Vanden Meersch                     Sun Microsystems Belgium
Technology System Engineer              Lozenberg 15, B-1932 Zaventem
e-mail: erik.vandenmeersch at belgium.sun.com    phone: +32-2-7048835
mobile: 0479/950598
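The test script itself was not posted. For context, a files/sec creation test is usually nothing more than a tight create loop along the lines of the sketch below; the mountpoint and file count are assumptions for illustration, not values from the report.

#!/bin/sh
# Minimal sketch of a files/sec creation loop. The actual test script was
# not posted to the list; the mountpoint and file count are illustrative.
FS=/tank/testfs        # assumed ZFS mountpoint
COUNT=100000           # assumed number of files to create

start=`perl -e 'print time'`    # epoch seconds (Solaris 10 date(1) has no %s)
i=0
while [ $i -lt $COUNT ]; do
    touch $FS/file.$i
    i=`expr $i + 1`
done
end=`perl -e 'print time'`

elapsed=`expr $end - $start`
[ $elapsed -eq 0 ] && elapsed=1
echo "created $COUNT files in $elapsed seconds (`expr $COUNT / $elapsed` files/sec)"

Left unbounded, a loop like this will keep allocating until the pool has no free blocks left, which is the "really out of space" condition reported in the panic above.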
Eric Vanden Meersch writes:
> I carelessly let it run until ... it made my system crash.
> Is that the expected behaviour?

Not funny ;-)

Could be (based solely on the presence of zio_write_allocate_gang_members;
no deep analysis):

    6411261 busy intent log runs out of space on small pools

-r
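As for whether it is best practice to always set a quota: on a test pool, a quota (or a reservation carved out elsewhere) is a cheap way to keep a runaway create loop from driving the pool completely out of space. A sketch follows, assuming a hypothetical pool named tank and a test filesystem tank/testfs; the sizes are arbitrary.

# Hypothetical pool/dataset names and sizes, for illustration only.
# Cap the test filesystem well below the pool's capacity so a runaway
# create loop cannot consume every last block:
zfs set quota=8G tank/testfs

# Or keep headroom by reserving space in a sibling dataset that the
# test filesystem can never touch:
zfs create tank/headroom
zfs set reservation=2G tank/headroom

# Verify the settings:
zfs get quota,reservation tank/testfs tank/headroom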
On Thu, Jun 08, 2006 at 10:51:24AM +0200, Eric Vanden Meersch wrote:
> I am testing how many files/sec I can create on ZFS with a script.
> I carelessly let it run until ... it made my system crash.

This is a bug. Can you provide a crash dump?

--matt
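Note that the console output above ends with "dump failed: error 28" (ENOSPC), so this particular panic may not have left a dump for savecore to retrieve. Before reproducing, it is worth confirming that the dump device and savecore directory have enough room. A sketch of the usual checks; the disk slice name below is purely hypothetical.

# Show the current dump configuration: dump device, savecore directory,
# and whether savecore runs automatically on reboot.
dumpadm

# Example only: move the dump device to a larger slice (the device name
# is hypothetical) and make sure savecore is enabled on reboot.
dumpadm -d /dev/dsk/c1t1d0s1
dumpadm -y

# After the next panic, the saved dump appears as unix.N / vmcore.N in
# the savecore directory (by default /var/crash/`uname -n`).
ls /var/crash/`uname -n`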