search for: space_map

Displaying 15 results from an estimated 15 matches for "space_map".

2007 Nov 14
0
space_map.c 'ss == NULL' panic strikes back.
Hi. Someone recently reported a 'ss == NULL' panic in space_map.c/space_map_add() on FreeBSD's version of ZFS. I found that this problem was previously reported on Solaris and is already fixed. I verified it, and FreeBSD's version has this fix in place... http://src.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/uts/common/fs/zfs/space_...
2011 Feb 16
0
ZFS space_map
Hello all, I am trying to understand how the allocation of space_map happens. What I am trying to figure out is how the recursive part is handled. From what I understand, a new allocation (say, appending to a file) changes the space map by appending more alloc entries; those entries require extra space on disk, which changes the space map again. I understand...
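The segment bookkeeping behind this question can be modeled with a minimal sketch (hypothetical names and a plain list instead of the AVL tree the real space_map.c uses; the on-disk log of alloc/free records is not modeled). It also shows the invariants ('ss != NULL', 'sm_space' consistency) whose violation produces the panics quoted in the other threads on this page:

```python
# Toy model of a ZFS-style space map: a set of free segments plus a
# running total of tracked free space. Hypothetical names; not ZFS code.

class SpaceMap:
    def __init__(self, size):
        self.sm_size = size    # total space managed by this map
        self.sm_space = 0      # space currently tracked as free
        self.segments = []     # sorted, non-overlapping (start, size)

    def add(self, start, size):
        # Analogue of space_map_add(): record a freed range.
        # The map must never claim more free space than it manages.
        assert self.sm_space + size <= self.sm_size, \
            "sm->sm_space + size <= sm->sm_size"
        self.segments.append((start, size))
        self.segments.sort()
        self.sm_space += size

    def remove(self, start, size):
        # Analogue of space_map_remove(): allocate from a free range.
        # The 'ss != NULL' panics fire when no tracked segment
        # actually covers [start, start + size).
        ss = next((s for s in self.segments
                   if s[0] <= start and start + size <= s[0] + s[1]), None)
        assert ss is not None, "ss != NULL"
        self.segments.remove(ss)
        # Give back the unallocated pieces on either side of the range.
        if ss[0] < start:
            self.segments.append((ss[0], start - ss[0]))
        end = ss[0] + ss[1]
        if start + size < end:
            self.segments.append((start + size, end - (start + size)))
        self.segments.sort()
        self.sm_space -= size

sm = SpaceMap(1 << 30)
sm.add(0x1000, 0x4000)      # free 16 KiB at offset 0x1000
sm.remove(0x2000, 0x1000)   # allocate 4 KiB from the middle of it
```

As for the recursion the poster asks about: ZFS does not rewrite the map on every allocation; changes accumulate in per-txg in-core structures and are appended to the on-disk log during sync, so updates to the map itself converge rather than recurse indefinitely. That summary is from my reading of the code and may be incomplete.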
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239() space_map_load+0x17d() metaslab_activate+0x6f() metaslab_group_alloc+0x187() metaslab_alloc_dva+0xab() metaslab_alloc+0x51()...
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS: my server no longer boots because the ZFS spacemap is corrupt again. I just replaced the whole spacemap by recreating a new zpool from scratch and copying the data back with "zfs send & zfs receive". Did it copy the corrupt spacemap?! For me it's over now. I've lost too much time and money with this experimental filesystem. My version is Zpool
2008 May 26
2
indiana as nfs server: crash due to zfs
...rashed, and I got these messages: " May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice] May 22 02:18:57 ultra20 ^Mpanic[cpu0]/thread=ffffff0003d06c80: May 22 02:18:57 ultra20 genunix: [ID 603766 kern.notice] assertion failed: sm->sm_space == 0 (0x40000000 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 315 May 22 02:18:57 ultra20 unix: [ID 100000 kern.notice] May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06830 genunix:assfail3+b9 () May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d068e0 zfs:space_map_load+2c2 () May 22 02:18:57 ultra20 genunix: [I...
2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
...zfs/NB60/nb60openv mount -F zfs zfspool01/nb60openv /zfs/NB60/nb60openv The mount command now causes a panic: zfs: WARNING: ZFS replay transaction error 5, dataset zfspool01/nb60openv, seq 0x4180eb0, txtype 9 panic[cpu1]/thread=2a100b75cc0: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 000002a100b74c40 genunix:assfail+74 (7b252450, 7b252460, 7d, 183d400, 11eb000, 0) %l0-3: 0000000000000000 0000000000000000 00000000011e5368 000003000b6d2528 %l4-7: 00000000011eb000 0000000000000000 000000000186f800 0000000000000000 000002a100b74cf0 zfs:space_map_remove+b8 (60001db...
2007 Sep 19
3
ZFS panic when trying to import pool
...<pool> it panics. I also dd'd the disk and tested on another server with OpenSolaris B72 and still the same thing. Here is the panic backtrace: Stack Backtrace ----------------- vpanic() assfail3+0xb9(fffffffff7dde5f0, 6, fffffffff7dde840, 0, fffffffff7dde820, 153) space_map_load+0x2ef(ffffff008f1290b8, ffffffffc00fc5b0, 1, ffffff008f128d88, ffffff008dd58ab0) metaslab_activate+0x66(ffffff008f128d80, 8000000000000000) metaslab_group_alloc+0x24e(ffffff008f46bcc0, 400, 3fd0f1, 32dc18000, ffffff008fbeaa80, 0) metaslab_alloc_dva+0x192(ffffff008f2d1a80, ffffff008f235730, 200...
2007 Sep 14
3
space allocation vs. thin provisioning
...p; more detailed questions: In Jeff Bonwick's blog[1], he talks about free space management and metaslabs. Of particular interest is the statement: "ZFS divides the space on each virtual device into a few hundred regions called metaslabs." 1. http://blogs.sun.com/bonwick/entry/space_maps In Hu Yoshida's (CTO, Hitachi Data Systems) blog[2] there is a discussion of thin provisioning at the enterprise array level. Of particular interest is the statement: "Dynamic Provisioning is not a panacea for all our storage woes. There are applications that do a hard format or wri...
2007 Mar 21
4
HELP!! I can't mount my zpool!!
...n't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142 unix: [ID 100000 kern.notice] Mar 21 11:09:17 SERVER142 genunix: [ID 802836 kern.notice] fffffe800047e320 fffffffffb9ad0b9 () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e3a0 zfs:space_map_remove+1a3 () Mar 21 11:09:17 SERVER142 genu...
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and since recently it fails to boot - hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I had around since installation, I ran some zdb traversals over the rpool and zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed in the list with
2010 Jul 24
2
Severe ZFS corruption, help needed.
...tly some time after moving my mirrored pool from one device to another, the system crashes. From that time on the zpool cannot be used/imported - any attempt fails with: solaris assert: sm->space + size <= sm->size, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c, line: 93 Debugging reveals that: sm->sm_space = 2147483648 size = 34304 sm->sm_size = 2147483648 The space map is probably badly trashed, but I can't figure out how to skip this assertion to make the pool at least readable. FreeBSD zfsboot manages to load the kernel and modules off the pool,...
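Plugging the reported values into the assertion shows why it cannot hold (a trivial arithmetic check, not ZFS code; variable names mirror the kernel's sm->sm_space, sm->sm_size):

```python
# Values from the panic report above (assertion in space_map.c line 93:
# sm->sm_space + size <= sm->sm_size).
sm_space = 2147483648   # map already claims 2 GiB free...
sm_size  = 2147483648   # ...which is the entire metaslab
size     = 34304        # the log then asks to free 34304 bytes more

# The on-disk log would push tracked free space past the metaslab's
# total size, so the invariant fails and the kernel asserts.
assertion_holds = sm_space + size <= sm_size
print(assertion_holds)   # False: the space map is self-inconsistent
```

Note that sm_space already equals sm_size before the extra entry is applied, i.e. the map claims the whole metaslab is free; the corruption predates the failing record.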
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
...CC: zfs-crypto-discuss at opensolaris.org Estimated Hours: 0.0 dhcp-50-235 console login: NOTICE: zio_read_decrypt: called with crypt = 0 on objset = 0 panic[cpu0]/thread=ffffff0004469c80: assertion failed: sm->sm_space == space (0x20000000 == 0x1ff8a000), file: ../../common/fs/zfs/space_map.c, line: 359 ffffff00044697f0 genunix:assfail3+b9 () ffffff00044698a0 zfs:space_map_load+3a6 () ffffff00044698f0 zfs:metaslab_activate+93 () ffffff00044699b0 zfs:metaslab_group_alloc+24e () ffffff0004469a70 zfs:metaslab_alloc_dva+200 () ffffff0004469b30 zfs:metaslab_alloc+156 () ffffff0004469b90 z...
2008 Dec 15
15
Need Help Invalidating Uberblock
...backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 319 <system reboots> I've booted single user, moved /etc/zfs/zpool.cache out of the way, and now have access to the pool from the command line. However zdb fails with a similar assertion. root at kestrel:/opt$ zdb -U -bcv zones Traversing all blocks to verify checksums and...
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs? This message posted from opensolaris.org
2006 Jul 03
8
[raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim: 2 3 root at mir pts/9 ~ 78# uname -a SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP 0 3 root at mir pts/9 ~ 50# zfs list NAME USED AVAIL REFER MOUNTPOINT mirpool1 33.6G 0 137K /mirpool1 mirpool1/home 12.3G 0 12.3G /export/home mirpool1/install 12.9G