Displaying 8 results from an estimated 8 matches for "spacemaps".
2010 Nov 11 (8 messages): zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I ran a couple of
zfs clone commands in the terminal.
Now, after the system has rebooted, zfs will not import my pool any longer;
instead, the kernel panics again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
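A minimal sketch of the kind of recovery attempt this thread is circling, with "tank" as a placeholder pool name; zdb -e examines a pool that is not currently imported, and zpool import -F (where the platform supports the rewind option) tries to roll back to an earlier consistent transaction group:

  zdb -e -d tank        # inspect the pool's datasets without importing it
  zpool import -F tank  # recovery import, discarding the last few transaction groups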
2010 Sep 18 (6 messages): space_map again nuked!!
I'm really angry at ZFS:
My server no longer boots because the ZFS space map is corrupt again.
I had just replaced the whole space map by recreating the zpool from scratch and copying the data back with "zfs send & zfs receive".
Did it copy the corrupt space map?!
For me it's over now. I have lost too much time and money with this experimental filesystem.
My version is Zpool
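The rebuild the poster describes would look roughly like the sketch below, with "oldpool" and "newpool" as placeholder names. To the poster's question: zfs send | zfs receive transfers only logical dataset contents, while space maps are pool-level metadata that the destination pool regenerates as it writes, so a corrupt space map should not survive the copy.

  zfs snapshot -r oldpool@migrate                       # recursive snapshot of all datasets
  zfs send -R oldpool@migrate | zfs receive -F newpool  # replicate everything into the new pool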
2007 Jul 10 (1 message): ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation.
I started investigating the problem about two weeks ago: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
I found a workaround for now (changing recordsize), but I want a better solution.
The best solution would be a defragmentation tool, but I can see that it is not easy.
When a ZFS pool is fragmented:
1. the spa_sync function is
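The recordsize workaround mentioned above would look something like this, with "tank/data" as a hypothetical dataset name; note that the property only affects files written after the change:

  zfs set recordsize=8K tank/data   # match record size to the application's typical write size
  zfs get recordsize tank/data      # verify the new value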
2010 Jul 24 (2 messages): Severe ZFS corruption, help needed.
I'm running FreeBSD 8.1 with ZFS v15. Recently, some time after I moved my mirrored pool from one device to another, the system crashed. Since then the zpool cannot be used or imported; any attempt fails with:
solaris assert: sm->space + size <= sm->size, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c, line: 93
Debugging reveals that:
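A commonly suggested, and risky, way past this class of space-map assertion on FreeBSD is the zfs_recover tunable, assuming the port exposes it as vfs.zfs.recover; it downgrades some fatal assertions to warnings so the pool can be imported long enough to evacuate the data:

  echo 'vfs.zfs.recover="1"' >> /boot/loader.conf   # takes effect on the next boot
  zpool import mypool                               # "mypool" is a placeholder name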
2008 Feb 18 (4 messages): ZFS error handling - suggestion
Howdy,
I have several times had issues with consumer-grade PC hardware and ZFS not getting along. The problem is not the disks but the fact that I don't have ECC memory or end-to-end checking on the data path. What is happening is that random memory errors and bit flips are written out to disk, and when the data is read back ZFS reports a checksum failure:
pool: myth
state: ONLINE
status: One or more
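When the flipped bits land in file data rather than metadata, the usual sequence is to identify the damaged files, restore them from backup, and reset the counters; "myth" is the pool name from the report above:

  zpool status -v myth   # -v lists the files affected by checksum errors
  zpool scrub myth       # re-verify every block against its checksum
  zpool clear myth       # reset the error counters once the cause is addressed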
2007 Jul 07 (17 messages): Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that a plan might be in development to let a
raid-z grow its "raid-z'ness" to accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space and would like to add a 5th disk. The idea is to pop in disk 5
and have
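Single-disk raid-z expansion did not exist at the time; the closest available operation was adding a whole new raid-z vdev alongside the old one, which requires several disks at once. A sketch with placeholder pool and device names:

  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0   # stripes a second raid-z vdev into the pool; cannot be removed later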
2007 Sep 04 (23 messages): I/O freeze after a disk failure
Hi all,
yesterday we had a drive failure on an FC-AL JBOD with 14 drives.
Suddenly the zpool using that JBOD stopped responding to I/O requests, and we got tons of the following messages in /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52):
Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
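The workaround usually suggested in this situation is to fault the hung drive by hand so the pool can resume servicing I/O from the remaining devices; the pool and device names below are placeholders:

  zpool status -x              # show only pools that have problems
  zpool offline tank c1t4d0    # manually take the timing-out drive out of service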
2010 Jan 18 (18 messages): Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. Seems a bit
excessive to me...
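The arithmetic checks out; a quick back-of-envelope confirmation in shell, assuming binary units (the reservation is the larger of 32 MB and capacity/64):

  echo $((2 * 1024**4 / 64))   # 2 TiB / 64 = 34359738368 bytes, i.e. 32 GiB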