search for: spacemap

Displaying 8 results from an estimated 8 matches for "spacemap".

2010 Nov 11
8
zpool import panics
...Uberblock:
    magic = 0000000000bab10c
    version = 22
    txg = 464265
    guid_sum = 6102932533008274672
    timestamp = 1289468487 UTC = Thu Nov 11 10:41:27 2010
All DDTs are empty
Metaslabs:
    vdev 0           metaslabs 101
                     offset               spacemap         free
    ---------------  -------------------  ---------------  -------------
    metaslab 0       offset 0             spacemap 26      free 0
                     segments 0           maxsize 0        freepct 0%
    metaslab 1       offset 400...
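The dump above appears to be `zdb`-style metaslab output, where each metaslab line reports its spacemap object and free space. As a minimal sketch (the exact field layout and hex formatting are assumptions based on the snippet, not a spec), the per-metaslab fields can be pulled out like this:

```python
import re

# One metaslab line in the style of the dump above; the sample values
# come from the post, the parsing layout is an assumption.
line = "metaslab 0 offset 0 spacemap 26 free 0"

def parse_metaslab(line):
    """Extract metaslab index, offset, spacemap object id and free bytes.
    Offsets and free counts are assumed to be printed in hex, as zdb does."""
    m = re.match(r"metaslab\s+(\d+)\s+offset\s+(\w+)\s+spacemap\s+(\d+)\s+free\s+(\w+)",
                 line)
    if not m:
        return None
    idx, offset, sm, free = m.groups()
    return {"metaslab": int(idx), "offset": int(offset, 16),
            "spacemap": int(sm), "free": int(free, 16)}

print(parse_metaslab(line))
```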
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS: my server no longer boots because the ZFS spacemap is corrupt again. I just replaced the whole spacemap by recreating a new zpool from scratch and copying the data back with "zfs send & zfs receive". Did it copy the corrupt spacemap?! For me it is now over: I have lost too much time and money with this experimental filesystem. My version...
2007 Jul 10
1
ZFS pool fragmentation
...d block we need (for example 128k), the pool should remember this for some time (5 minutes) and stop asking for this kind of block. 2. We should be more careful about unloading space maps. At the end of the sync phase, space maps for metaslabs without the active flag are unloaded. On my fragmented pool, a spacemap with 800MB available (out of 2GB) is unloaded because there were no 128K blocks. This message posted from opensolaris.org
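The poster's first suggestion, remembering for a while that a metaslab could not satisfy a given block size, can be sketched as a small negative cache. The 5-minute TTL is taken from the post; the class and method names are purely illustrative, not ZFS code:

```python
import time

FAILURE_TTL = 5 * 60  # seconds; the "5 minutes" suggested in the post

class AllocFailureCache:
    """Remember (metaslab, size) allocation failures for a while,
    so the allocator can skip metaslabs known to lack such blocks."""

    def __init__(self):
        self._failed = {}  # (metaslab_id, size) -> expiry timestamp

    def record_failure(self, metaslab_id, size, now=None):
        now = time.time() if now is None else now
        self._failed[(metaslab_id, size)] = now + FAILURE_TTL

    def should_skip(self, metaslab_id, size, now=None):
        now = time.time() if now is None else now
        expiry = self._failed.get((metaslab_id, size))
        return expiry is not None and now < expiry

cache = AllocFailureCache()
cache.record_failure(metaslab_id=3, size=128 * 1024, now=0)
print(cache.should_skip(3, 128 * 1024, now=60))   # within the TTL: skip
print(cache.should_skip(3, 128 * 1024, now=600))  # TTL expired: retry
```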
2010 Jul 24
2
Severe ZFS corruption, help needed.
...ol. The error is persistent and shows up in: * MilaX - hangs; * OpenSolaris - hangs; * SystemRescueCD (zfs-fuse v23) - drops core with the same message; * NexentaStor - ignores the pool. What I am looking for is: * any information on how I can bring this pool to a readonly state (tweaking source, making the spacemap appear correct/full); * any pointers to technical specifications of how the name/value pair list on-disk structure should be processed, as it takes a lot of time for me to understand how this should work by rummaging through all the code involved.
2008 Feb 18
4
ZFS error handling - suggestion
Howdy, I have several times had issues with consumer-grade PC hardware and ZFS not getting along. The problem is not the disks, but the fact that I don't have ECC and end-to-end checking on the datapath. What is happening is that random memory errors and bit flips are written out to disk, and when read back again ZFS reports them as a checksum failure: pool: myth state: ONLINE status: One or more
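The checksum failures described above are detected because ZFS checksums every block; fletcher4 is one of the checksum algorithms ZFS uses. As a minimal sketch (not ZFS source, and padding of odd-length input is an assumption for illustration), fletcher4 runs four 64-bit accumulators over the data as 32-bit little-endian words, so even a single flipped bit changes the result:

```python
import struct

def fletcher4(data: bytes):
    """Sketch of a fletcher4-style checksum: four 64-bit accumulators
    over 32-bit little-endian words, wrapping modulo 2**64."""
    a = b = c = d = 0
    if len(data) % 4:                       # pad for the sketch
        data += b"\x00" * (4 - len(data) % 4)
    for (w,) in struct.iter_unpack("<I", data):
        a = (a + w) & 0xFFFFFFFFFFFFFFFF
        b = (b + a) & 0xFFFFFFFFFFFFFFFF
        c = (c + b) & 0xFFFFFFFFFFFFFFFF
        d = (d + c) & 0xFFFFFFFFFFFFFFFF
    return (a, b, c, d)

block = b"hello zfs!!!"        # 12 bytes -> three 32-bit words
flipped = bytearray(block)
flipped[0] ^= 0x01             # simulate a single bit flip in RAM
print(fletcher4(block) != fletcher4(bytes(flipped)))  # mismatch detected
```

Note that the checksum only detects the corruption on read-back; without ECC, a bit flip in RAM before the checksum is computed is written out as "valid" data, which is the poster's complaint.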
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through). I have heard here and there that there might be a plan in development to make it so that a raid-z can grow its "raid-z'ness" to accommodate a new disk added to it. Example: I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on space and would like to add a 5th disk. The idea is to pop in disk 5 and have
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on an fc-al JBOD with 14 drives. Suddenly the zpool using that JBOD stopped responding to I/O requests and we got tons of the following messages in /var/adm/messages: Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52): Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account an internal reservation of 32MB or 1/64 of the capacity of the pool, whichever is bigger. So on a 2TB hard disk, the reservation would be 32 gigabytes. Seems a bit excessive to me... -- Jesus Cea Avion <jcea@jcea.es>
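The arithmetic in the post checks out: the reservation rule it describes is max(32 MB, capacity / 64), and 1/64 of 2 TiB is 32 GiB. A quick sketch (the function name is illustrative, and the rule is taken from the post rather than from ZFS source):

```python
def zfs_reservation(capacity_bytes):
    """Internal reservation as described in the post:
    the larger of 32 MB and 1/64 of pool capacity."""
    return max(32 * 2**20, capacity_bytes // 64)

two_tb = 2 * 2**40
print(zfs_reservation(two_tb) == 32 * 2**30)  # True: 2 TiB / 64 = 32 GiB
```

For small pools the 32 MB floor dominates; the 1/64 fraction only takes over above 2 GiB of capacity, which is why the reservation scales to tens of gigabytes on large disks.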