Displaying 20 results from an estimated 500 matches similar to: "ZFS Space Map optimalization"
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
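A common first step in this situation, sketched below under the assumption that the readonly-import feature is available in the LiveUSB kernel (the pool name `rpool` is from the post; the flags are standard zpool options, not something the poster confirmed trying):

```shell
# Hypothetical recovery attempt from the oi_148a LiveUSB environment.
# -N skips mounting datasets; -o readonly=on avoids replaying the intent
# log and deferred frees, which is often what exhausts kernel RAM on import.
zpool import -N -o readonly=on -f rpool

# If the import succeeds, inspect the pool without writing to it:
zpool status -v rpool
```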
2007 Oct 26
1
data error in dataset 0. what's that?

Hi forum,
I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2010 Nov 11
8
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands in the terminal at it.
Now, after the system had rebooted, zfs will no longer import my pool
and instead the kernel panics again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
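The `-e` flag tells zdb to operate on a pool that is not currently imported, which is why it comes up in import-panic scenarios like this one. A minimal sketch of the kind of invocation the poster is describing (the pool name is a placeholder):

```shell
# Read-only consistency sweep over an exported/unimportable pool.
# -e : open the pool from its on-disk labels rather than the cachefile
# -b : traverse and account for all blocks
# -c : verify checksums of metadata blocks during the traversal
# -v : verbose output (can take hours on a large pool)
zdb -e -bcv mypool
```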
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06
because it's still running xen).
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. "zfs list -o space" showed figures at
2006 May 19
11
tracking error to file
In my testing, I've found the following error:
zpool status -v
pool: local
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: none requested
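The excerpt cuts off before the part of the output that answers the question: with `-v`, zpool status appends the affected paths after the device table. An illustrative (not actual) continuation, using the pool name from the post:

```shell
zpool status -v local
# ...device table as above, then for example:
#
#   errors: Permanent errors have been detected in the following files:
#
#           /local/path/to/damaged-file
```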
2010 Jun 02
11
ZFS recovery tools
Hi,
I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to
learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks
to some great forum posts from Victor Latushkin, however without his posts I would still be crying
at night...
I think the worst example is the zdb man page, which all it does is to ask you
2007 Sep 13
2
zpool versioning
Hi,
I was wondering if anyone would know if this is just an accounting-type
error with the recorded "version=" stored on disk, or if there
are/could-be any deeper issues with an "upgraded" zpool?
I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correctly
reported the pool as a "version=3" pool. I reinstalled the OS with a u4
(08/07), ran zpool
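One way to check for the kind of accounting mismatch the poster suspects is to compare what the userland tools report against what the vdev labels record; the pool name here is a placeholder:

```shell
# What the zpool command believes:
zpool get version mypool

# What the on-disk configuration records; a disagreement between the
# two would suggest the label was not rewritten on upgrade.
zdb -C mypool | grep -i version
```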
2009 Mar 01
8
invalid vdev configuration after power failure
What does it mean for a vdev to have an invalid configuration and how
can it be fixed or reset? As you can see, the following pool can no
longer be imported: (Note that the "last accessed by another system"
warning is because I moved these drives to my test workstation.)
~$ zpool import -f pool0
cannot import 'pool0': invalid vdev configuration
~$ zpool import
pool:
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.
A couple questions. First I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris at bob:~# zpool
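The upgrade sequence being described (pool to version 27, filesystems to version 5 after the b134-to-b145 update) would typically look like the following; the pool name is a placeholder:

```shell
# List the versions this software release supports:
zpool upgrade -v

# Upgrade the pool itself, then all filesystems in it recursively:
zpool upgrade bobpool
zfs upgrade -r bobpool

# Confirm the result:
zpool get version bobpool
```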
2008 May 04
2
Inconsistencies with scrub and zdb
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other pool is another story.
First lesson (I think) is you should scrub your pools, at least those
backed by
2010 Jul 06
3
Help with Faulted Zpool Call for Help(Cross post)
Hello list,
I posted this a few days ago on opensolaris-discuss@ list
I am posting here because there may be too much noise on the other lists.
I have been without this zfs set for a week now.
My main concern at this point is whether it is even possible to recover this zpool.
How does the metadata work? What tool could I use to rebuild the
corrupted parts,
or even find out which parts are corrupted?
most but
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo.
The problem first started a couple of weeks ago with some corruption on two filesystems in a 11 disk 10tb raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie.
I thought that my problems had something to do with de-duplication in 134, so I went about the process of
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty, FMA said it was in hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank" my primary pool had kicked in two hot spares because it was so discombobulated.
------------------- EMAIL -------------------
List of faulty resources:
2009 Aug 02
2
zdb assertion failure/zpool recovery
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi,
I have a corrupt pool, which lives on a .vdi file of a VirtualBox. IIRC
the corruption (i.e. pool being not importable) was caused when I killed
virtual box, because it was hung.
This pool consists of a single vdev and I would really like to get some
files out of that thing. So I tried running zdb, but this fails with an
assertion failure:
2010 Feb 24
3
How to know the recordsize of a file
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I would like to know the blocksize of a particular file. I know the
blocksize for a particular file is decided at creation time, as a function
of the write sizes done and the recordsize property of the dataset.
How can I access that information? Some zdb magic?
- --
Jesus Cea Avion _/_/ _/_/_/ _/_/_/
jcea at
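There is indeed some zdb magic for this; a sketch under the assumption that the file lives on an importable dataset (paths and names below are placeholders):

```shell
# Find the file's object number with ls -i:
ls -i /tank/fs/somefile
# -> e.g. "12345 /tank/fs/somefile"

# Dump that object's dnode; the 'dblk' column in the output shows
# the data block size actually in use for the file:
zdb -ddddd tank/fs 12345
```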
2009 Jan 15
2
zfs drive keeps failing between export and import
I have a zpool that consists for a two-drive mirror. The two times I
took the zpool offline, I had to resilver one of the drives (the same
drive both times) when I imported it back. All drives in the pool
show no read, write, or checksum errors and are new, so I'm looking to
a software problem before hardware. Both drives are encrypted geli
devices. I tried to reproduce the error with 1GB
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a
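The reported 1.13x ratio can be cross-checked against the dedup table itself; using the pool name from the post:

```shell
# -D prints a dedup-table (DDT) summary; -DD adds a histogram of
# reference counts, which shows how much data is actually shared.
zdb -DD TestPool

# The same ratio as a read-only pool property:
zpool get dedupratio TestPool
```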
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try and replace the log on a running
system. My