Displaying 20 results from an estimated 200 matches similar to: "Need Help Invalidating Uberblock"
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi,
my system (Solaris b77) was physically destroyed and I lost the data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) I can use to at least partially recover some data.
thanks in advance for
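A first diagnostic step, sketched here with an example device path (not taken from the post), is to dump whatever ZFS labels survive on the detached device; a detached vdev usually still carries label metadata even though its active uberblocks have been invalidated:
  # Print the ZFS labels on the detached device (path is an example)
  zdb -l /dev/dsk/c1t1d0s0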
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2).
The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
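As a hedged sketch (pool and device names are examples, not from the post), the usual single-drive replacement sequence for a raidz is shown below; with two members broken at once, a raidz1 has lost more than its single parity can cover, which is why the replace cannot complete:
  # Check which members are actually faulted before touching anything
  zpool status -v tank
  # Replace one failed member with the new 500 GB disk
  zpool replace tank c2t3d0 c2t7d0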
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list,
and I'd like someone to confirm or reject the discussed statement.
Paraphrasing in my words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to rollback to. When ashift=12 is
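Completing the arithmetic the paraphrase implies (my reading of it, not a statement from the thread): an uberblock slot occupies the larger of 1KB and one physical sector, so
  ashift=9  (512 B sectors):  128KB / 1KB = 128 uberblock slots
  ashift=12 (4 KiB sectors):  128KB / 4KB = 32 uberblock slots
In other words, a 4K-sector pool keeps roughly a quarter as many TXGs to roll back to.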
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz built out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
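A hedged way to compare what the pools and the on-disk labels report after a half-finished upgrade (the device path is an example):
  zpool upgrade                              # lists pools not yet at the latest on-disk version
  zpool get version stor home
  zdb -l /dev/dsk/c0t0d0s0 | grep version    # version as recorded in the vdev label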
2007 Feb 06
4
The ZFS MOS and how DNODES are stored
ZFS documentation lists snapshot limits on any single file system in a pool at 2**48 snaps, and that seems to logically imply that a snap on a file system does not require an update to the pool's currently active uberblock. That is to say, if we take a snapshot of a file system in a pool, and then make any changes to that file system, the copy-on-write behavior induced by the changes will
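For poking at this directly, a hedged sketch (the pool name is an example): zdb can dump the MOS and the dnodes it contains, which makes it easier to see what actually changes on disk when a snapshot is taken:
  # Dump object 1 of the MOS (the object directory) in full detail
  zdb -dddd tank 1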
2009 Jun 30
21
ZFS, power failures, and UPSes
Hello,
I've looked around Google and the zfs-discuss archives but have not been
able to find a good answer to this question (and the related questions
that follow it):
How well does ZFS handle unexpected power failures? (e.g. environmental
power failures, power supply dying, etc.)
Does it consistently recover gracefully?
Should having a UPS be considered a (strong) recommendation or
2012 Jun 18
1
Restore destroyed snapshot ???
OK, I am a butt-head and accidentally destroyed my last snapshot of a
replicated ZFS dataset. The dataset is NOT mounted and other than a
resilver going on, there is no I/O going on to this dataset. Is there
any way to roll back and get my latest snapshot back?
from zpool history -i:
2012-06-18.10:34:00 zfs destroy xxx@1339668001
2012-06-18.10:34:00 [internal destroy txg:2213852] dataset =
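As far as I know there is no supported way to un-destroy a snapshot; the only theoretical avenue is rewinding the entire pool to a TXG before the destroy (the txg is visible in the internal history above), which discards everything written since and only works while those older TXGs are still intact on disk. A heavily hedged sketch, with the pool name as a placeholder and flags whose availability varies by build:
  zpool export tank
  # -F asks for a rewind; some builds also accept -T <txg> to pick an explicit txg,
  # and -o readonly=on / -N avoid writing to or mounting the pool
  zpool import -o readonly=on -N -F tank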
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done, and best practices
should be followed.
So get well acquainted with this first:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then, if you must, this could soothe or sting:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
So drive carefully.
-r
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
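One hedged approach from here (file paths are examples): expose the ddrescue images as block devices and point zpool import at them, read-only where the build supports it, so nothing on the images gets modified:
  # Attach each rescued image as a loopback device (Solaris/illumos)
  lofiadm -a /recovery/bank0-disk1.img
  lofiadm -a /recovery/bank0-disk2.img
  lofiadm -a /recovery/bank0-disk3.img
  # Let import search that directory; -F asks for a rewind to an older txg
  zpool import -d /dev/lofi -o readonly=on -f -F bank0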
2010 Jan 12
6
x4500/x4540 does the internal controllers have a bbu?
Has anyone worked with an x4500/x4540 and can say whether the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure.
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an x86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
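When a pool was last active on a different host (here the dead x86 box), the import normally has to be forced; a minimal hedged sketch using the pool name from the listing:
  # -f overrides the "pool may be in use from another system" safety check
  zpool import -f tank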
2007 Jun 21
9
Undo/reverse zpool create
Hi,
If I add an entire disk to a new pool by doing "zpool create", is this reversible?
I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive?
Joubert
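For what it's worth, zpool create immediately writes fresh labels (and, for a whole disk, a new EFI label) over the front and back of the disk, so it is effectively destructive for whatever pool was there before. A hedged way to check what, if anything, is still recognisable on such a disk (device path is an example):
  # Any surviving ZFS labels on the disk
  zdb -l /dev/dsk/c2t0d0s0
  # Whether an old pool is still visible as importable from that device directory
  zpool import -d /dev/dsk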
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
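As far as I know a log device can belong to only one pool, so sharing one SSD pair across three pools means slicing the SSDs and giving each pool its own mirrored pair of slices. A hedged sketch with hypothetical pool and slice names:
  zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
  zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
  zpool add pool3 log mirror c4t0d0s3 c4t1d0s3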
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2012 Feb 29
2
peer probe fails
Hi,
Unable to do a peer probe... and unable to figure out what the
reason is from the gluster log.
Can someone help?
1) This is what I was trying...
gluster> peer probe llm19.in.ibm.com
Probe unsuccessful
Probe returned with unknown errno 107
gluster> peer probe 9.124.111.25
Probe unsuccessful
Probe returned with unknown errno 107
gluster> peer status
Number of Peers: 1
Hostname:
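errno 107 is ENOTCONN ("Transport endpoint is not connected"), which usually means glusterd on the other host cannot be reached. A hedged first round of checks (the service command varies by distro):
  # On both hosts: is the management daemon running?
  service glusterd status
  # Is the glusterd management port reachable (firewall permitting)?
  telnet llm19.in.ibm.com 24007
  # What does each side currently believe about its peers?
  gluster peer status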
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI-based
storage (for failover, cost, ease of expansion, and so on). As part of
this we would like to use multipathing for extra reliability, and I am
not sure how we want to configure it.
Our iSCSI backend only supports multiple sessions per target, not
multiple connections per session (and my understanding is that the
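On Solaris, path failover across several iSCSI sessions is normally a job for MPxIO (scsi_vhci) underneath ZFS rather than something ZFS does itself. A hedged sketch (the portal addresses are examples):
  # One discovery address per storage network so two sessions/paths are created
  iscsiadm add discovery-address 192.168.1.10:3260
  iscsiadm add discovery-address 192.168.2.10:3260
  iscsiadm modify discovery --sendtargets enable
  # With MPxIO enabled for iSCSI, the paths should collapse into one multipathed LU
  mpathadm list lu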
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2007 Jan 10
0
ZFS and HDS ShadowImage
Hi Derek,
Here's the latest email I've received from the zfs-discuss alias.
------------- Begin Forwarded Message -------------
Date: Mon, 18 Sep 2006 23:55:27 -0400
From: Jonathan Edwards <Jonathan.Edwards@sun.com>
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
To: Eric Schrock <eric.schrock@sun.com>
Cc: zfs-discuss@opensolaris.org, Torrey McMahon
2006 Apr 21
2
omega on debian 0.9.5
Hi,
I've tested omega (CGI) on several SQL databases on my local machine, which
runs Ubuntu, using version 0.9.4 (installed using apt), and I was
really satisfied with the results I obtained. I customized the "query"
template a bit.
But once I installed omega on the server (using apt for Debian, and omega
version 0.9.5), I discovered that the relevance ranking didn't work, as
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi,
Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?
I have two servers with very different hardware (one uses software RAID
and the other does not), and after a zpool upgrade there is no way to get the
server booting.
Did I miss something when upgrading?
I cannot get the error message at the moment. I reinstalled the RAID
server under Linux and the other
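Without the error message this is only a guess, but a common cause is that zpool upgrade enables pool features the installed boot blocks do not understand, so the loader has to be reinstalled after the upgrade. A hedged sketch for a GPT disk (disk name and partition index are examples):
  # From the running system or the install media's live shell
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0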