
Displaying 20 results from an estimated 600 matches similar to: "How many "rollback" TXGs in a ring for 4k drives?"

2012 Jan 11
0
Clarifications wanted for ZFS spec
I''m reading the "ZFS On-disk Format" PDF (dated 2006 - are there newer releases?), and have some questions regarding whether it is outdated: 1) On page 16 it has the following phrase (which I think is in general invalid): The value stored in offset is the offset in terms of sectors (512 byte blocks). To find the physical block byte offset from the beginning of a slice,
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
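A common first step suggested in threads like this is to see what the on-disk labels and uberblocks actually say before modifying anything - a read-only sketch, with hypothetical paths, assuming the backing file is still readable:

    # dump the four vdev labels from the backing file
    zdb -l /ufs/path/to/pool-file
    # on builds where zdb can still open the damaged pool, dump the
    # active uberblock (and its txg) with extra verbosity
    zdb -uuu poolname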
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06 because it's still running xen). I sent it twice, because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
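For context, the comparison described here boils down to replicating the same stream into both pools and diffing the space accounting - a sketch with hypothetical pool and dataset names:

    # send the same snapshot into an ashift=9 pool and an ashift=12 pool
    zfs send srcpool/vol@snap | zfs recv pool9/vol
    zfs send srcpool/vol@snap | zfs recv pool12/vol
    # compare the space accounting columns (USED, USEDDS, USEDCHILD, ...)
    zfs list -o space -r pool9 pool12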
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process: hydra# zpool import pool: tank id:
2012 Jun 18
1
Restore destroyed snapshot ???
OK, I am a butt-head and accidentally destroyed my last snapshot of a replicated ZFS dataset. The dataset is NOT mounted and other than a resilver going on, there is no I/O going on to this dataset. Is there any way to roll back and get my latest snapshot back? from zpool history -i: 2012-06-18.10:34:00 zfs destroy xxx@1339668001 2012-06-18.10:34:00 [internal destroy txg:2213852] dataset =
2010 Jan 12
6
x4500/x4540: do the internal controllers have a BBU?
Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write-cache on the internal HDDs and SSDs to prevent data corruption in case of a power failure. -- This message posted from opensolaris.org
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a severe performance drop on our ZFS storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the zfs level) we upgraded our NAS head from opensolaris b57
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos-based platform
2. update ZFS in libfsimage from illumos for pygrub diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk --- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100 +++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400 @@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What would you do next to try and recover this zfs pool? I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was composed of four 1.5 TiB disks. One disk is totally dead. Another had SMART errors, but using GNU ddrescue I was able to copy all the data off successfully. I have copied all 3 remaining disks as images using
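As an aside for anyone in a similar spot, the image-then-import approach described here usually looks something like the sketch below; device names and paths are made up, and the read-only and rewind import options may not exist on older builds:

    # image each surviving disk with GNU ddrescue, keeping a map file
    ddrescue -r3 /dev/rdsk/c2t1d0 /images/bank0-disk1.img /images/bank0-disk1.map
    # then point zpool import at the image directory, read-only and with
    # txg rewind where the build supports them
    zpool import -d /images -o readonly=on -F bank0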
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11, will zpool let me create a new pool with ashift=12 out of the box or will I need to play around with a patched zpool binary (or the iSCSI loopback)? -- Dave Pooser Manager of Information Services Alford Media http://www.alfordmedia.com
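Whatever the out-of-the-box behaviour turns out to be, it is easy to verify which ashift a freshly created pool actually got - a quick sketch with a hypothetical pool and device name:

    # create a test pool on one of the 3TB drives, then inspect the
    # cached config; ashift: 12 means 4K-aligned allocations
    zpool create testpool c0t5000C500ABCDEF12d0
    zdb -C testpool | grep ashift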
2007 Feb 06
4
The ZFS MOS and how DNODES are stored
ZFS documentation lists snapshot limits on any single file system in a pool at 2**48 snaps, and that seems to logically imply that a snap on a file system does not require an update to the pool's currently active uberblock. That is to say, that if we take a snapshot of a file system in a pool, and then make any changes to that file system, the copy on write behavior induced by the changes will
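One way to poke at this empirically is to watch the active uberblock's txg around a snapshot - with the caveat that txgs advance on their own every few seconds, so this only shows that a snapshot is committed as part of an ordinary txg (and thus an uberblock update), not that it forces an extra one. A sketch with hypothetical names:

    # show the currently active uberblock, including its txg
    zdb -u tank
    # take a snapshot, wait for the txg to sync, and look again
    zfs snapshot tank/fs@demo
    sleep 10
    zdb -u tank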
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2). The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had zfs start a replace operation, which failed at about 2% because there were two broken
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi, my system (solaris b77) was physically destroyed and I lost the data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like tct http://www.porcupine.org/forensics/) I can use to recover at least some of the data. thanks in advance for
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel, Apparently your data is represented by rather small files (thus many small data blocks), so the proportion of metadata is relatively high, and your <4k blocks are now using at least 4k disk space. For data with small blocks (a 4k volume on an ashift=12 pool) I saw metadata use up most of my drive - becoming equal to data size. Just for the sake of completeness, I brought up a
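The effect described here is round-up arithmetic: on an ashift=12 pool every allocation is padded to a multiple of 2^ashift bytes, which hits small (often compressed) metadata blocks hardest. A sketch with illustrative numbers:

    # a hypothetical 1.5K compressed metadata block on an ashift=12 pool
    ashift=12
    blocksize=1536                      # logical size in bytes (example)
    sector=$(( 1 << ashift ))
    alloc=$(( (blocksize + sector - 1) / sector * sector ))
    echo "a ${blocksize}-byte block occupies ${alloc} bytes on disk"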
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and recently it stopped booting - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi, Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was upgraded from 9.2-RELEASE? I have two servers with very different hardware (one uses soft RAID and the other does not), and after a zpool upgrade there is no way to get either server booting. Did I miss something when upgrading? I cannot get the error message at the moment. I reinstalled the RAID server under Linux and the other
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file server on it for learning purposes, and I moved almost all of my data to it. Yesterday, and naturally after no longer having backups of the data in the server, I had a controller failure (SiS 180 (oh, the quality)) and the HDD was considered unplugged. When I noticed a few checksum failures on `zpool status` (including two on
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
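For anyone checking their own pool: ashift is recorded in the label as part of each top-level vdev's configuration, which is why zdb prints one value per vdev - a quick sketch (pool name is hypothetical):

    # dump the cached config; each top-level vdev carries its own ashift
    zdb -C tank | grep -E 'type:|ashift:'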
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4k sectors, or is the problem mixing 512-byte and 4k sector disks in one pool, or something else? I have seen a lot of discussion on the 4k issue but I haven't understood what the actual problem ZFS has with 4k sectors is. It's getting harder and harder to find large disks with 512-byte sectors, so what should we do? TIA...