Displaying 20 results from an estimated 1200 matches similar to: "Clarifications wanted for ZFS spec"
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list,
and I'd like someone to confirm or reject the discussed statement.
Paraphrasing in my words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to roll back to. When ashift=12 is
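(A rough back-of-the-envelope check of the slot arithmetic discussed in that thread, assuming each uberblock slot in the ring is sized MAX(1KB, 1 << ashift); the exact rule should be confirmed against the source:)
# 128KB uberblock ring, slot size assumed to be MAX(1KB, 1 << ashift)
echo $((128 * 1024 / (1 << 10)))   # ashift <= 10: 1KB slots -> 128 TXGs to roll back to
echo $((128 * 1024 / (1 << 12)))   # ashift = 12:  4KB slots ->  32 TXGs to roll back to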
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2011 Jan 04
0
zpool import hangs system
Hello,
I've been using NexentaStor Community Edition with no issues for a while now.
However, last week I was going to rebuild a different system, so I started to
copy all the data off it to a raidz2 volume on my CE system. This was going
fine until I noticed that the copy had stalled and the entire system was
non-responsive. I let it sit for several hours
with no
2010 Jul 16
6
Lost zpool after reboot
Hello,
I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134, on a Sun Ultra 40 M1 workstation. Three hard drives: 2 in a ZFS mirror, 1 shared with Windows.
For the last 2 days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed.
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our ZFS storage server.
We have 2 pools: pool 1, "stor", is a raidz out of 7 iSCSI nodes; "home" is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level) we upgraded our NAS head from OpenSolaris b57
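(A way to compare the version the pools report with what is actually recorded in the on-disk labels; a hedged sketch, with the pool names taken from the post and the device path as a placeholder:)
zpool upgrade                              # lists pools whose on-disk version is older than current
zpool get version stor home                # the version property of each pool
zdb -l /dev/rdsk/c0t0d0s0 | grep version   # version recorded in a vdev label (placeholder device)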
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all,
I am not sure my original mail got through to the list
(I haven't received it back), so I attach it below.
Anyhow, now I have a saved kernel crash dump of the system
panicking when it tries to - I believe - deferred-release
the corrupted deduped blocks which are no longer referenced
by the userdata/blockpointer tree.
As I previously wrote in my thread on unfixable
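(A workaround often suggested on the list in that era, and only a hedged, risky sketch here: relax the assertion behaviour via /etc/system long enough to import the pool and let the deferred frees drain:)
# /etc/system tunables sometimes used to survive ZFS assertion panics; use with care
set zfs:zfs_recover=1
set aok=1
# after a reboot, try a read-only import first if the platform supports it:
# zpool import -o readonly=on <pool>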
2007 Mar 30
0
On disk SMI & EFI label documentation
I'm not sure if this alias is only for discussing the Solaris ZFS
implementation or others as well. I'm writing my own ZFS code from scratch in
Java. I'll skip the reasons why I'm doing this in Java; let's just assume
I have some.
Is there any good documentation on the disk label structures? Right now
my code is just reading the ZFS labels and the nvlist data but when
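(For sanity-checking a from-scratch label parser, the layout from the old ZFS on-disk specification can be compared against zdb's view; a sketch, with the device path as a placeholder:)
# Each vdev carries 4 copies of a 256KB label: L0/L1 at the start, L2/L3 at the end.
# Inside a label: 8KB blank, 8KB boot block header, 112KB nvlist, 128KB uberblock ring.
zdb -l /dev/rdsk/c0t0d0s0                                # decode the nvlists of all four labels
dd if=/dev/rdsk/c0t0d0s0 of=label0.bin bs=128k count=2   # raw dump of label 0 (256KB)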
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
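(The approach usually discussed for invalidating an uberblock is to locate the uberblock ring in each label and overwrite the newest slots so an older TXG becomes active; the sketch below is hypothetical and destructive, with the pool file path and slot number N as placeholders:)
# The uberblock ring starts 128KB into each 256KB label; slots are 1KB for ashift <= 10.
zdb -l /path/to/pool-file-or-device        # note the txg values recorded in the labels
# Hypothetical, destructive: zero slot N of label 0 (repeat for all four labels,
# since import picks the highest-txg uberblock that still checksums correctly):
# dd if=/dev/zero of=/path/to/pool-file-or-device bs=1k seek=$((128 + N)) count=1 conv=notrunc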
2010 Feb 24
0
disks in zpool gone at the same time
Hi,
Yesterday all the disks in two of my zpools got disconnected.
They are not real disks, but LUNs from a StorageTek 2530 array.
What could that be: a failing LSI card, or the mpt driver in 2009.06?
After a reboot I got four disks in the FAILED state; zpool clear fixed
things with resilvering.
Here is how it started (/var/adm/messages)
Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info]
/pci at 0,0/pci10de,5d at
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently
it has started failing to boot: it hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
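(One hedged sketch of keeping the traversal in user space, where a failure only kills zdb rather than the kernel; the pool name and tunable value are placeholders:)
zdb -e -bcsvL rpool               # -e: examine the pool without importing it;
                                  # -b/-c: traverse and checksum blocks, -L: skip leak tracking
# sometimes suggested alongside: cap the ARC before retrying the import
# echo "set zfs:zfs_arc_max=0x20000000" >> /etc/system   # 512MB, arbitrary; then reboot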
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
I made a bad judgment and now my raidz pool is corrupted. I have a
raidz pool running on Opensolaris b85. I wanted to try out freenas 0.7
and tried to add my pool to freenas.
After adding the ZFS disk,
vdev and pool, I decided to back out and went back to OpenSolaris. Now
my raidz pool will not mount, and I got the following errors. I hope some
expert can help me recover from this error.
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9 and an old revision (2009_06,
because it's still running Xen).
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. "zfs list -o space" showed figures at
2007 Jan 10
0
ZFS and HDS ShadowImage
Hi Derek,
Here's the latest email I've received from the zfs-discuss alias.
------------- Begin Forwarded Message -------------
Date: Mon, 18 Sep 2006 23:55:27 -0400
From: Jonathan Edwards <Jonathan.Edwards@sun.com>
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
To: Eric Schrock <eric.schrock@sun.com>
Cc: zfs-discuss@opensolaris.org, Torrey McMahon
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file
server in it for learning purposes, and I moved almost all of my data
to it. Yesterday, and naturally after no longer having backups of the
data in the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD was considered unplugged. When I noticed a few
checksum failures in `zpool status` (including two on
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully this gives someone a hint at what the issue is. I have
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel,
Apparently your data is represented by rather small files (and thus
many small data blocks), so the proportion of metadata is relatively
high, and your <4k blocks are now using at least 4k of disk space each.
For data with small blocks (a 4k volume on an ashift=12 pool)
I saw metadata use up most of my drive, becoming equal to
the data size.
Just for the sake of completeness, I brought up a
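(A quick way to see the rounding effect being described; a rough illustration, not exact ZFS accounting, and "tank"/"tank/vol" are placeholder names:)
# Example: a ~1.5KB metadata block occupies 1.5KB on an ashift=9 vdev,
# but a full 4KB on an ashift=12 vdev -- nearly 3x inflation for that block.
zdb -C tank | grep -w ashift        # per-vdev ashift of the pool
zfs get volblocksize tank/vol       # data block size of the zvol
zfs list -o space tank/vol          # where the space is actually going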
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen: I have to run the installer in HVM mode, then I'm trying to get it back up in PV mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
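(The usual pattern when device names will change is to export before the switch and let import rediscover the vdevs from their labels; a hedged sketch with a placeholder pool name:)
zpool export tank                 # inside the hvm instance, before switching to pv
# after booting pv, import scans the labels rather than trusting old device names:
zpool import -d /dev/dsk tank     # -f is only needed if the pool was not cleanly exported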
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives and some normal 512B-sector drives, and was wondering if the ashift
can be set per vdev or only per pool. Theoretically, this would save me
some size on
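(For what it's worth, ashift is recorded per top-level vdev, and zdb's cached-config dump shows it directly; the pool name is a placeholder:)
zdb -C tank | grep -w ashift      # expect one ashift line per top-level vdev,
                                  # e.g. ashift: 9 for 512B-sector disks, ashift: 12 for 4KB-sector disks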
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
Hello,
I am a final-year computer engineering student and I am planning to implement
ZFS on Linux.
I have gone through the articles posted on Solaris. Please let me
know about the
feasibility of implementing ZFS on Linux.
Waiting for valuable replies.
Thanks in advance.
On 9/14/07, zfs-discuss-request at opensolaris.org
<zfs-discuss-request at opensolaris.org> wrote:
> Send