Similar to: Migrating zpool to new drives with 4K Sectors

Displaying 20 results from an estimated 1000 matches similar to: "Migrating zpool to new drives with 4K Sectors"

2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB-sector) drives and some normal 512B-sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
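
For reference, ashift is recorded per top-level vdev rather than pool-wide, and zdb will show each one. A minimal check, with "tank" standing in for the pool name:

    # Print the cached pool config and pull out the ashift
    # recorded on each top-level vdev:
    zdb -C tank | grep ashift
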
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4K sectors, or is the problem mixing 512-byte and 4K-sector disks in one pool, or something else? I have seen a lot of discussion on the 4K issue, but I haven't understood what the actual problem ZFS has with 4K sectors is. It's getting harder and harder to find large disks with 512-byte sectors, so what should we do? TIA...
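
For context, a sketch of forcing the larger alignment at pool creation; note that -o ashift is a ZFS-on-Linux/OpenZFS option (not the Solaris zpool of this era), and the device names are placeholders:

    # Create a pool aligned for 4K sectors regardless of what the
    # drives report, then verify what was actually chosen:
    zpool create -o ashift=12 tank mirror /dev/ada0 /dev/ada1
    zdb -C tank | grep ashift
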
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
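
For the archives: later pool versions (19 and up) made a lost slog recoverable without such heroics. A sketch, with "tank" and the device name as placeholders:

    # Import a pool whose separate log device is missing:
    zpool import -m tank
    # Remove a (still healthy) slog outright:
    zpool remove tank log-device-name
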
2011 Jul 13
4
How about 4KB disk sectors?
So, what is the story about 4KB disk sectors? Should such disks be avoided with ZFS? Or, no problem? Or, need to modify some config file before usage? -- This message posted from opensolaris.org
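
One way to see what a given drive actually reports, assuming smartmontools is installed (the device path is a placeholder):

    # Advanced-format drives typically show 512-byte logical,
    # 4096-byte physical sectors:
    smartctl -i /dev/sda | grep -i 'sector size'
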
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11, will zpool let me create a new pool with ashift=12 out of the box or will I need to play around with a patched zpool binary (or the iSCSI loopback)? -- Dave Pooser Manager of Information Services Alford Media http://www.alfordmedia.com
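
On Solaris 11 and illumos, the usual out-of-the-box lever is an sd.conf override that makes the sd driver report 4K physical sectors, so zpool create chooses ashift=12 on its own. The syntax below is from memory and the vendor/model string is a placeholder; verify against your release:

    # /kernel/drv/sd.conf
    # Vendor ID is padded to 8 characters, followed by the product ID.
    sd-config-list = "ATA     ST3000DM001", "physical-block-size:4096";
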
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list, and I'd like someone to confirm or reject the discussed statement. Paraphrasing in my words and understanding: "Labels, including uberblock rings, are fixed at 256KB each, of which 128KB is the UB ring. Normally there is 1KB of data in one UB, which gives 128 TXGs to roll back to. When ashift=12 is
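
For reference, the arithmetic behind the quoted statement:

    128 KB ring / 1 KB per uberblock = 128 rollback TXGs (ashift <= 10)
    128 KB ring / 4 KB per uberblock =  32 rollback TXGs (ashift = 12)
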
2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well, this is a new one... Illumos/OpenIndiana let me add a device as a hot spare that evidently has a different sector alignment than all of the other drives in the array. So now I'm at the point that I /need/ a hot spare and it doesn't look like I have it. And, worse, the other spares I have are all the same model as said hot spare. Is there anything I can do with this or
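
A sketch of diagnosing the mismatch, and of the OpenZFS workaround; pool and device names are placeholders, and -o ashift on replace may not exist on the Illumos build in question:

    # See what alignment the pool's vdevs were created with:
    zdb -C tank | grep ashift
    # On OpenZFS, force the replacement to the old alignment
    # (at a write-performance cost on 4K media):
    zpool replace -o ashift=9 tank old-disk new-disk
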
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151-based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB RAID-Z1, 3.5" local disks individually connected to an 8-port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4TB disks, and be happy. This was my original approach. However, I am totally unclear about the 512B vs. 4KB issue.
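
The one-at-a-time swap mentioned above looks roughly like this; pool and device names are placeholders:

    # Let the pool grow automatically once all members are bigger:
    zpool set autoexpand=on tank
    # Replace one disk, wait for the resilver to finish, repeat:
    zpool replace tank c2t0d0 c2t4d0
    zpool status tank
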
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel. Apparently your data is represented by rather small files (thus many small data blocks), so the proportion of metadata is relatively high, and your <4K blocks are now using at least 4K of disk space. For data with small blocks (a 4K volume on an ashift=12 pool) I saw metadata use up most of my drive - becoming equal to data size. Just for the sake of completeness, I brought up a
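
A way to see the block size and space accounting under discussion, with the dataset name as a placeholder:

    # volblocksize fixes the zvol's block size at creation; used vs.
    # referenced shows how much of the footprint is overhead:
    zfs get volblocksize,used,referenced,usedbydataset tank/myvol
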
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command "zpool list", it does not show any pool, and when I try to import again, it says a device is missing in the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath.

abcxxxx # zpool import
  pool: emcpool1
    id:
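
When a multipathing upgrade renames device nodes, pointing the import at the right device directory often resolves exactly this symptom; /dev/dsk is the Solaris default, and the PowerPath pseudo-devices must be visible there:

    # Scan an explicit device directory instead of the cached paths:
    zpool import -d /dev/dsk emcpool1
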
2010 Apr 21
2
HELP! zpool corrupted data
Hello, Due to a power outage, our file server running FreeBSD 8.0-p2 will no longer come up, due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0-p2 CD or the latest OpenSolaris snv_143 CD:

FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64
mfsbsd# zpool import
  pool: tank
    id: 1998957762692994918
 state: FAULTED
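
Worth noting: the snv_143 CD is new enough for recovery-mode import, which discards the last few transaction groups to reach a consistent state; -n reports whether that would succeed without actually doing it:

    # Dry run first, then the real rewind if it looks viable:
    zpool import -nF tank
    zpool import -F tank
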
2009 Aug 12
4
zpool import -f rpool hangs
I had an rpool with two SATA disks in a mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc. Unfortunately, the first disk, with the GRUB loader, failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
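
On builds new enough to support it (the option postdates this 2009 system), a read-only import avoids replaying anything that might be hanging the pool:

    # Import without writing, so the data can at least be copied off:
    zpool import -o readonly=on -f rpool
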
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9 and an old revision (2009_06, because it's still running Xen). I sent it twice because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
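
A way to put the two received copies side by side; pool and dataset names are placeholders:

    # Compare the space accounting of the ashift=9 and ashift=12 copies:
    zfs list -o space pool9/myvol pool12/myvol
    zfs get volblocksize,refreservation pool9/myvol pool12/myvol
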
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad-to-medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16GB of RAM, OpenSolaris upgraded to snv_134. The zpool
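
A first diagnostic step for this kind of problem is watching per-vdev load while reproducing it; "tank" is a placeholder:

    # Per-vdev throughput and queue activity, sampled every 5 seconds:
    zpool iostat -v tank 5
    # Confirm the log devices are present and healthy:
    zpool status tank
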
2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn't work. GRUB reboots the machine moments after loading stage2, and doesn't recognise the fstype when examining the disk loaded from an alternate source. This is with SX-151. Here's hoping a future version (with grub2?) resolves this, as well as lets us boot from raidz. Just a note for the archives in case it helps someone else get back the afternoon
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi, Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was upgraded from 9.2-RELEASE? I have two servers with very different hardware (one with soft RAID and the other without), and after a zpool upgrade there is no way to get the server booting. Did I miss something when upgrading? I cannot get the error message at the moment. I reinstalled the RAID server under Linux, and the other
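
A likely cause, assuming a GPT-partitioned root pool: after "zpool upgrade" the boot blocks must be refreshed, or gptzfsboot can no longer read the newer pool format. Device name and partition index are placeholders; repeat for each boot disk:

    # Reinstall the protective MBR and the ZFS GPT boot code:
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
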
2012 Nov 13
9
Intel DC S3700
2010 Nov 11
3
Booting fails with 'Can not read the pool label' error
I'm still trying to find a fix/workaround for the problem described in "Unable to mount root pool dataset" (http://opensolaris.org/jive/thread.jspa?messageID=492460). Since the Blade 1500's rpool is mirrored, I've decided to detach the second half of the mirror, relabel the disk, create an alternative rpool (rpool2) there, and copy the current BE (snv_134) using beadm
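
The detach-and-rebuild sequence described would look roughly like this; device names are placeholders, and the new pool still needs boot blocks installed before it can boot:

    # Split off half the mirror, relabel it, and build rpool2 there:
    zpool detach rpool c0t1d0s0
    zpool create rpool2 c0t1d0s0
    # Clone the current BE into the new pool:
    beadm create -p rpool2 snv_134-rpool2
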
2010 May 02
8
zpool mirror (dumb question)
Hi there! I am new to the list, and to OpenSolaris, as well as ZFS. I am creating a zpool/zfs to use on my NAS server, and basically I want some redundancy for my files/media. What I am looking to do is get a bunch of 2TB drives and mount them mirrored, and in a zpool, so that I don't have to worry about running out of room. (I know, pretty typical, I guess.) My problem is that
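
The usual shape of that setup is a pool of mirrored pairs, grown by adding more pairs as space runs low; device names are placeholders:

    # Start with one mirrored pair:
    zpool create tank mirror c1t0d0 c1t1d0
    # Later, extend capacity with another pair:
    zpool add tank mirror c1t2d0 c1t3d0
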
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 Update 3 servers panicked today with the following error:

Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125

The server saved a core file, and the resulting backtrace is listed below:

$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()